name | title | abstract | fulltext | keywords
---|---|---|---|---
334342 | Signed Digit Addition and Related Operations with Threshold Logic. | AbstractAssuming signed digit number representations, we investigate the implementation of some addition related operations assuming linear threshold networks. We measure the depth and size of the networks in terms of linear threshold gates. We show first that a depth-$2$ network with $O(n)$ size, weight, and fan-in complexities can perform signed digit symmetric functions. Consequently, assuming radix-$2$ signed digit representation, we show that the two operand addition can be performed by a threshold network of depth-$2$ having $O(n)$ size complexity and $O(1)$ weight and fan-in complexities. Furthermore, we show that, assuming radix-$(2n-1)$ signed digit representations, the multioperand addition can be computed by a depth-$2$ network with $O(n^3)$ size with the weight and fan-in complexities being polynomially bounded. Finally, we show that multiplication can be performed by a linear threshold network of depth-$3$ with the size of $O(n^3)$ requiring $O(n^3)$ weights and $O(n^2 \log n)$ fan-in. | Introduction
High performance addition and addition related operations, such as multiplication, play
an important role in the computer based computational paradigm. A major impediment
to improve the speed of arithmetic execution units incorporating addition and addition
related operations is the presence of carry and borrow chains. One solution for the elimination
of carry chains is the use of redundant representation of operands, proposed by
Avizienis in [1]. The Signed Digit (SD) number representation method allows, under certain
assumptions, the so called "totally parallel addition" [1], which limits the propagation
of the carries at the expense of some overhead in data storage space and in processing time
for the conversion of the results and potentially of the operands.
The redundant representation operates as follows. For any radix r ≥ 2, a signed-digit integer number X represented with n digits has the algebraic value X = Σ_{i=0}^{n−1} x_i r^i. Each digit x_i of the number X can assume its value in the digit set Σ_r = {−α, ..., −1, 0, 1, ..., α}. The cardinality of the set Σ_r is 2α + 1 and the maximum digit magnitude α must satisfy the relations stated in Equation (1)¹.
In order to have minimum redundancy, and as a consequence minimum storage overhead, one can assume that α = ⌈r/2⌉, but in order to break the carry chain, i.e., to have "totally parallel addition", the value of α should satisfy the relations stated in Equation (2).
Based on sign-digit representation, a number of high-speed architectures 2 have been re-
ported, see for example [2], [3], [4], [5], [6]. Thus far all the investigations in SD arithmetic
architectures assumed logic implementation with technologies that directly implement
Boolean gates. Currently other possibilities exist in VLSI for the implementation
of Boolean functions using threshold devices in CMOS technology [7], [8], [9], [10]. In
assuming Threshold Logic (TL) the basic processing element can be a Linear Threshold
¹ Note that for a given radix r it might be that α is not unique; therefore there can be more than one possible digit set.
On-Line, and parallel.
Gate³ (LTG) computing the Boolean function F(X) such that F(X) = sgn{Ψ(X)}, where Ψ(X) = Σ_{i=1}^{n} w_i x_i − ψ, the set of input variables is X = (x_1, x_2, ..., x_n), and the weights are W = (w_1, w_2, ..., w_n). Such an LTG contains a threshold value ψ, a summation device Σ computing Ψ(X), and a threshold element T computing sgn{Ψ(X)}.
Given that this technology may be promising, it is of interest to investigate new schemes applicable to such a new technology. To this end, assuming binary non-redundant representations, a number of recent proposals regarding addition and multiplication, see for example [13], [14], [15], [16], [17], [18], [19], [20], have been developed that assume threshold rather than Boolean logic.
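To make the LTG model concrete, here is a minimal Python sketch (my own illustration, not code from the paper) of a gate computing F(X) = sgn{Σ w_i x_i − ψ}, with the convention sgn{v} = 1 if v ≥ 0 and 0 otherwise.

```python
def ltg(x, w, psi):
    """Linear threshold gate: F(X) = sgn{sum_i w_i*x_i - psi},
    with sgn{v} = 1 if v >= 0 and 0 otherwise."""
    v = sum(wi * xi for wi, xi in zip(w, x)) - psi
    return 1 if v >= 0 else 0

# Example: a 3-input majority gate (all weights 1, threshold 2).
print(ltg([1, 0, 1], [1, 1, 1], 2))  # -> 1
print(ltg([1, 0, 0], [1, 1, 1], 2))  # -> 0
```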
Thus far there are no studies assuming redundant representations and TL. In this paper
we assume SD number representation and we investigate linear threshold networks for
addition, multi-operand addition, and multiplication. We assume that the operands are
n-SD numbers and we are mainly concerned in establishing the limits of the circuit designs
using threshold based networks. We measure the depth and the size of the networks we
propose in terms of LTGs.
The main contributions of our proposal can be summarized as:
• Any SD symmetric function can be implemented by a depth-2 feed-forward Linear Threshold Network (LTN) with O(n) size, weight, and fan-in values.
• Assuming radix-2 redundant operand representation, the addition of two n-SD numbers can be computed by a depth-2 LTN with O(n) size and O(1) weight and fan-in values.
• Assuming radix-(2n−1) operand representation, the multi-operand addition of n n-SD numbers can be computed by an explicit depth-2 LTN with the size in the order of O(n³), with the maximum weight value in the order of O(n³), and the maximum fan-in value in the order of O(n²).
• Assuming radix-(2n−1) representation, the multiplication of two n-SD numbers can be computed by an explicit depth-3 LTN with the size in the order of O(n³). The maximum weight value is in the order of O(n³) and the maximum fan-in value is in the order of O(n² log n).
³ Such a threshold gate corresponds to the Boolean output neuron introduced in the McCulloch-Pitts neural model [11], [12] with no learning features.
We also note here that while our results are primarily theoretical, there exist technology
proposals, see for example [10], which may implement at least some of the proposed
schemes, e.g., two operand addition.
The presentation is organized as follows: In Section 2 we discuss background information on Boolean symmetric functions and their threshold logic implementation, together with some preliminary results; in Section 3 we present schemes for the addition of radix-2 SD numbers; in Section 4 we study the multiplication of radix-2 SD numbers and present schemes for the multi-operand addition and the multiplication of radix-(2n−1) SD numbers; we conclude the presentation with some final remarks.
II. Background and Preliminaries
In order to make this presentation self consistent we introduce in this section the definition
of Boolean symmetric functions and some based implementation techniques that
we will use in our investigation.
Definition 1: A Boolean function of n variables F_s is symmetric if and only if for any permutation σ of <1, 2, ..., n>, F_s(x_1, x_2, ..., x_n) = F_s(x_σ(1), x_σ(2), ..., x_σ(n)).
For any n-input-variable symmetric Boolean function F_s the sum λ = Σ_{i=1}^{n} x_i ranges from 0 (all input variables are 0) to n (all input variables are 1). Inside this definition domain [0, n] there are r intervals [q_j, Q_j], j = 1, ..., r, where the function is equal to 1, and outside these intervals the function is 0. This is graphically depicted in Figure 1 and formally described by Equation (4):
F_s(λ) = 1 if q_j ≤ λ ≤ Q_j for some j ∈ {1, ..., r}, and 0 elsewhere.   (4)
The number of intervals depends on the function definition and we proved elsewhere [21] that for any Boolean symmetric function the maximum number of intervals r is upper bounded by ⌈(n+1)/2⌉.
Fig. 1. Interval Based Representation of F_s
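As an illustration of the interval-based description in Equation (4) and of the ⌈(n+1)/2⌉ bound, the following sketch (mine, with parity as an arbitrarily chosen example function) extracts the intervals [q_j, Q_j] from the value vector of a symmetric function.

```python
def intervals(values):
    """Given values[m] = F_s(m) for the sum lambda = m = 0..n, return the
    maximal intervals [q_j, Q_j] on which the symmetric function equals 1."""
    ivals, start = [], None
    for m, v in enumerate(values):
        if v == 1 and start is None:
            start = m
        elif v == 0 and start is not None:
            ivals.append((start, m - 1))
            start = None
    if start is not None:
        ivals.append((start, len(values) - 1))
    return ivals

n = 4
parity = [m % 2 for m in range(n + 1)]   # F_s(lambda) for 4-bit parity
ivs = intervals(parity)                  # [(1, 1), (3, 3)]
assert len(ivs) <= (n + 2) // 2          # r <= ceil((n + 1) / 2)
print(ivs)
```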
Definition 2: A Boolean function of n variables F_gs is generalized symmetric⁴ if it entirely depends on λ = Σ_{i=1}^{n} w_i x_i, the weighted sum of its input variables, with the w_i integer constants⁵.
In essence a generalized symmetric Boolean function F_gs is either a symmetric Boolean function or a non-symmetric Boolean function that can be transformed into a symmetric Boolean function by trivial transformations, e.g., assignment of different weight values to the inputs or input replication. F_gs can be described as a function of λ, and the definition domain extends from [0, n] to [0, λ_max], where λ_max = Σ_{i=1}^{n} w_i. All the results that hold true for symmetric Boolean functions can also be applied to generalized symmetric Boolean functions.
To clarify the generalized symmetric Boolean function concept let us consider the addition of four 2-bit operands producing a 4-bit result. The truth table and the schematic diagram for such a function are depicted in Figure 2. First it can be observed that in order to produce the sum bit at position 0 we need to consider only the bits in the first column (LSB position). It can be easily verified that the Boolean function computing the sum's bit s_0 is a symmetric Boolean function, because its value can be clearly determined by the integer value of the sum of the four LSBs.
⁴ This definition and also Definition 1 are not specific to functions with Boolean input variables. The symmetry is an intrinsic property of the function and does not depend on the input variable type. Therefore they apply also to functions of other types of input variables, e.g., integer, real.
⁵ The weights w_i can also be real numbers, but we have assumed here integer values because of practical considerations related to the LTG fabrication technology [7], [10].
Fig. 2. 4 2-bit Multi-operand Addition
Fig. 3. Interval Based Representation of s_1
This property however does not hold for the other sum bits. For example the Boolean function computing s_1 is not a symmetric Boolean function, as its value depends on the positioning of the inputs and cannot always be correctly determined from the plain count of input bits that are 1.
The s_1 function is however a generalized symmetric Boolean function, as it can be made to be a symmetric Boolean function if a weight of 2 is associated to the input bits in column 1. Consequently, the s_1 sum bit can be computed by a symmetric Boolean function s_1(λ), whose interval based representation is graphically depicted in Figure 3.
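The claim about s_1 can be verified exhaustively. The sketch below (my own check; the loop over all 256 input combinations is just brute force) confirms that the weighted sum λ, with weight 2 on the column-1 bits, determines s_1, whereas the plain unweighted count of 1-bits does not.

```python
from itertools import product

by_lambda, by_count = {}, {}
for bits in product((0, 1), repeat=8):            # (b1, b0) bits of 4 operands
    ops = [(bits[2 * i], bits[2 * i + 1]) for i in range(4)]
    total = sum(2 * b1 + b0 for b1, b0 in ops)    # exact 4-operand sum
    s1 = (total >> 1) & 1                         # its bit at position 1
    # weighted sum: weight 2 for column-1 bits, weight 1 for column-0 bits
    lam = sum(2 * b1 for b1, _ in ops) + sum(b0 for _, b0 in ops)
    cnt = sum(bits)                               # plain, unweighted bit count
    assert by_lambda.setdefault(lam, s1) == s1    # lambda determines s_1 ...
    by_count.setdefault(cnt, set()).add(s1)       # ... the plain count does not
print(any(len(v) > 1 for v in by_count.values()))  # -> True
```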
Given that symmetric (generalized or not) functions constitute a frequently used class of Boolean functions, and because they are expensive to implement in hardware in terms of area and delay, their implementation with feed-forward LTNs has been the subject of numerous theoretical and practical scientific investigations, see for example [22], [23], [24], [25], [16], [21].
The most network-size-efficient approach known so far for the depth-2 implementation of symmetric Boolean functions with TL is the telescopic sum method, introduced by Minnick in [23]. The method can be used for the implementation of any Boolean symmetric function and produces depth-2 feed-forward LTNs with the size in the order of O(n), measured in terms of LTGs, and with linear weight and fan-in values. We shortly describe this method by introducing the following lemma.
Lemma 1: [Minnick '61] Any Boolean symmetric function F_s, described as in Equation (4), can be implemented by a two-layer feed-forward LTN with a size complexity, measured in terms of LTGs, in the order of O(n), as stated by Equation (5).
A formal proof of Lemma 1 and implementation examples can be found in [26].
Given that we assume SD operands (that is, we consider functions with non-Boolean input variables) we need to map them into general Boolean functions. In order to achieve this mapping we have first to choose a representation for the SDs. One possible representation is the 2's complement [27]⁶.
Given a fixed radix r, a SD number is represented as (s_{n−1}, ..., s_1, s_0). In this presentation we will consider that any digit s_i can assume a value in the symmetric⁷ digit set {−α, ..., −1, 0, 1, ..., α}, with the maximum digit magnitude α satisfying Equation (1) or (2). The cardinality of the digit set is 2α + 1; consequently any digit s_i can be binary represented by a k-tuple (x_{k−1}, ..., x_1, x_0), with x_l ∈ {0, 1} for l = 0, ..., k−1. For the particular case of the 2's complement codification of the SDs the dimension of the k-tuple can be computed as k = ⌈log(α + 1)⌉ + 1. For each s_i the values of the x_l are to be computed such that s_i = −2^{k−1} x_{k−1} + Σ_{l=0}^{k−2} 2^l x_l.
Assuming 2's complement representation of the SDs we will prove (in the following lemma) that any generalized symmetric SD function can be implemented by a depth-2 LTN with polynomially bounded size.
⁶ There are also other possibilities, but the 2's complement notation seems to be the natural choice. Later on we will suggest that in some particular cases other codification schemes are more convenient, as they lead to a reduction of the network depth.
⁷ The symmetry of the digit set is not a restriction. We make this assumption for simplicity of notation. Digit sets which are not symmetric can also be considered without changing the results we report in the next sections.
Lemma 2: Let F(s_{n−1}, ..., s_1, s_0) be an arbitrary generalized symmetric function of n SDs, with each s_i in the digit set {−α, ..., α} and α satisfying Equation (1) or (2) for a fixed radix r. F can be implemented by a depth-2 LTN with the cost in the order of O(n).
Proof: Given that F is generalized symmetric it can be expressed as in Equation (6), where the w_i are arbitrary integer constant weights. Under 2's complement representation of the SDs s_i, Equation (6) is equivalent to Equation (7), in which every digit s_i is replaced by −2^{k−1} x^i_{k−1} + Σ_{l=0}^{k−2} 2^l x^i_l.
As a consequence of Equation (7), F is expressed as a generalized Boolean symmetric function of n(1 + ⌈log(α + 1)⌉) Boolean variables; then it can be computed with the scheme in Lemma 1. The size of the LTN implementing F depends on the number of intervals on the definition domain. Given that in our case the maximum absolute value any digit can assume is α, the argument of F, as described in Equation (7), can in the worst case scenario take any value inside the definition domain [−λ_max, λ_max]. Consequently the maximum number of intervals is upper bounded by a quantity linear in n. Because we assumed that the weights w_i and the radix r are arbitrary integer constants, the LTN cost is in the order of O(n). Obviously the weight and fan-in values are in the order of O(n).
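The 2's complement codification of signed digits used above is easy to prototype. The following sketch (mine; the width formula k = ⌈log(α+1)⌉ + 1 is the one given in the text) encodes a digit s ∈ {−α, ..., α} into its k-tuple of bits and decodes it back.

```python
from math import ceil, log2

def sd_width(alpha):
    """Bits needed for a 2's complement code of a digit in [-alpha, alpha]."""
    return ceil(log2(alpha + 1)) + 1

def sd_encode(s, alpha):
    k = sd_width(alpha)
    return [(s >> l) & 1 for l in range(k)]       # (x_0, ..., x_{k-1})

def sd_decode(x):
    k = len(x)
    return sum(x[l] << l for l in range(k - 1)) - (x[k - 1] << (k - 1))

alpha = 3
for s in range(-alpha, alpha + 1):
    assert sd_decode(sd_encode(s, alpha)) == s
print(sd_width(1), sd_width(3))   # -> 2 3  (e.g. {-1, 0, 1} needs two bits)
```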
TABLE I. Totally Parallel Addition at Digit Position i
III. Signed Digit Addition
In this section we present addition schemes using a "totally parallel" [1] addition approach. We use a fixed radix of 2 and the corresponding digit set {−1, 0, 1}. We consider two n-SD integers X = (x_{n−1}, ..., x_0) and Y = (y_{n−1}, ..., y_0) and propose two schemes to compute the sum Z = X + Y, represented as (z_n, z_{n−1}, ..., z_0).
Traditionally, in the context of Boolean logic, the 2-to-1 addition of radix-2 SD represented operands has been achieved with two-step approaches [2], [27], [3]: first, an intermediate carry c_i and an intermediate sum s_i satisfying the equation x_i + y_i = 2c_i + s_i are computed for each digit position i; second, the sum digit z_i is computed as z_i = s_i + c_{i−1}.
In our approach we will use the "totally parallel" addition described in Table I [3]. We also assume that any digit x in the set {−1, 0, 1} is represented in the 2's complement notation by two bits, as shown in Table II. Note that in this codification the bit combination that would encode −2 is not allowed and cannot appear during the computations.

TABLE II. Digit Codification for x ∈ {−1, 0, 1}

It can be observed in Table I that the digits in position i−1 contribute to the computation of s_i and c_i only by their sign. Therefore, what we have to compute in order to implement the scheme presented in the table are the functions s_i and c_i. These two functions, as directly implied by the table, are not symmetric in their input variables. They can be made symmetric by computing the weighted sum of the inputs λ_s stated by Equation (8), such that Equations (9, 10), with properly determined weights w_i and w_{i−1}, hold true for all the possible input combinations.
We compute the weights w_i and w_{i−1} by taking into consideration the specific structure of the functions s_i and c_i. The choice for w_i is straightforward. Given that the digits in position i−1 contribute only through their sign bits, the minimum value of w_{i−1} should be equal⁸ to 3. Consequently the weighted sum λ_s in Equation (8) can be computed accordingly, and the description of the symmetric functions computing s_i and c_i is given in Table III.
From the table we derive the interval description (similar to the description of Equation (4)) for the required Boolean functions.
⁸ It has to be greater than the maximum value that can be assumed by the terms weighted by w_i, which in this case is 2.
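For readers who want to see the digit-level behaviour that the networks of this section implement, here is a plain (non-threshold) Python sketch of a two-step totally parallel radix-2 SD addition. The selection rules below are the classical redundant binary rules found in the literature (in the spirit of [2], [3]); the exact entries of Table I may differ, so treat the rule table as an assumption of this sketch rather than a transcription of the paper's table.

```python
from itertools import product

def sd2_add(x, y):
    """Two-step totally parallel addition of radix-2 SD numbers.
    x, y: lists of digits in {-1, 0, 1}, index 0 = least significant."""
    n = len(x)
    c = [0] * (n + 1)       # c[i+1] is the intermediate carry out of position i
    s = [0] * n             # intermediate sums
    for i in range(n):
        t = x[i] + y[i]
        prev_nonneg = (i == 0) or (x[i - 1] >= 0 and y[i - 1] >= 0)
        if t == 2:      c[i + 1], s[i] = 1, 0
        elif t == -2:   c[i + 1], s[i] = -1, 0
        elif t == 1:    c[i + 1], s[i] = (1, -1) if prev_nonneg else (0, 1)
        elif t == -1:   c[i + 1], s[i] = (0, -1) if prev_nonneg else (-1, 1)
        else:           c[i + 1], s[i] = 0, 0
    # second step: z_i = s_i + c_i never leaves {-1, 0, 1}
    return [s[i] + c[i] for i in range(n)] + [c[n]]

def value(d):
    return sum(di * 2 ** i for i, di in enumerate(d))

for x in product((-1, 0, 1), repeat=4):      # exhaustive check, 4-digit operands
    for y in product((-1, 0, 1), repeat=4):
        z = sd2_add(list(x), list(y))
        assert all(d in (-1, 0, 1) for d in z) and value(z) == value(x) + value(y)
```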
TABLE III. c_i and s_i as Symmetric Functions of λ_s
Assume that the threshold signals [α]^+ = sgn{λ_s − α} and [α]^- = sgn{α − λ_s} are computed as in Equations (15, 16).
We introduce next an implicit depth-1 implementation technique based on the fact that any symmetric Boolean function F_s defined as in Equation (4) can be expressed as:
F_s = q_1^+ Q_1^- + q_2^+ Q_2^- + ··· + q_r^+ Q_r^-,   (17)
where q_j^+ = sgn{λ − q_j}, Q_j^- = sgn{Q_j − λ}, and "+" and concatenation represent logical OR and AND, respectively.
Lemma 3: Any Boolean symmetric function F_s, described as in Equation (17), can be implemented by an implicit depth-1 feed-forward LTN with the size in the order of O(n), as follows:
F_s = sgn{ Σ_{j=1}^{r} (q_j^+ + Q_j^-) − (r + 1) },   (18)
where each of the 2r signals q_j^+ and Q_j^- is produced by one LTG and the outer comparison is left implicit.
Proof: To verify Equation (18) it will be shown that F_s is indeed 1 when the sum λ lies inside an interval [q_j, Q_j] for a specific j, and that F_s is 0 when there is no j such that λ ∈ [q_j, Q_j].
• Case 1: There is a j such that λ ∈ [q_j, Q_j]. In this case q_j^+ = Q_j^- = 1 for that particular j, while every other interval contributes exactly one active signal; hence the sum in Equation (18) equals r + 1 and F_s is 1, as needed.
• Case 2: There is no j such that λ ∈ [q_j, Q_j]. In this case there are three possibilities: λ < q_1, Q_j < λ < q_{j+1} for some j, or λ > Q_r. We will prove that in all of them F_s is 0, as needed. In the first sub-case all the q_j^+ are 0 and all the Q_j^- are 1; consequently F_s = sgn{r − (r + 1)}, i.e., is 0. In the second sub-case the intervals up to j contribute only their q^+ signals and the intervals after j contribute only their Q^- signals; consequently the sum is again r and F_s, i.e., sgn{r − (r + 1)}, is 0. In the last sub-case all the q_j^+ are 1 and all the Q_j^- are 0; consequently F_s, i.e., sgn{r − (r + 1)}, is 0.
Given that any q_j^+ can be obtained with a LTG computing sgn{λ − q_j} and any Q_j^- with a LTG computing sgn{Q_j − λ}, the entire network is built with 2r LTGs, i.e., the implementation cost is in the order of O(n). All the input weights are 1 and the fan-in for all the gates is n.
The method presented in Lemma 3 can also be applied for the implementation of generalized symmetric functions. Given that in this case the number of intervals is still linearly bounded, the implementation cost, upper bounded by twice that number of LTGs, is still in the order of O(n).
Remark 1: The scheme in Lemma 3 can be changed into an explicit one by connecting all the outputs of the gates computing q_j^+ and Q_j^- to a gate with the threshold value of r + 1. The output of this extra gate will provide explicitly the value of F_s after the delay of 2 TGs.
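The construction of Lemma 3 and Remark 1, as reconstructed above, can be prototyped directly. In the sketch below (mine) the first level produces the 2r threshold signals q_j^+ = sgn{λ − q_j} and Q_j^- = sgn{Q_j − λ}, and the optional output gate of Remark 1 compares their sum against the threshold r + 1; 4-bit parity is used as an arbitrary test function.

```python
def sgn(v):
    return 1 if v >= 0 else 0

def first_level(lam, intervals):
    """Implicit form: the 2r signals q_j^+ = sgn{lam - q_j} and
    Q_j^- = sgn{Q_j - lam}, one LTG each."""
    return [g for q, Q in intervals for g in (sgn(lam - q), sgn(Q - lam))]

def explicit_output(signals, r):
    """Remark 1: one extra gate with threshold r + 1 makes F_s explicit."""
    return sgn(sum(signals) - (r + 1))

ivals = [(1, 1), (3, 3)]          # 4-bit parity is 1 exactly on these intervals
for lam in range(5):              # lam = number of input bits equal to 1
    f = explicit_output(first_level(lam, ivals), len(ivals))
    assert f == lam % 2
print("depth-2 threshold realization of 4-bit parity verified")
```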
Remark 2: If q_1 = 0, the signal q_1^+ is always 1 and Equation (18) simplifies accordingly; if Q_r = n, the signal Q_r^- is always 1 and Equation (18) simplifies accordingly; if both conditions hold, both signals are always 1 and the corresponding gates can be removed, their constant contribution being folded into the threshold value.
It should be noted that if used in cascaded computation the method described in Lemma 3 increases the fan-in of the next stage, because the value of the function F_s is carried by 2r signals.
From Table III and using Equation (17), the four Boolean symmetric functions describing the computations of the intermediate sum s_i and carry c_i can be expressed as in Equations (22)-(25). By applying Lemma 3 we derive from Equations (22)-(25) an implicit depth-1 implementation of the first step of the "totally parallel" addition scheme; because the threshold signals at the boundaries of the definition domain (such as [−6]) are always 1, Remark 2 also applies, and we obtain Equations (26)-(29).
In order to make the way this implicit scheme works more intuitive, we depict in Figure 4 the regions in which the threshold signals are active for each of the four signals encoding s_i and c_i.
The second step of the "totally parallel" addition is the computation of z
Following the reasoning used for the computation of s
Fig. 4. Description of Threshold Signals for s
where:
d
c
Theorem 1: Assuming radix-2 SD operand representation and the SD codification in Table II, the addition of two n-SD numbers can be computed by an implicit depth-2 LTN with a maximum weight value of 6 and a maximum fan-in of 12.
Proof: The quantities d_i and c_i in Equations (33, 34) can be computed, by doing the proper substitutions, using Equations (26)-(29), as stated by Equations (35, 36).
Consequently, Equations (30, 31) provide an implicit depth-2 implementation scheme for the computation of the sum digit z_i. On the first level of the network we compute, for each digit position i, the required threshold signals and use 9 TGs per digit. On the second level we need 2 TGs for each digit position i, in order to compute d_i and c_i as stated by Equations (35, 36). Therefore the network producing all the sum digits can be constructed with 11n TGs. For the digit position n−1 we also have to produce the carry-out. This can be explicitly generated in depth-2 at the expense of two TGs computing Equations (37, 38). Therefore the cost of the entire addition network is 11n + 2 TGs, i.e., of O(n) complexity. Obviously the weight values and fan-in values do not depend on n. The maximum fan-in is 12 and the maximum weight value is 6, i.e., having O(1) complexity.
Note that for this scheme the value of z_i^+ is carried by two signals and one threshold value, while z_i^- is actually explicitly computed in depth-2. If used in cascaded computation this method will increase the fan-in of the next stage by 1 and will contribute 1 to the threshold value of some of the gates in the next stage.
If we compare the scheme introduced in Theorem 1 with the depth-2 scheme presented
in [28] which has a network size of 25n + 5, a maximum fan-in of 26, and a maximum
weight value of 123, one can observe that we achieved a substantial reduction in network
size, weight, and fan-in values for the same network depth. However the new depth-2
scheme is implicit and this fact increases the fan-in of the stage requiring as inputs the
digits z i . In the remainder of this section we show that it is possible to explicitly compute
the sum while maintaining the network depth and complexity.
The method described by Equations (30, 31) is implicit because of the way we compute the final sum bit z_i^+. All the other signals, z_i^- among them, are explicitly computed with two levels of TGs. Consequently, Equation (30) has to be modified so that this bit is also produced explicitly by a threshold gate, inducing fundamental changes to Equations (31, 37, 38).
To this end we assume that, in order to represent a SD x in the set {−1, 0, 1}, we use the codification described in Table IV instead of the 2's complement codification in Table II.
TABLE IV. New Digit Codification for x ∈ {−1, 0, 1}
TABLE V. c_i and s_i as Functions of λ_s
Note that with this new codification there is again one bit combination that is not allowed and cannot appear during the computations.
Under this assumption the quantity λ_s can be expressed as in Equation (39), and it can take values in the definition interval [−12, 8].
Thus the first step of the "totally parallel" addition scheme is described in Table V.
From the Table it can be deduced that the Boolean symmetric functions describing the
June 23, 1999
computations of the intermediate sum s i and carry c i are as follows:
As is was proved in the Lemma 3 from these equations we can derive an implicit depth-1
implementation of the first step of the "totally parallel" addition scheme. Because [\Gamma12]
are always 1 the results of Remark 2 can also be included in the derivation. Thus,
The second step of the "totally parallel" addition is the computation of z
In this case - z
and the second step can be described by
the
Table
VI. Following the same reasoning applied previously for the computation of
this step can be implemented by:
Theorem 2: Assuming radix-2 SD operand representation and the SD codification in Table IV, the addition of two n-SD numbers can be computed by an explicit depth-2 LTN with a maximum weight value of 10 and a maximum fan-in of 14.
Proof: By proper substitutions, using Equations (44)-(47), Equations (48, 49) provide an explicit depth-2 implementation scheme of the addition, as stated by Equations (50, 51).

TABLE VI. z_i as Functions of λ_z

On the first level we compute, for each digit position i, the required threshold signals, using 10 TGs per digit. On the second level we need 2 TGs for each digit position i, in order to compute the quantities of Equations (50, 51). For the digit position n−1 we also have to produce the carry-out. This can also be explicitly generated in depth-2 at the expense of two TGs computing Equations (52, 53). Therefore the cost of the entire addition network is 12n + 2. The maximum fan-in is 14 and the maximum weight value is 10.
One can observe that all the quantities involved in Theorem 2 are of the same order of magnitude as in Theorem 1. Even though the scheme in Theorem 2 requires a slightly larger maximum fan-in (14 instead of 12) and weight values (10 instead of 6), it has the advantage of explicitly computing the sum digits after the delay of 2 TGs.
IV. Signed Digit Multi-Operand Addition and Multiplication
Threshold networks for multi-operand addition and multiplication of n-bit binary operands have been reported [14], [15], [26], [29]. Generally speaking, multi-operand addition and multiplication can be achieved in two steps, namely: first, reduce the multi-operand addition matrix (in multiplication such an addition is required for the reduction of the partial product matrix) into two rows; second, add the two rows to produce the final result. In addition to these two steps the multiplication also requires a third step, the production of the partial product matrix. In this section we investigate these processes. For such a scheme and non-redundant representations the following has been suggested:
• The reduction of the multi-operand addition matrix (or the reduction of the multiplication partial product matrix) into two rows can be achieved by depth-2 networks with the cost of the network, in terms of LTGs, in the order of O(n²) and a maximum fan-in in the order of O(n log n), see for example [15], [29].
• The entire multiplication can be implemented by a depth-4 network [14].
It was also suggested in [30], based on a result in [31], that multi-operand addition can be computed in depth 2 and multiplication in depth 3, but no explicit construction for the networks and no complexity bounds are provided. A constructive approach can be derived if the result in [32], suggesting that a single threshold gate with arbitrary weights can be simulated by an explicit polynomial-size depth-2 network, is used. Such a LOGSPACE-uniform construction, as stated in [32], produces a network with O(log^12 W(n)) wires and the weights of those wires in the order of O(log^8 W(n)), for a total size of O(n^20 log^20 n). The total size for such a construction was further reduced to O(n^12 log^12 n) in [33]. LOGSPACE-uniform constructions for depth-2 multi-operand addition and depth-3 multiplication have been suggested in [32], but the discussion about depth-2 multi-operand addition or depth-3 multiplication schemes is marginal and no complexity bounds are explicitly given. In an attempt to assess the complexity of such a scheme for multi-operand addition, which operates on an n²-input function instead of an n-input function, we can use the least expensive scheme in [32] and estimate that such a depth-2 multi-operand addition or depth-3 multiplication network may require a total size of O(n^24 log^24 n).
Fig. 5. Addition of 8 8-Bit Numbers: (a) Two Step Reduction; (b) One Step Reduction.
In this section we investigate the potential benefit that can be expected by using SD
represented operands in multiplication schemes. First we prove that multi-operand
addition can be achieved by a depth-2 network with O(n 3 ) size, O(n 3 ) weights and O(n 2 )
fan-in complexities. It must be noted that the proposed network performs an n operand to
one result reduction in depth-2 not an n operand to two reduction in depth-2 as previously
proposed schemes [15], [29] do. Subsequently we show that the multiplication (that is the
generation of the partial products and the matrix reduction into one row representing
the product) can be achieved with a depth-3 network with O(n 3 ) size, O(n 3 ) weights and
O(n 2 log n) fan-in complexities.
A. Depth-2 Multi-Operand Addition
It is well known that in order to perform n-bit multi-operand addition first the n rows (representing the n numbers) are reduced to two, then the two rows are added to produce the final result. This two-step process is depicted, for the particular case of 8 8-bit numbers, in Figure 5(a). As indicated in the introduction of the section, the first step of multi-operand addition using non-redundant digit representations requires a depth-2 network, and additional depth is required to perform the second step. In the following we will prove that, if we assume SD operands in an appropriate representation radix, the multi-operand addition of n n-SD numbers, and consequently the reduction of the partial product matrix of the multiplication operation, into one row can be achieved in one computation step, as in Figure 5(b), requiring a depth-2 network. This is achieved by determining a radix which allows an n-digit "totally parallel" addition. Avizienis investigated this issue in [1] but from the dual point of view, by assuming a given radix-r SD representation and determining the maximum number of digits that can be added in "totally parallel" mode within that radix-r SD representation. In our investigation the number of digits n is given and a minimum value for the radix r must be found to compute the n-SD addition in a "totally parallel" mode. We answer this question in the following lemma.
Lemma 4: The simultaneous addition of n SDs can be done in a "totally parallel" mode by assuming a representation radix greater than or equal to 2n − 1.
Proof: The simultaneous addition of n SDs can be done in a way similar to the addition of two digits. That is, in order to add the n digits x_i^1, x_i^2, ..., x_i^n in a "totally parallel" mode, we have first to produce an intermediate sum digit u_i and a transport digit t_i that satisfy Equation (54), and we also have to satisfy the constraint indicating that the subsequent addition in Equation (55), which gives the value of the sum digit z_i in position i, can be performed without generating a carry-out. That is:
x_i^1 + x_i^2 + ··· + x_i^n = r·t_i + u_i,   (54)
z_i = u_i + t_{i−1}.   (55)
We have to find the value of the radix r for which the computation in Equations (54, 55) can be achieved, and also the maximum absolute values that we can allow for the intermediate sum digit u_i and the transport digit t_i. In order to have consistency we have to assume that |x_i^j| ≤ |x|_max, |u_i| ≤ |u|_max, and |t_i| ≤ |t|_max for every position i. Therefore, if mapped onto absolute maximal values, Equations (54, 55) become:
n·|x|_max ≤ r·|t|_max + |u|_max,   (56)
|u|_max + |t|_max ≤ r − 1.   (57)
From Equations (56, 57) we can derive Equation (58). In order to obtain the greatest range for |t|_max we have to assume the maximum redundancy digit set, i.e., |x|_max = r − 1, and for the intermediate sum an absolute maximum value of |u|_max = ⌊r/2⌋. This, together with Equation (58), and depending on whether we assume an odd radix r_o or an even one r_e, leads to r_e ≥ 2n or r_o ≥ 2n − 1. Therefore, in order to perform the simultaneous addition of n SDs in a "totally parallel" mode, we have to use a representation radix greater than or equal to 2n − 1.
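The counting argument of the proof can be checked numerically. The sketch below (my own check; the maxima |x|_max = r − 1, |u|_max = ⌊r/2⌋ and |t|_max = r − 1 − |u|_max follow the reconstruction of the proof given above and are assumptions of this sketch) verifies by brute force that every possible column sum of n digits can be split as r·t_i + u_i without the second step overflowing, and that for odd radices r = 2n − 1 is the smallest value that works.

```python
def totally_parallel_ok(n, r):
    """Can n digits in [-(r-1), r-1] be added carry-free in two steps?"""
    x_max = r - 1
    u_max = r // 2                    # assumed bound on the intermediate sum
    t_max = (r - 1) - u_max           # so that |u| + |t| <= r - 1 in the 2nd step
    return all(any(abs(s - r * t) <= u_max for t in range(-t_max, t_max + 1))
               for s in range(-n * x_max, n * x_max + 1))

n = 5
assert totally_parallel_ok(n, 2 * n - 1) and not totally_parallel_ok(n, 2 * n - 3)
print("smallest odd radix for n =", n, "is", 2 * n - 1)
```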
Assuming a representation radix of 2n − 1 we introduce the depth-2 multi-operand addition scheme for n n-SD numbers.
Theorem 3: Assuming radix-(2n−1) representation, the multi-operand addition of n n-SD numbers (that is, the reduction via addition of an n-digit, n-row matrix to one row) can be computed by an explicit depth-2 LTN with the size of O(n³). The maximum weight value is in the order of O(n³) and the maximum fan-in value is in the order of O(n²).
Proof: Assume that the n SD numbers we have to add are x
and all the digits x j
can take value within the symmetric
digit set
Given that the radix-(2n \Gamma 1) allows for "totally parallel" addition of n SDs, we can
compute the sum of the n numbers as follows: for each position i produce an intermediate
sum digit u i and a transport digit t i that satisfy
the
sum digit z i in the position i is computed as z generating a carry-out.
If we assume that the greatest absolute values for the input digits, transport digits and
intermediate sum digits are jxj
the sum digit z i will depend only on the values of the digits in the columns i and of the
multi-operand addition matrix and can be computed with the two step approach. With
this scheme the network implementing the multi-operand addition contains one sub-circuit
performing this computation for each digit position i, Obviously the cost
of the entire network is n times the cost of the circuit performing the "totally parallel"
addition of n digits. The delay of the multi-operand addition, the maximum weight and
fan-in values are imposed by their similar values in the circuit performing the "totally
parallel" addition of n digits.
The direct implementation of this two steps computation procedure with the scheme in
the Lemma 1 is not convenient because it will lead to a depth-4 LTN. However, given that
any generalized symmetric Boolean function can be implemented with a depth-2 network
we can reduce the depth of the network to 2 if we are able to compute the value of z i with
a symmetric function of 2n input variables, i.e., all the digits in the columns i and
of the multi-operand addition matrix. This can be done by observing the direct link that
exists between the value of z i and the value assumed by the weighted sum - of all the
2n digits x 1
in the columns i and computed as in the
Equation (59).
This link exists as a consequence of the fact that under the maximum value assumptions
we made for the input digits, transport digits and intermediate product digits the radix-
representation of the sum - is the values of t i , z i and t
follow from Equations (54,55). The maximum absolute value that can be assumed by -
can be derived from the Equation (59) under the assumption that all the x j
digits
are 2. This will lead to j-j and to a variation domain for - equal to
Because the digits involved into the computation in the Equation (59) belong to the set D
we need [log bits for their 2's complement codification. Under this codification
each digit x j
i is represented by a
Each of this bits will take part into the computation of - with a weight that
correspond to its position inside the digit and following the 2's complement codification
convention. With this assumption the Equation (59) becomes:
iA
Assuming all of these the product digit z i can be expressed by a function F(-). Obviously,
because of the weighted manner we did the computation of the sum -, the function F is
symmetric in all of the input variables 9 and consequently it can be implemented using the
9 The number of input Boolean variables is given by the product of the number of digits involved into the
computation of z i and the number of bits we need in order to represent a digit in D, i.e.,
method described in the Lemma 1 with a depth-2 LTN.
Because z i can assume any digit value in the set D we need again
its codification. Therefore in order to compute F(-) we have to compute [log (2n \Gamma 1)] +1
symmetric Boolean functions F 1)]. For the implementation of
each symmetric Boolean function F i (-) we need r i LTGs in the first layer of the network,
being the number of intervals in the definition domain where F i assume the value of 1,
and 1 LTG in the second layer. Consequently the computation of the function F(-) can
be done
LTGs. The definition domain for F(-) is given by [\Gamma4n
it F(-) can change its value at most I = 2\Theta4n 2 (n\Gamma1)+1
times. As consequence for each
Boolean function F i (-) the number of intervals r i can not be greater than I. Given that
the changes of the values of F i (-) can appear only in certain fixed positions common for
all of them, we can use the gate sharing concept we introduced in [29]. In this way the
gates associated to the upper limit of the intervals can be shared between the networks implementing
the Boolean functions F i (-). This fact leads to an upper bound of
l 8n 2 (n\Gamma1)+1
for the maximum number of TGs in the first level of the network. The second level of the
network has to contain one gate for each F i (-), i.e., bit position in the 2's complement
representation of z i , then it can be build with gates.
Therefore the network computing the sum digit z i as F(-) can be built with at most
l 8n 2 (n\Gamma1)+1
LTGs. Because we need one such network for each digit
position i and the multi-operand addition matrix has n columns 10 the cost of the entire
multi-operand addition is upper bounded by n
Asymptotically speaking this leads to an implementation of the multi-operand addition
of n n-SD numbers with a depth-2 network having the number of LTGs in the order of
O(n 3 ).
The maximum weight value is upper bounded by the dimension of the definition domain,
consequently it is in the order of O(n 3 ).
If the multi-operand addition matrix is the partial product matrix corresponding to the multiplication of two
n-SD numbers the number of columns is 2n and the cost change as consequence. However this do not change the
asymptotic cost.
The maximum fan-in value is imposed by the gates in the second level of the network
which take as inputs all the bits participating into the computation, i.e., 2n([log(2n \Gamma
some outputs of the gates on the first level. The total number of gates in the
first level of the network is upper bounded by
l 8n 2 (n\Gamma1)+1
and consequently the maximum
fan-in value is in the order of O(n 2 ).
We conclude our investigation on networks for the multiplication of SD operands
by introducing a depth-3 LTN for multiplication which uses the multi-operand addition
scheme we presented in Theorem 3.
B. Depth-3 Multiplication
Multiplication is achieved with the generation and reduction of a partial product ma-
trix. In the previous subsection we have shown that the multi-operand addition (and by
extension the reduction of the multiplication partial product matrix) can be performed in
depth-2 using threshold networks and SD representations. In this section we investigate
the entire multiplication operation including the generation of the partial product matrix.
In the case of non-redundant operand representation the generation of the partial product matrix can be performed at the expense of n² TGs in depth-1, because we need one AND gate to produce each partial product z_{i,j} = x_i · y_j. This may not be true for signed-digit operands, where each partial product z_{i,j} is a SD which has to be computed as the product of two SDs x_i and y_j. In essence, even though using radix-(2n−1) representation the partial product reduction can be achieved by a depth-2 network, it does not immediately follow that multiplication can be achieved by a depth-3 network.
To achieve a depth-3 multiplication we use Theorem 3 for the reduction of the partial product matrix and use implicit computations in the network connecting the partial product production and the first stage of partial product reduction. Given that, in order to use the scheme in Theorem 3, all the partial products z_{i,j} have to assume values inside the digit set of the radix-(2n−1) representation, we have to restrict the maximum absolute values for the SDs x_i and y_j accordingly.
In the following lemma we assume that the operand digits are represented with the 2's complement codification discussed in Section II and prove that the entire partial product matrix can be produced by a depth-2 LTN with polynomially bounded size, weight and fan-in values.
Lemma 5: Assuming two n-SD operands X and Y whose digits x_i and y_j are restricted as described above, the partial product matrix can be produced by a depth-2 LTN with the size, measured in terms of LTGs, in the order of O(n³). The maximum weight value is in the order of O(n) and the maximum fan-in value is in the order of O(n).
Proof: We assume that all the SDs are represented in the 2's complement notation
by x
). The value of d is imposed
by the maximum absolute value of
we have assumed for the operand digits
and is equal with
log
ii
1. With these assumptions the partial product z i;j
can be expressed as in the following equation:
z
\Gamma2
!/
\Gamma2
On the other hand z i;j is a SD in the set
and can be represented by the ([log(2n\Gamma1)]+1)-tuple (z [log(2n\Gamma1)]
Consequently each bit z r
can be expressed by a symmetric
Boolean function F r (- m ) with the weighted sum -m computed as in Equation (63).
This function can be implemented with a depth-2 network as shown in Lemma 1. By its
construction -m can assume values in the definition domain Consequently
the definition domain for all the F r (- m ) describing the partial product z i;j is given
by definition domain any F r (- m ) can change its value
at most 4(n\Gamma1)+1times. Using the same way of reasoning as in the Theorem 3 an upper
bound of
l 4(n\Gamma1)+1m
can be obtained for the maximum number of TGs in the first level of
the network. The second level of the network has to contain one gate for each F r (- m ),
i.e., bit position in the 2's complement representation of the partial product z i;j , then it
can be build with gates.
Therefore the network computing the partial product z i;j can be built with at most
l 4(n\Gamma1)+1m
Because one such network for each digit pair (i; j),
required the cost of the network producing the entire partial
product matrix is upper bounded by n 2
. This leads to
an implementation cost of the depth-2 network producing the partial product matrix in
the order of O(n 3 ). The maximum weight value is upper bounded by the dimension of
the definition domain for the F r (- m ) functions, i.e., consequently it is
in the order of O(n). The maximum fan-in value is imposed by the gates in the second
level of the network which take as inputs all the bits participating into the computation,
log
ii
2, and some outputs of the gates on the first level. Because we
proved that the total number of gates in the first level of the network is upper bounded
by
l 4(n\Gamma1)+1m
the maximum fan-in value is also in the order of O(n).
By connecting the results for the multi-operand addition and the generation of the partial product matrix for SD operands we obtain a depth-4 scheme for the multiplication of SD numbers, as stated in the following corollary.
Corollary 1: Assuming radix-(2n−1) representation, the multiplication of two n-SD numbers can be computed by an explicit depth-4 LTN with the size, measured in terms of LTGs, in the order of O(n³). The maximum weight value is in the order of O(n³) and the maximum fan-in value is in the order of O(n²).
Proof: Trivial from Lemma 5 and Theorem 3.
The delay of the multiplication network can be further reduced by producing the partial product matrix using the implicit computation scheme presented in Lemma 3.
Theorem 4: Assuming radix-(2n−1) representation, the multiplication of two n-SD numbers can be computed by an explicit depth-3 LTN with the size in the order of O(n³). The maximum weight value is in the order of O(n³) and the maximum fan-in value is in the order of O(n² log n).
Proof: Trivial. First use the implicit implementation (Lemma 3) in order to produce the partial products z_{i,j} with the delay of one TG. This derivation will not change the asymptotic costs we derived in Lemma 5. Second, use the depth-2 multi-operand addition in Theorem 3 to produce the product. The implicit computation of the partial products will only increase the fan-in of the gates in the first level of the network performing the multi-operand addition from 2n(⌈log(2n−1)⌉ + 1) to at most 2n(4n − 3)(⌈log(2n−1)⌉ + 1). This will change the asymptotic bound for the fan-in from O(n²) to O(n² log n). The asymptotic size of the network and the maximum weight value will remain unchanged. Consequently this depth-3 scheme has a network size in the order of O(n³) and a maximum weight value in the order of O(n³).
V. Conclusions
We investigated LTNs for symmetric Boolean functions, two-operand addition, multi-operand addition, and multiplication. We assumed SD number representation and we were mainly concerned with establishing the limits of circuit designs using threshold based networks. We have shown that, assuming radix-2 representation, the addition of two n-SD numbers can be computed by an explicit depth-2 LTN with O(n) size and O(1) weight and fan-in values. If a higher radix of (2n − 1) is assumed, we proved that the multi-operand addition of n n-SD numbers can be computed by an explicit depth-2 LTN with the size in the order of O(n³), with the maximum weight value in the order of O(n³), and with the maximum fan-in value in the order of O(n²). Finally, we have shown that the multiplication of two n-SD numbers can be computed by an explicit depth-3 LTN with the size in the order of O(n³). The maximum weight value is in the order of O(n³) and the maximum fan-in value is in the order of O(n² log n).
--R
"Signed-Digit Number Representations for Fast Parallel Arithmetic,"
"Logic Design of a Redundant Binary Adder,"
"High-Speed VLSI Multiplication Algorithm with a Redundant Binary Addition Tree,"
"Fast Radix-2 Division with Quotient-Digit Prediction,"
"Simple Radix-4 Division with Operands Scaling,"
"High Radix Square Rooting,"
"A Functional MOS Transistor Featuring Gate-Level Weighted Sum and Threshold Operations,"
"Neuron MOS Binary-Logic Integrated Circuits- Part II: Simplifying Techniques of Circuit Configuration and their Practical Applications,"
"A Capacitive Threshold-Logic Gate,"
"A Logical Calculus of the Ideas Immanent in Nervous Activity,"
"How we Know Universals: The Perception of Auditory and Visual Forms,"
"Neural Computation of Arithmetic Functions,"
"Some Notes on Threshold Circuits and Multiplication in Depth 4,"
"Efficient Implementation of a Neural Multiplier,"
"Depth-Size Tradeoffs for Neural Computation,"
Addition and Related Arithmetic Operations with Threshold Logic,"
"ffi-bit Serial Addition with Linear Threshold Gates,"
"A Compact High-Speed (31; 5) Parallel Counter Circuit Based on Capacitive Threshold-Logic Gates,"
"On the Application of the Neuron MOS Transistor Principle for Modern VLSI Design,"
"Periodic Symmetric Functions with Feed-Forward Neural Networks,"
"The Principle of Majority Decision Elements and the Complexity of their Circuits,"
"Linear Input Logic,"
"The Realization of Symmetric Switching Functions with Linear-Input Logical Elements,"
"On Threshold Circuits for Parity,"
"Block Save Addition with Telescopic Sums,"
Computer Arithmetic: Principles
"2j1 Redundant Binary Addition with Threshold Logic,"
"Block Save Addition with Threshold Logic,"
"On optimal depth threshold circuits for multiplication and related problems,"
"Majority gates vs. general weighted threshold gates,"
"Simulating threshold circuits by majority circuits,"
"A note on the simulation of exponential threshold weights,"
--TR | redundant adders;carry-free addition;computer arithmetic;redundant multipliers;neural networks;threshold logic;signed-digit arithmetic;signed-digit number representation |
334346 | Algebraic Foundations and Broadcasting Algorithms for Wormhole-Routed All-Port Tori. | AbstractThe one-to-all broadcast is the most primary collective communication pattern in a multicomputer network. We consider this problem in a wormhole-routed torus which uses the all-port and dimension-ordered routing model. We derive our routing algorithms based on the concept of span of vector spaces in linear algebra. For instance, in a 3D torus, the nodes receiving the broadcast message will be spanned from the source node to a line of nodes, to a plane of nodes, and then to a cube of nodes. Our results require at most $2(k-1)$ steps more than the optimal number of steps for any square $k$-D torus. Existing results, as compared to ours, can only be applied to tori of very restricted dimensions or sizes and either rely on an undesirable non-dimension-ordered routing or require more numbers of steps. | Introduction
One-to-all broadcast is an essential communication
operator in multicomputer networks, which has many
applications, such as algebraic problems, barrier syn-
chronization, parallel graph and matrix algorithms,
cache coherence in distributed-share-memory systems,
and data re-distribution in HPF. Wormhole routing
[1, 4] is characterized with low communication latency
due to its pipelined nature and is quite insensitive
to routing distance in the absence of link contention.
Machines adopting such technology include the Intel
Touchstone DELTA, Intel Paragon, MIT J-machine,
Caltech MOSAIC, nCUBE 3, and Cray T3D and T3E.
In this paper, we study the scheduling of message
distribution for one-to-all broadcast in a wormhole-
This work is supported by the National Science Council of
the Republic of China under Grant # NSC87-2213-E-008-012
and #NSC87-2213-E-008-016.
routed torus, which type of architecture has been
adopted by parallel machines such as Cray T3D and
T3E (3-D tori). The network is assumed to use the
all-port model 1 and the popular dimension-ordered
routing. Following the formulation in many works
[2, 6, 8, 10, 12, 13], this is achieved by constructing
a sequence of steps, where a step consists of a set of
communication paths each indicating a
message delivery. The goal is to minimize the total
number of steps used.
The same problem has been studied in several works.
In [9, 10], schedules following the dimension-ordered
routing are proposed for 2-D 2 n \Theta 2 n and 3-D 2 n \Theta 2 n \Theta z
tori using n and respectively, where
. The numbers of steps used
are at least log 4 5 and log 6 7 times, i.e., about 16% and
8.6% more than, the optimal numbers of steps, respectively
(refer to the lower bounds in Lemma 1). A schedule
using the optimal number of steps is proposed in [8]
for any 2-D torus of size 5 p \Theta 5 p or (2 \Theta 5 p ) \Theta (2 \Theta 5 p ),
where p is any integer. The works in [6, 7] remain op-
timal, but can be applied to any square k-D torus with
nodes on each side. However, the disadvantages
of [6, 7, 8] include : (i) the tori must be square,
(ii) very few network sizes are solvable (e.g., for 2-D
tori, the possible sizes are 5 \Theta 5, 10 \Theta 10, 25 \Theta 25,
50 \Theta 50, 125 \Theta 125, 250 \Theta 250, etc.), and (iii) the routing
is not dimension-ordered (we comment that unfortunately
most current torus machines use dimension-
ordered routing). These drawbacks would greatly limit
the applicability of [6, 7, 8]. Generalization to tori supporting
multi-port capability is shown in [3]; however,
the routing is still non-dimension-ordered. Recently,
these deficiencies were eliminated by [11], which can
1 The reverse of this is the one-port model, in which case the
broadcast problem can be trivially solved by a recursive-doubling
technique.
be applied to 2-D tori of any size using dimension-
ordered routing; at most 2 (resp., 5) communication
steps more than the optimum are required when the
torus is square (resp., non-square). However, it still remains
as an open problem how to extend this scheme
to higher-dimensional tori.
One interesting technique used in [11] is the concept
of diagonal in a 2-D torus. In this paper, we extend
the work of [11] and show how to perform one-to-all
broadcast in a torus of any dimension. For practical
consideration, the routing should still follow the
dimension-ordered restriction. The extension turns out
to need some mathematical foundations when higher-dimensional
tori are considered. Our schemes are based
on the concept of "span of vector spaces" in linear al-
gebra. For instance, in a 3-D torus, the nodes receiving
the broadcast message will be "spanned" from the
source node to a line of nodes, to a plane of nodes,
and then to a cube of nodes. We develop the algebraic
foundations to solve this problem for any k-D torus of
size n on each dimension. Our results require at most 2(k − 1) steps more than any optimal scheduling. Existing
results, as compared to ours, can only be applied
to tori of very restricted dimensions or sizes, and either
rely on an undesirable non-dimension-ordered routing
or require more numbers of steps.
2. Preliminaries
A k-D torus of size n is an undirected graph denoted
as T_{n×···×n}. Each node is denoted as p_{x1,x2,...,xk}, with each x_i ∈ {0, 1, ..., n−1}. Each node is of degree 2k. Node p_{x1,x2,...,xk} has an edge connecting to p_{(x1±1) mod n, x2, ..., xk} along dimension one, an edge to p_{x1, (x2±1) mod n, ..., xk} along dimension two, and so on. (Hereafter, we will omit saying "mod n" whenever the context is clear.)
We will map the torus into an Euclidean integer space Z^k. Note that instead of being the normal integer set, Z is here restricted to the domain {0, 1, ..., n−1}. A node p_{x1,...,xk} in the torus can be regarded as a point (x1, ..., xk) of Z^k. A vector in Z^k is a k-tuple (v1, v2, ..., vk). As a convention, the i-th positive (resp., negative) elementary vector ~e_i (resp., ~e_{−i}) of Z^k, i = 1..k, is the one with all entries being 0, except the i-th entry being 1 (resp., −1). We may write ~e_{i1,i2,...} as ~e_{i1} + ~e_{i2} + ···, and similarly for negative indices. For instance, ~e_{1,2} = ~e_1 + ~e_2 and ~e_{1,−3} = ~e_1 + ~e_{−3}. The linear combination of vectors a_1~b_1 + a_2~b_2 + ··· + a_m~b_m (where the a_i are integers) follows directly from the typical definition of vector addition, except that a "mod n" operation is implicitly applied.
In Z^k, given a node x, an m-tuple of vectors B = (~b_1, ~b_2, ..., ~b_m), and an m-tuple of integers N = (n_1, n_2, ..., n_m), we define the span of x by vectors B and distances N as
SPAN(x, B, N) = { x + Σ_{i=1}^{m} a_i ~b_i : 0 ≤ a_i ≤ n_i − 1 }.   (1)
Note that the above definition is different from the typical definition of SPAN in linear algebra [5]. The main purpose here is to identify a portion of the torus. For instance, the main diagonals of T_{n×n} and T_{n×n×n} can be written as SPAN(p_{0,0}, (~e_{1,2}), (n)) and SPAN(p_{0,0,0}, (~e_{1,2,3}), (n)), respectively; an XY-plane passing node p_{0,0,i} in T_{n×n×n} can be written as SPAN(p_{0,0,i}, (~e_1, ~e_2), (n, n)).
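A direct transcription of the SPAN operator into code may help; the sketch below (mine) enumerates SPAN(x, B, N) in Z^k with all coordinates taken mod n and reproduces the main-diagonal example.

```python
from itertools import product

def e(k, *dims):
    """Sum of elementary vectors, e.g. e(3, 1, 2) = ~e_{1,2};
    a negative index contributes -1 in that dimension."""
    v = [0] * k
    for d in dims:
        v[abs(d) - 1] += 1 if d > 0 else -1
    return tuple(v)

def span(x, B, N, n):
    """SPAN(x, B, N) with every coordinate taken mod n."""
    nodes = set()
    for coeffs in product(*(range(Ni) for Ni in N)):
        node = tuple((xi + sum(a * b[d] for a, b in zip(coeffs, B))) % n
                     for d, xi in enumerate(x))
        nodes.add(node)
    return nodes

n = 4
print(sorted(span((0, 0), [e(2, 1, 2)], [n], n)))
# -> [(0, 0), (1, 1), (2, 2), (3, 3)], the main diagonal of a 4 x 4 torus
```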
In the one-to-all broadcast problem, a source node
needs to send a message to the rest of the network.
The all-port model will be assumed, in which a node
can simultaneously send and receive messages along all
outgoing and incoming channels. Since the node degree
is 2k, in the best case one may multiply the number of
nodes owning the broadcast message by 2k + 1 after each step. This leads to the following lemma.
Lemma 1: In a k-D all-port torus T_{n×···×n}, a lower bound on the number of steps to achieve one-to-all broadcast is ⌈log_{2k+1} n^k⌉.
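The bound of Lemma 1 is straightforward to compute; the short sketch below (mine) evaluates ⌈log_{2k+1} n^k⌉ with integer arithmetic.

```python
def ceil_log(base, m):
    """Smallest s with base**s >= m (exact integer arithmetic)."""
    s, p = 0, 1
    while p < m:
        p *= base
        s += 1
    return s

def broadcast_lower_bound(k, n):
    """Lemma 1: ceil(log_{2k+1} n^k) steps are needed, since each step can
    multiply the number of informed nodes by at most 2k + 1."""
    return ceil_log(2 * k + 1, n ** k)

print(broadcast_lower_bound(2, 16), broadcast_lower_bound(3, 16))
```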
3. Broadcasting in 2-D Tori
In this section, we review the broadcasting scheme
for 2-D tori in [11]. This will be helpful to understand
our algorithms for higher-dimensional tori later. Consider
any 2-D torus Tn\Thetan . As the network is symmetric,
we let, without loss of generality, the source node be
We denote by M the message to be broadcast.
The scheme is derived in two stages.
Stage 1: In this stage, message M will be sent to
the main diagonal, SPAN(p 0;0 ,(~e 1;2 ); (n)). This is
achieved by two parts.
Stage 1.1 (Distribution): In this part, M will be
recursively distributed to one of the nodes in each row.
For ease of presentation, let n = 5t. First, we regard p_{0,0} as the center of the torus and horizontally and evenly slice the torus into five strips. In one step, we let p_{0,0} send four copies of M to four nodes, among them p_{0,−2t} and p_{−t,−t}, one in each of the other strips. These five nodes (the four receivers plus p_{0,0} itself) are located at the center rows of the five strips, and the routing is clearly congestion-free, as illustrated in Fig. 1(a). Then, by regarding these five nodes as the source nodes of the five strips, we can recursively send M to more rows.
Fig. 1(b) illustrates the routing in the second step. This
is repeated until the height of each strip reduces to
one or zero. At the end, along each row in the torus,
Figure 1. Stage 1.1 of broadcast in a 2-D torus: (a) the first step, and (b) the second step.
Figure 2. Stage 2 of broadcast in a 2-D torus: (a) the first step, and (b) the second step. Note that for clarity only typical communication paths are shown.
exactly one node will have received M . Overall, this
stage takes ⌈log_5 n⌉ steps to complete.
Stage 1.2 (Alignment): For each node p_{i,j} holding message M, p_{i,j} sends M along the first dimension to p_{j,j}; after this step, all nodes on the main diagonal will own M. This requires only one step.
Stage 2: In this stage, the torus is viewed as n diagonals L_i = SPAN(p_{0,i}, (~e_{1,2}), (n)), i = 0..n−1. We evenly partition the torus into 5 strips S_i, i = −2..2, each containing t diagonals. This is illustrated in Fig. 2(a).
In the first step, M will be sent to 4 other diagonals L_{−2t}, L_{−t}, L_t, and L_{2t}. This can be done by having each node p_{i,i} in L_0 send M to nodes p_{i−2t,i}, p_{i,i+t}, and two more nodes on the remaining two diagonals. The communication, as illustrated in Fig. 2(a), is clearly congestion-free.
In the second step, we can regard the diagonal L_{it} as the source of strip S_i, i = −2..2, and recursively perform the above diagonal-to-diagonal message distribution in S_i (refer to Fig. 2(b)). The recursion terminates when each strip contains one or zero diagonals. This stage takes ⌈log_5 n⌉ steps to complete. Hence, broadcast can be done in 2⌈log_5 n⌉ + 1 steps, which is at most 2 steps more than the lower bound in Lemma 1.
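Putting the three parts together, the sketch below (mine) counts the steps of the 2-D scheme, ⌈log_5 n⌉ for Stage 1.1, one alignment step, and ⌈log_5 n⌉ for Stage 2, and checks that the total never exceeds the Lemma 1 lower bound by more than 2.

```python
def ceil_log(base, m):
    s, p = 0, 1
    while p < m:
        p *= base
        s += 1
    return s

def lower_bound_2d(n):                 # Lemma 1 with k = 2
    return ceil_log(5, n * n)

def scheme_steps_2d(n):                # Stage 1.1 + Stage 1.2 + Stage 2
    return ceil_log(5, n) + 1 + ceil_log(5, n)

for n in range(2, 1000):
    assert scheme_steps_2d(n) <= lower_bound_2d(n) + 2
print("the 2-D schedule stays within 2 steps of the Lemma 1 bound")
```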
4. Broadcasting in 3-D Tori
Now we develop our algorithm for a T_{n×n×n} with any n. Without loss of generality, let p_{0,0,0} be the source node. The basic idea is to distribute the broadcast message M in three stages: (i) from p_{0,0,0} to a line SPAN(p_{0,0,0}, (~e_{1,3}), (n)), (ii) from the above line to a plane SPAN(p_{0,0,0}, (~e_{1,3}, ~e_{1,2}), (n, n)), and then (iii) from the above plane to the whole torus. These stages will use ⌈log_7 n⌉ + 1, ⌈log_7 n⌉ + 1, and ⌈log_7 n⌉ steps, respectively. For simplicity, we may use the X-, Y-, and Z-axes to refer to the first, second, and third dimensions, respectively.
4.1. Stage 1: From the Source Node to a Line
Again, this stage is divided into two parts. In
the first part, M will be distributed to one representative
node on each of the n XY -planes. In
the second part, M will be forwarded to the line SPAN(p_{0,0,0}, (~e_{1,3}), (n)).
4.1.1 Stage 1.1 (Distribution)
For simplicity, let n be a multiple of 7, n = 7t. We view the torus as consisting of the following n XY-planes:
SPAN(p_{0,0,i}, (~e_1, ~e_2), (n, n)), i = 0, 1, ..., n − 1.   (2)
We then partition the torus horizontally into 7 cubes C_i, i = −3..3, such that the first cube consists of the first t XY-planes in Eq. (2), the second cube the next t XY-planes, etc. Let's identify the nodes m_i, i = −3..3, where m_0 = p_{0,0,0} and each m_{±j} lies on the central XY-plane of the cube C_{±j}. In one step, node m_0 can forward M to m_{±1}, m_{±2}, and m_{±3} without congestion. The communication paths are illustrated in Fig. 3(a).
Now, observe that each m_i is located at the central XY-plane of the cube C_i, i = −3..3. So we can regard m_i as the source of C_i and recursively perform broadcasting in C_i, until each cube reduces to only one
Figure 3. Stage 1.1 of broadcast in a 3-D torus: (a) the first step, and (b) the second step.
or zero XY -plane. The second step is illustrated in
Fig. 3(b).
At this point, we introduce some notation used in
our later presentation.
Consider a k-D torus. A routing matrix $R = [r_{i,j}]$
is a matrix with entries $-1$, $0$, or $1$ such
that each row indicates a message delivery; if $r_{i,j} = 1$
(resp. $-1$), the corresponding message will travel along
the positive (resp. negative) direction of dimension $j$;
if $r_{i,j} = 0$, the message will not travel along dimension
$j$. A distance matrix $D = [d_{i,j}]$ is an integer
diagonal matrix (all non-diagonal elements are 0); $d_{i,i}$
represents the distance to be traveled by the $i$-th message
described in $R$ along each dimension.
For instance, the six message deliveries in Fig. 3(a)
travel along only the second and third dimensions and thus can be
represented by a routing matrix $R$ whose rows are direction vectors
such as $\vec{e}_{2,3}$ and $\vec{e}_{2,-3}$.
In general, the six nodes receiving $M$ are $m_{\pm 1}$, $m_{\pm 2}$, and $m_{\pm 3}$
(the exact offsets $\alpha_{\pm i} \approx \pm it$ used in deriving them are given in [14]).
So we can use two distance matrices $D^+$ and $D^-$
and represent the 6 message deliveries in Fig. 3(a) by the
matrix multiplications $D^+ \times R$ and $D^- \times R$,
where each row represents one routing path.
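As a concrete illustration of this matrix notation, the following sketch (not taken from the paper; the particular entries of R and the offsets are made-up values) builds a routing matrix and diagonal distance matrices and expands them into per-path displacement vectors.

import numpy as np

# Hypothetical routing matrix for a 3-D torus: each row is one message
# delivery; +1/-1 means "travel along that dimension in the positive/
# negative direction", 0 means "do not use that dimension".
R = np.array([
    [0,  1,  1],   # move along +Y and +Z
    [0,  1, -1],   # move along +Y and -Z
    [0, -1,  1],   # move along -Y and +Z
])

# Distance matrices are diagonal; d[i][i] scales the i-th row of R.
t = 4                               # assumed strip thickness (n = 7t)
D_plus  = np.diag([t, 2 * t, 3 * t])
D_minus = np.diag([-t, -2 * t, -3 * t])

# Each row of D x R is the displacement vector of one routing path.
for D in (D_plus, D_minus):
    for row in D @ R:
        print("path displacement:", row)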
4.1.2 Stage 1.2 (Alignment)
The goal is to "align" the nodes receiving M to the line
(n)). This can be done in one step
by having every p x;y;z holding M send to p z;0;z along
Figure
4. Stage 2 of broadcast in a 3-D
torus: (a) viewing the torus from the perspective
partitioning the torus into 7 cubes C
\Gamma3::3. Stage 3 of broadcast in a 3D torus:
(c) viewing the torus from the perspective
(d) partitioning the torus into 7 cubes C
\Gamma3::3.
the path . This is congestion-free
as communications only happen in individual XY -
planes.
4.2. Stage 2: From a Line to a Plane
In this stage, we will view the torus from a different
perspective; see Fig. 4(a) for an illustration. With this view, we
partition the network along the direction $\vec{e}_1$ into the $n$ planes
listed in Eq. (6). The message $M$ will be sent from the line
$SPAN(p_{0,0,0}; (\vec{e}_{1,3}); (n))$ to $n - 1$ other lines, each
spanned along the same direction $\vec{e}_{1,3}$ but located on a
different plane in Eq. (6). Finally, we will align these
lines to a plane $SPAN(p_{0,0,0}; (\vec{e}_{1,3}, \vec{e}_{1,2}); (n, n))$.
It is easy to send messages from a line of
nodes to another parallel line in one communication
step. For instance, to deliver messages
from line $SPAN(p_{0,0,0}; (\vec{e}_{1,3}); (n))$ to line
$SPAN(p_{2,3,4}; (\vec{e}_{1,3}); (n))$, simply let each $p_{i,0,i}$ (of
the former line) send to $p_{i+2,3,i+4}$ (of the latter line).
One can easily generalize this to a line sending to six
other parallel lines in one step.
4.2.1 Stage 2.1 (Distribution)
This stage is based on a recursive structure as follows.
For simplicity, let $n = 7t$. We partition the torus into 7
cubes $C_{-3..3}$, such that $C_{-3}$ consists of the first
$t$ planes in Eq. (6), $C_{-2}$ the next $t$ planes, etc. (refer
to Fig. 4(b)).

Figure 5. Stage 2 of broadcast in a 3-D torus: (a) the first step, and (b) the second step.
Let $L_0 = SPAN(p_{0,0,0}; (\vec{e}_{1,3}); (n))$ be the line already
owning $M$. By having each $p_{i,0,i} \in L_0$ send $M$
to six suitably chosen nodes, one in each of the other six cubes,
we can distribute $M$ to six other lines $L_{\pm t}$, $L_{\pm 2t}$, and $L_{\pm 3t}$ in one step.
This communication step, as illustrated in Fig. 5(a), is
congestion-free. The resulting line $L_{it}$ is on the central
plane of $C_i$ for all $i = -3..3$. To see this, let's prove
the case of $L_{2t}$: its nodes all lie on the plane indexed $2t$ in Eq. (6),
which is indeed the central plane of $C_2$. The other
cases can be proved similarly. The routing matrix for this step can
be written with rows traveling along the second and third dimensions only.
Using the distance matrices in Eq. (4), the six routing
paths in Fig. 5(a) can be described by the six rows of $D^+ \times R$ and $D^- \times R$.
Next, we can recursively perform the similar line-to-line
distribution in each $C_i$ using $L_{it}$ as the source.
The next step is shown in Fig. 5(b).
4.2.2 Stage 2.2 (Alignment)
From stage 2.1, on each plane in Eq. (6) exactly one
line owns $M$.
The goal is to "align" these $n$ lines to the plane
$SPAN(p_{0,0,0}; (\vec{e}_{1,3}, \vec{e}_{1,2}); (n, n))$. This can be done by
having each node of each such line send $M$ along the second dimension
by a suitable number of hops, which is obviously congestion-free.
This will forward $M$ to the lines lying on the plane
$SPAN(p_{0,0,0}; (\vec{e}_{1,3}, \vec{e}_{1,2}); (n, n))$. Only one step
is used in this stage.
4.3. Stage 3: From a Plane to More Planes
In this stage, we view the torus from another perspective,
which is illustrated in Fig. 4(c). With this view, we
partition the torus along the direction $\vec{e}_1$ into the $n$ planes
listed in Eq. (8). For simplicity, let $n = 7t$. Following the same philosophy
as before, we divide the torus into 7 cubes
$C_{-3..3}$, such that the first cube consists of the
first $t$ planes, the second cube the next $t$ planes, etc.
This is shown in Fig. 4(d).
The central plane in Eq. (8) already owns message
$M$. In this stage, plane-to-plane message distribution
will be performed. For instance, if every node on a plane
sends $M$ along the Y- and Z-axes to nodes that are +3 and +5 hops away,
respectively, then two planes will receive $M$,
which are the $-3$ and $-5$ planes next to the source plane.
Specifically, we will use a routing matrix $R$ whose rows again
travel along the Y- and Z-axes only.
The distance matrices $D^+$ and $D^-$ remain the same. The resulting
6 routing paths can be represented by the 6 rows of $D^+ \times R$ and $D^- \times R$;
the rows reach the planes that are $t$, $2t$, $3t$, $-t$, $-2t$, and $-3t$
planes away from the source plane, respectively.
Now we have 7 planes owning $M$ at the centers of the
cubes $C_{-3..3}$, so the recursion can proceed,
until each $C_i$ reduces to one or zero plane. In total,
$\lceil \log_7 n \rceil$ steps will be used in this stage.
Theorem 1 In a 3-D $T_{n\times n\times n}$ torus with dimension-ordered
routing and all-port capability, broadcast can be
done in $3\lceil \log_7 n \rceil + 2$ steps, which is
at most 4 steps more than optimum.
5. Broadcasting in k-D Tori
In this section, we extend our broadcasting algorithm
to a k-D $T_{n\times \cdots \times n}$ torus. Following the same
philosophy as before, broadcasting in $Z^k$ (a k-D torus)
will be achieved by distributing the broadcast message
$M$ in $k$ stages: from the source to a line, from a line to
a plane, from a plane to a cube, from a cube to a 4-D
cube, etc. In the following, we discuss in general how
stage $i$ works, $i = 1..k$. Still, each stage has two parts: distribution
and alignment. Throughout this section, we
let $p_{0,\ldots,0}$ be the source node and $N_i$ be a vector
of length $i$ equal to $(n, \ldots, n)$.
5.1. Distribution Sub-stage
First, we need to define the set of nodes receiving
the broadcast message after stage $i$.
Definition 3 The set of nodes receiving message $M$
after stage $i$ is defined to be
$U_i = SPAN(p_{0,\ldots,0}; (\vec{e}_{1,k}, \ldots, \vec{e}_{1,k-i+1}); N_i)$.
The number of nodes in $U_i$ is $n$ times that of $U_{i-1}$.
Also, we can think of $U_i$ as $n$ copies of $U_{i-1}$ expanding
along the vector $\vec{e}_{1,k-i+1}$,
where a vector $\vec{v}$ plus a set of nodes $S$ ($\vec{v} + S$) means
a "translation" operation which moves each node of
$S$ to a node relative to the former by a vector of $\vec{v}$.
The following lemma guarantees that $M$ will really be
received by all nodes after stage $k$.
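To make the translation notation and the counting concrete, here is a small sketch (mine, not from the paper) that builds $U_i$ as $n$ translated copies of $U_{i-1}$; as a simplification it uses plain axis vectors instead of the paper's diagonal vectors $\vec{e}_{1,k-i+1}$, which does not change the counting argument.

def translate(v, S, n):
    """Return the set v + S: every node of S moved by vector v, mod n."""
    return {tuple((a + b) % n for a, b in zip(v, node)) for node in S}

def unit(j, k):
    """Unit vector along dimension j (1-based) in a k-D torus."""
    return tuple(1 if d == j - 1 else 0 for d in range(k))

n, k = 5, 3
U = [{(0,) * k}]                      # U_0: just the source p_{0,...,0}
for i in range(1, k + 1):
    e = unit(k - i + 1, k)            # simplified expansion direction
    copies = [translate(tuple(j * c for c in e), U[i - 1], n) for j in range(n)]
    U.append(set().union(*copies))
    assert len(U[i]) == n * len(U[i - 1])   # |U_i| = n * |U_{i-1}|
assert len(U[k]) == n ** k            # after stage k, every node is covered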
The reason these spanning vectors are selected is to ensure the following
two lemmas, which are important later to guarantee
that our routing is congestion-free. The following two
lemmas can be simply proved by the Gaussian Elimination
method.
Lemma 3 For any i and j such that 1 -
the union
of\Omega i and the vector ~e j is
linearly independent.
Lemma 4 For any
of\Omega i and
is linearly independent.
Figure 6. Partitioning $V^i$ along $\vec{s}_i$ into a "super linear path".
To expand $U_{i-1}$ to $U_i$, we need to view the torus
from a different perspective defined as follows.
Definition 4 In stage $i$, we view the torus as
$V^i$, spanned by the vector sequence $\Omega_i \cdot (\vec{s}_i)$.
(Here "$\cdot$" means the concatenation of two sequences.)
The earlier perspectives in Fig. 3(a), Fig. 4(a), and Fig. 4(c)
of a 3-D torus are derived from this formula. The
above perspective provides us with a way to partition the
torus. As we have seen earlier, partitioning is used to
solve our problem in a recursive manner.
Definition 5 In stage $i$, $V^i$ is partitioned along direction
$\vec{s}_i$ into $n$ sub-networks $W^i_j$, indexed by $j$.
The partitioning is illustrated in Fig. 6. Examples of
such partitioning of a 3-D torus at different stages can
be seen in Fig. 3(b), Fig. 4(b) and (d). We can imagine
$W^i_j$ as a "super-node" which is connected through the
vector $\vec{s}_i$ to two "super-nodes" $W^i_{j-1}$ and $W^i_{j+1}$ in a
wrap-around manner. So these super-nodes actually
form a "super linear path" of length $n$. Also note that
$U_{i-1}$ (which already has the message $M$) is resident
in the central super-node $W^i_0$. In the distribution sub-stage,
we try to send $M$ from $W^i_0$ to the other
$n - 1$ super-nodes; the sets of nodes receiving $M$ in each
super-node will have a shape "isomorphic" to $U_{i-1}$.
This is done based on the following recursive structure:
(i) partition the linear path into $2k + 1$ segments, (ii)
send $M$ to a representative super-node in each segment,
and then (iii) perform the distribution in each segment
recursively.
Given a segment of super-nodes of length m, we use
a routing matrix and two distance matrices to describe
the routing in one recursive step.
Definition 6 In stage $i$, the routing matrix $R^i$ is a $k \times k$
matrix whose rows are direction vectors of the forms
$\vec{e}_{1,k}, \vec{e}_{2,k}, \ldots, \vec{e}_{k-1,k}$ and
$\vec{e}_{2,-k}, \ldots, \vec{e}_{k-i+1,-k}$, together with
$-\vec{e}_{k-i+2}, \ldots$ and an identity block $I_{k-i+1}$.
Definition 7 In stage $i$, given a segment of super-nodes
of length $m$, the distance matrices with respect
to $m$ are diagonal matrices $D^+_m$ and $D^-_m$ with diagonal
entries $\alpha_1, \ldots, \alpha_k$ and $\alpha_{-1}, \ldots, \alpha_{-k}$,
respectively (intuitively, $\alpha_{\pm i} \approx \pm im$; the exact values
are derived in [14]).
Using these matrices, the intuitive meanings of the routing
paths in one recursive step can be described by the $2k$
rows of the matrix products $D^+_m \times R^i$ and $D^-_m \times R^i$:
the rows of $D^+_m \times R^i$ are the routing paths to
$W^i_{\alpha_1}, W^i_{\alpha_2}, \ldots, W^i_{\alpha_k}$ (with $\alpha_j \approx jm$),
and the rows of $D^-_m \times R^i$ are the routing paths to
$W^i_{\alpha_{-1}}, W^i_{\alpha_{-2}}, \ldots, W^i_{\alpha_{-k}}$ (with $\alpha_{-j} \approx -jm$).
Consider the first communication step, where $U_{i-1}$
(resident in $W^i_0$) sends $M$ following the second row of
$D^+_m \times R^i$ (which is equal to $\alpha_2 \vec{e}_{2,-k}$). Then, the message
$M$ will be sent to the set $\alpha_2 \vec{e}_{2,-k} + U_{i-1}$, which resides in the super-node $W^i_{\alpha_2}$.
The meanings of the routing paths in the other rows can
be proved similarly, so we omit the details.
Figure 7. Aligning $S$ to $S'$ in $W^i_d$.

Lemma 5 [14] In the distribution sub-stage of stage $i$,
each communication step is congestion-free.
5.2. Alignment Sub-stage
Now each super-node $W^i_d$ already has a
set of nodes (of shape isomorphic to $U_{i-1}$) owning $M$.
The next job is to align these $n$ sets to form $U_i$. In the
following, we show the routing in $W^i_d$. Suppose the set
of nodes owning $M$ after the distribution sub-stage in
$W^i_d$ is $S$. As $S$ is isomorphic to $U_{i-1}$, we can write $S$
as $(v_1, v_2, \ldots, v_k) + U_{i-1}$ for some vector. Also, recall that $U_i$ consists
of $n$ copies of $U_{i-1}$ spanned along direction
$\vec{e}_{1,k-i+1}$. So the set of nodes that we expect to own
$M$ in $W^i_d$ should be $S' = d\,\vec{e}_{1,k-i+1} + U_{i-1}$ (see the
illustration in Fig. 7).
If $S = S'$, there is no need of alignment. Otherwise, some alignment
may be necessary. Intuitively, if we take the difference of
these two vectors,
then the resulting vector can be used to represent the
routing paths leading $S$ to $S'$. However, such a routing
may not be congestion-free. The following lemma
shows that $S$ in fact can be rewritten in a different
form.
Lemma 6
d) U
where means a "don't care''.
Using the new form in Lemma 6, we perform the
following subtractions:
The resulting vectors indicate how to align $S$ to $S'$.
In stage 1, the alignment may go along certain dimensions
(by observing the locations where "don't care" entries
appear in Eq. (12)); since these dimensions are spanned inside $W^1_d$, the routing
only happens inside $W^1_d$. Thus, we only need to
prove that the routing for the alignment inside individual
$W^1_d$'s is congestion-free. The proof is similar to
that of Lemma 5, so we omit the details. Similarly,
in later stages the alignment may go along certain dimensions
(by observing the locations where "don't care" entries
appear in Eq. (13)), and the
routing only happens inside $W^i_d$.
Also note that when $i = k$ the difference reduces to a zero
vector, which means there is no need of an alignment sub-stage in
stage $k$. This leads to the following theorem.
Theorem 2 In a k-D $T_{n\times\cdots\times n}$ torus with dimension-ordered
routing and all-port capability, broadcast can
be done in $k\lceil \log_{2k+1} n \rceil + k - 1$ steps, which is
at most $2(k - 1)$ steps more than optimum.
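The claimed gap can be checked numerically. The sketch below is my own and assumes the standard all-port lower bound of $\lceil \log_{2k+1} n^k \rceil$ steps (the paper's Lemma 1 is not reproduced in this excerpt, so that formula is an assumption here).

def ceil_log(x, base):
    """Smallest integer m with base**m >= x."""
    m, p = 0, 1
    while p < x:
        m, p = m + 1, p * base
    return m

def broadcast_steps(n, k):
    # k distribution sub-stages of ceil(log_{2k+1} n) steps each, plus
    # one alignment step after every stage except the last.
    return k * ceil_log(n, 2 * k + 1) + (k - 1)

def assumed_lower_bound(n, k):
    # Assumed all-port bound: the number of informed nodes can grow by a
    # factor of at most 2k + 1 per step in a k-D torus of n**k nodes.
    return ceil_log(n ** k, 2 * k + 1)

for k in range(2, 6):
    for n in (10, 100, 1000):
        gap = broadcast_steps(n, k) - assumed_lower_bound(n, k)
        assert 0 <= gap <= 2 * (k - 1)   # Theorem 2's bound on the gap
        print(k, n, broadcast_steps(n, k), assumed_lower_bound(n, k), gap)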
6. Conclusions
We compare our scheme for 3-D tori against two
other known schemes: (i) Tsai and McKinley [10],
which works for T 2 d \Theta2 d \Thetaz and requires d+1 or d+m+2
steps when 2 - z - 7 or 7 \Theta 6 m
respectively, and (ii) Park et al. [6, 7], which works for
steps. In Fig. 8, we draw the
numbers of communication steps required by these and
our schemes in a Tn\Thetan\Thetan torus.
We make the following observations. First, in terms
of the number of steps used, the Park scheme is the
best and always coincides with the lower bound; the
TM scheme is better than ours when n is small, but is
outperformed by ours as n becomes larger. Second, in
terms of network sizes allowed, ours has the broadest
applicability because any n is allowed; the TM and the
Park schemes have quite limited applicability, and the
situation gets worse especially as n becomes
larger. Third, in terms of communication capability,
both the TM scheme and ours assume dimension-ordered
routing, while the Park scheme assumes a stronger non-dimension-ordered
routing.
We are currently trying to extend our result to other
port models such as [3].
--R
The torus routing chip.
Optimal broadcasting in all-port wormhole-routed hypercubes
Optimal broadcast in ff-port wormhole-routed mesh networks
A survey of wormhole routing techniques in direct networks.
Linear Algebra with Applications.
A broadcasting algorithm for all-port wormhole-routed torus networks
A broadcasting algorithm for all-port wormhole-routed torus networks
A dilated-diagonal-based scheme for broadcast in a wormhole-routed 2d torus
Algebraic foundations and broadcasting algorithms for wormhole-routed tori
--TR
--CTR
Xiaotong Zhuang , Vincenzo Liberatore, A Recursion-Based Broadcast Paradigm in Wormhole Routed Networks, IEEE Transactions on Parallel and Distributed Systems, v.16 n.11, p.1034-1052, November 2005
Olivier Beaumont , Arnaud Legrand , Loris Marchal , Yves Robert, Pipelining Broadcasts on Heterogeneous Platforms, IEEE Transactions on Parallel and Distributed Systems, v.16 n.4, p.300-313, April 2005
Yuanyuan Yang, A New Conference Network for Group Communication, IEEE Transactions on Computers, v.51 n.9, p.995-1010, September 2002
Yuh-Shyan Chen , Chao-Yu Chiang , Che-Yi Chen, Multi-node broadcasting in all-ported 3-D wormhole-routed torus using an aggregation-then-distribution strategy, Journal of Systems Architecture: the EUROMICRO Journal, v.50 n.9, p.575-589, September 2004
San-Yuan Wang , Yu-Chee Tseng , Sze-Yao Ni , Jang-Ping Sheu, Circuit-Switched Broadcasting in Multi-Port Multi-Dimensional Torus Networks, The Journal of Supercomputing, v.20 n.3, p.217-241, November 2001 | torus;wormhole routing;interconnection network;one-to-all broadcast;parallel processing;collective communication |
334347 | Lower Bounds on Communication Loads and Optimal Placements in Torus Networks. | AbstractFully populated torus-connected networks, where every node has a processor attached, do not scale well since load on edges increases superlinearly with network size under heavy communication, resulting in a degradation in network throughput. In a partially populated network, processors occupy a subset of available nodes and a routing algorithm is specified among the processors placed. Analogous to multistage networks, it is desirable to have the total number of messages being routed through a particular edge in toroidal networks increase at most linearly with the size of the placement. To this end, we consider placements of processors which are described by a given placement algorithm parameterized by $k$ and $d$: We show formally, that to achieve linear communication load in a $d$-dimensional $k$-torus, the number of processors in the placement must be equal to $c k^{d-1}$ for some constant $c$. Our approach also gives a tighter lower bound than existing bounds for the maximum load of a placement for arbitrary number of dimensions for placements with sufficient symmetries. Based on these results, we give optimal placements and corresponding routing algorithms achieving linear communication load in tori with arbitrary number of dimensions. | Introduction
Meshes and torus based interconnection networks have been utilized extensively in the design
of parallel computers in recent years [5]. This is mainly due to the fact that these
families of networks have topologies which reflect the communication pattern of a wide
An extended abstract was presented in the IEEE Symposium IPPS/SPDP 1998, April 1998, Orlando.
† Supported in part by a fellowship from İzmir Institute of Technology,
İzmir, Turkey.
variety of natural problems, and at the same time they are scalable, and highly suitable
for hardware implementation. An important factor determining the efficiency of a parallel
algorithm on a network is the efficiency of communication itself among processors. The
network should be able to handle "large" number of messages without exhibiting degradation
in performance. Throughput, the maximum amount of traffic which can be handled by
the network, is an important measure of network performance [3]. The throughput of an
interconnection network is in turn bounded by its bisection width, the minimum number of
edges that must be removed in order to split the network into two parts each with about
equal number of processors [8].
Here, following Blaum, Bruck, Pifarré, and Sanz [3, 4], we consider the behavior of torus
networks with bidirectional links under heavy communication load. We assume that the
communication latency is kept minimum by routing the messages through only shortest
(minimal length) paths. In particular, we are interested in the scenario where every processor
in the network is sending a message to every other processor (also known as complete
exchange or all-to-all personalized communication). This type of communication pattern
is central to numerous parallel algorithms such as matrix transposition, fast Fourier trans-
distributed table-lookup, etc. [6], and central to efficient implementation of high-level
computing models such as the PRAM and Bulk-Synchronous Parallel (BSP). In Valiant's
BSP-model for parallel computation [14] for example, routing of h-relations, in which every
processor in the network is the source and destination of at most h packets, forms the main
communication primitive. Complete-exchange scenario that we investigate in this paper
has been studied and shown to be useful for efficient routing of both random and arbitrary
h-relations [7, 12, 13].
The network of d-dimensional k-torus is modeled as a directed graph where each node
represents either a router or a processor-router pair, depending on whether or not a processor
is attached at this node, and each edge represents a communication link between
two adjacent nodes. Hence, every node in the network is capable of message routing, i.e.
directly receiving from and sending to its neighboring nodes.
A fully-populated d-dimensional k-torus, where each node has a processor attached,
contains $k^d$ processors. Its bisection width is $4k^{d-1}$, which gives $k^d/2$ processors
on each component of the bisection. Under the complete-exchange scenario, the number
of messages passing through the bisection in both directions is $2(k^d/2)(k^d/2)$. Dividing by
the bisection bandwidth, we find that there must exist an edge in the bisection with a load
of at least $2(k^d/2)(k^d/2)/(4k^{d-1}) = k^{d+1}/8$.
This means that unlike multistage networks, the maximum load on a link is
not linear in the number of processors injecting messages into the network. To alleviate
this problem, Blaum et al. [3, 4] have proposed partially-populated tori. In this model, the
underlying network is toroidal, but the nodes do not all inject messages into the network.
We think of the processors as attached to a (relatively small) subset of nodes (called a
placement), while the other nodes are left as routing nodes. This is similar to the case of a
multistage network: A multistage network with $k \times k$ switches (routing nodes) and $\log_k n$
stages serves $n$ injection points, and utilizes $n \log_k n$ routing nodes [3].
In partially-populated tori, a routing algorithm which utilizes shortest paths is specified
together with the placement. An optimal placement is a placement that achieves linear load
on edges using maximum number of processors possible.
The notion of resource placement in general has been investigated by a number of researchers
such as Bose et al. [5], Alverson et al. [1], F. Pitteli and D. Smitley [11]. Our
aim is to give placements and routing algorithms which will enable efficient communication
between processors, and at the same time reduce the susceptibility of the network to
link faults by reducing the number of messages relying upon a particular edge [3]. This
is achieved by providing routing algorithms in which the number of minimal paths specified
between pairs of processors in the placement is kept large, without compromising the
linearity of load.
Let $E_{max}(P)$ denote the maximum load over all the edges for the placement P. Blaum et
al. give the lower bound $E_{max}(P) \ge (|P|-1)/(2d)$ (1), which means that the maximum load must grow
at least linearly with the size of the placement. For placements constrained
to be of the form $k^i$, they also give placements of sizes $k$ and $k^2$ in 3-dimensional tori
together with routing algorithms. These placements are optimal in the sense that the
lower bounds are actually achieved by the placements.
How do we justify that in general a maximal size placement that can achieve linear load
is $O(k^{d-1})$? If the placement has size $ck^{d-1}$ for some constant c, then mimicking the case
of the fully-populated d-dimensional k-torus,
$2(ck^{d-1}/2)(ck^{d-1}/2)/(4k^{d-1}) = (c^2/8)\,k^{d-1} = O(|P|)$.
This seems to imply that linear load is at least possible for $|P| = ck^{d-1}$. This is a faulty
argument however, as we do not know a priori that the number of edges needed to split P
into two equal size pieces is the same as the bisection width of the whole torus. This may
push the size of an optimal placement above or below $k^{d-1}$. In this paper, we introduce
the concept of bisection width with respect to a placement P, and use its properties to
prove that in a d-dimensional k-torus, the size of an optimal placement is $\Theta(k^{d-1})$. Given
a placement P of maximal size, we also prove that there exists an edge separator of size
$O(k^{d-1})$ which splits the torus into two components with $\Theta(k^{d-1})$ processors of P on each
side. This gives a lower bound of the form
$E_{max} \ge |P|/c$ (2)
for maximum load. In (2), c is a constant independent of d. This is a tighter lower bound
for the load for large d than the lower bound (1).
Finally, we give optimal placements (called linear placements ) achieving the lower
bound (2) and corresponding routing algorithms (Ordered Dimensional Routing (ODR)
and Unordered Dimensional Routing (UDR)) in tori with arbitrary number of dimensions.
Of the two routing algorithms ODR is simpler, but UDR provides fault tolerance by allowing
more routes. We also show how tho extend these to more general placements in tori that
we refer to as multiple linear placements.
The outline of the paper is as follows. Section 2 gives necessary definitions and the
formal statement of the problem. In section 3, a lower bound on the maximum load on an
edge is given, which is also a generalization of the lower bound given by [3]. This bound,
along with the notion of bisection width with respect to a placement, is used to get an
upper bound on the number of processors in an optimal placement. We introduce the
notion of ff\Gammaseparator with respect to a placement in section 4, and use it to give a new
lower bound on the maximum load which is independent of the dimension parameter in
section 5. Finally, in sections 6-8, we define and analyze an important class of placements
called linear placements, and give associated routing algorithms which achieve linear load
and fault tolerance. Section 9 includes conclusions and some future considerations.
Preliminaries and Problem Definition
In this section, we start out with the problem definition, and follow it by a sequence of
formal definitions and terminology that will be used in the rest of the paper.
Problem Definition
Our aim is to find placements and associated routing algorithms in the d-dimensional k-torus
k that have linear message load (in number of processors in the placement) on edges
under the complete exchange scenario. Specifically, we like to devise a placement P , and a
routing algorithm A for P for which
The d-dimensional k-torus is a directed graph T d
E), with vertex set
where ZZ k denotes the integers modulo k, and edge set
9j such that a
k has a total of k d nodes. Each node has two neighbors in each dimension, for a total
of 2d neighbors. Directed edges of T d
are also referred to as links.
placement P of processors in T d
E) is a subset of V .
We use the term node for a generic element of the vertex set of T d
k . A node with a processor
attached is simply called a processor.
{Routing Algorithm}
Let P be a placement in $T^d_k$. A routing algorithm A is a subset $C^A_{\vec{p}\rightarrow\vec{q}}$ of the set of all shortest
paths between $\vec{p}$ and $\vec{q}$ for every pair $\vec{p}, \vec{q} \in P$ (see Figure 1).
The routing algorithm A is used to deliver packets from $\vec{p}$ to $\vec{q}$: when $\vec{p}$ needs to
communicate with $\vec{q}$, a shortest path in $C^A_{\vec{p}\rightarrow\vec{q}}$ is selected randomly with uniform probability.
For any link l, we denote the set of paths in $C^A_{\vec{p}\rightarrow\vec{q}}$ going through l by $C^A_{\vec{p}\rightarrow l\rightarrow\vec{q}}$, and use
the following definition of load as given in [3].
Given a placement P in a $T^d_k$ along with a routing algorithm A, the load of an edge l is
defined as
$E(l) = \sum_{\vec{p},\vec{q} \in P} |C^A_{\vec{p}\rightarrow l\rightarrow\vec{q}}| / |C^A_{\vec{p}\rightarrow\vec{q}}|$. (3)
The maximum value of E(l) for a network with placement P and a routing algorithm A is
called the maximum load and denoted by $E_{max}$. Thus $E_{max} = \max_{l \in E} E(l)$.
Considering the expression (3) for E(l), the more paths the routing algorithm provides
between any two processors, the smaller the load on any edge that is used to route messages
between these processors. In addition to this, availability of a large number of choices means
better fault tolerance.
We shall consider algorithms which use minimal (shortest) paths. Minimal paths are
associated with the notion of cyclic distance and Lee distance, which we define next.
Definition 6 {Cyclic Distance, Lee Distance}
Given three integers, i, j and k, the cyclic distance between i and j modulo k is given by
$d_c(i, j) = \min(|i - j|,\, k - |i - j|)$,
where the equivalence classes modulo k are taken to be $0, 1, \ldots, k-1$. The Lee distance
between two nodes $\vec{p}, \vec{q} \in T^d_k$ is the sum of the cyclic distances between the coordinates of $\vec{p}$
and $\vec{q}$.
The Lee distance between $\vec{p}, \vec{q} \in T^d_k$ is the length of a shortest path between $\vec{p}$ and $\vec{q}$ on
the torus [5, 9].
The bisection width of a graph is the minimum number of edges which must be removed in
order to split the node set into two parts of equal (within one) cardinality.
Definition 8 fBisection Width with respect to a Placementg
The bisection width with respect to a placement P of T d
E) is the minimum number
of edges which must be removed from E in order to split V into two parts each of which
containing an equal (within one) number of processors in P .
We denote by @ b P a minimal cardinality set of edges of T d
which needs to be removed
to bisect P . Thus j@ b P j is the bisection width with respect to the placement P .
Definition 9 fff-separator Width with respect to a Placementg
An ff-separator with respect to a placement P in T d
k is a set of edges whose removal splits
the graph into two parts containing (approximately) ffjP j and (1 \Gamma ff)jP j processors, for
1. The ff-separator width of T d
k with respect to a placement P is the size of a
minimal ff-separator with respect to P .
We denote the set of edges in an ff-separator by @ ff P . Thus j@ ff P j is the ff-separator
width of T d
k with respect to P . Note that when are equivalent.
We use the notation $f(k) = O(k^{d-1})$ to mean that $f(k) \le ck^{d-1}$
for some constant $c > 0$ whenever $k \ge k_0$, and $f(k) = \Omega(k^{d-1})$ to mean that
$f(k) \ge ck^{d-1}$ for infinitely many values of k. We also
loosely use $f(k) = \Theta(k^{d-1})$ when both hold.
The problem we are interested in is the construction of placements P and associated
routing algorithms in $T^d_k$ such that under the complete exchange scenario,
$E_{max}(P) \le c|P|$
for some constant c.
For $d = 3$, Blaum et al. [3, 4] have investigated placements with $k^{d-1}$ processors.
Evidently, placements with a provably maximum possible number of processors are desirable.
This raises another important question which we shall address: what is the maximum
number of processors a placement could have on $T^d_k$ without compromising linear load on
the edges?
Another important issue is fault tolerance. Specifically, the routing algorithm should
provide multiple routing paths between each pair of processors so that, if any of the links
fails, the network will remain functional by routing the messages through paths which do
not include the defective link. Consequently we also address the following problem: is it
possible to construct optimal placements which are at the same time fault tolerant?
In the following sections, we analyze lower bounds for maximum load and study the
above questions.
Figure 1: A placement of 3 processors on $T^2_3$. Among the links, the ones on specified shortest paths between the processors are highlighted.
3 A General Lower Bound for Maximum Load
We start out with an important lemma which will prove to be a very useful tool in the
subsequent sections. The lower bound for maximum load originally given by Blaum et al.
[3] is
$E_{max} \ge (|P|-1)/(2d)$. (4)
The following lemma gives a more general form of (4).
Lemma 1 Let P be a placement in a $T^d_k$, let $S \subseteq P$, and let $\partial S$ be the set of all
edges each connecting a node in S with another node not in S. Then
$E_{max} \ge 2|S|(|P|-|S|)/|\partial S|$. (5)
Proof The total number of messages exchanged between processors in S and processors
in $P \setminus S$, counted in either direction, is $2|S|(|P|-|S|)$ under the all-to-all personalized communication
scenario. Also, these messages must go through one of the edges in $\partial S$. The average
number of messages going through an edge in $\partial S$ is $2|S|(|P|-|S|)/|\partial S|$, and the lemma follows.
It is easy to see that (5) reduces to (4) if the set S is taken to contain only one processor,
so that $|S| = 1$ and $|\partial S| = 4d$. The lower bound (5) is valid independent of the routing
algorithm used. Another interesting form of (5) that we shall subsequently make use of is
obtained when the set S consists of half of the processors in P, i.e., $|S| = |P|/2$, which yields
$E_{max} \ge |P|^2 / (2\,|\partial_b P|)$. (6)
Note that in this case $\partial S$ becomes $\partial_b P$, which is the bisection width of $T^d_k$ with respect
to placement P. Next we give an upper bound on the size of $\partial_b P$, which we then use to
calculate the maximum number of processors an optimal placement can contain.
Proposition 1 Any subset P of nodes of the d-dimensional k-torus can be bisected by removing
$O(k^{d-1})$ edges of $T^d_k$. In particular, any subgraph of $T^d_k$ has bisection width $O(k^{d-1})$.
Proof Omitted. See Appendix I.
The constant in $O(k^{d-1})$ of Proposition 1 is no larger than 6d when we consider directed
edges. As a particular case of the proposition we view a placement P as a subgraph of $T^d_k$
with no edges. Then
Corollary 1 The d-dimensional k-torus $T^d_k$ has bisection width of at most $6dk^{d-1}$ with
respect to any placement P, i.e., $|\partial_b P| \le 6dk^{d-1}$.
Although the assertion of Proposition 1 appears intuitive because of its geometric nature,
it is easy to come up with examples of general graphs for which the
bisection width of subgraphs can become arbitrarily far from that of the original graph.
As an example, two copies of the complete graph $K_{2n}$ on 2n nodes joined by a single edge
has bisection width 1. Its subgraphs with 2n nodes have bisection widths ranging from 1
to $n^2$, depending on how evenly the 2n nodes are distributed among the two copies of
$K_{2n}$.
Maximum Placement Size
An upper bound for the maximum number of processors an optimal placement can contain
can now be obtained by substituting the bound for $|\partial_b P|$ given in the corollary into the
inequality (6), while at the same time insuring that $E_{max} \le c|P|$, i.e., the load remains
linear in the number of processors in the placement. This gives
$|P|^2 / (12dk^{d-1}) \le E_{max} \le c|P|$, so that $|P| \le 12cd\,k^{d-1}$
for fixed d. That is, the size of an optimal placement
in $T^d_k$ is $O(k^{d-1})$. Thus, we are justified in seeking placements which have $ck^{d-1}$ processors
for some constant c. However, for ease of exposition, we will give analyses of placements of
size $k^{d-1}$ first (i.e., $c = 1$). Subsequently, we also consider the construction of placements
with $c > 1$.
4 α-Separator Width with respect to a Placement
From Corollary 1, we know that the bisection width of $T^d_k$ with respect to a placement P is
no larger than $6dk^{d-1}$. The lower bound on maximum load that one can obtain using this
result is a function of the dimension d, however. In this section we show that given a placement
on $T^d_k$ with $|P| = \Theta(k^{d-1})$, it is possible to divide the network into two parts each having
$\Theta(|P|)$ processors by removing $O(k^{d-1})$ edges, where the constants involved in $\Theta(|P|)$ and
$O(k^{d-1})$ are independent of d. This will be useful in obtaining a better lower bound for
load on edges than (4). We give a proof of this result in Theorem 1, assuming that the
numbers of processors placed in subtori of $T^d_k$ are given by "reasonable" functions of k, such
as polynomials. Theorem 1 holds in general without this assumption; see Appendix II.
Theorem 1 Relative to a placement P of size $\Theta(k^{d-1})$, $T^d_k$ has an edge separator $\partial_\alpha P$ of
size $O(k^{d-1})$ which splits $T^d_k$ into two parts each with $\Theta(k^{d-1})$ processors (specifically, $\alpha k^{d-1}$
and $(1-\alpha)k^{d-1}$ processors, respectively, for some $\alpha$, $0 < \alpha < 1$), where the constants are
independent of d.
Proof
Let us fix a dimension. There are k copies of a lower dimensional torus $T^{d-1}_k$ embedded
along this dimension, indexed from 0 through $k - 1$. Let $a_j(k)$ be the number of processors
P has in the j-th subtorus, $0 \le j \le k-1$. There are two cases to consider, depending on
whether or not some $a_i(k)$ is $\Theta(k^{d-1})$. First we assume that there
is such an $a_i(k)$. The situation is especially simple if there are two indices $i \ne j$ with
$a_i(k) = \Theta(k^{d-1})$ and $a_j(k) = \Theta(k^{d-1})$: we remove the edges between adjacent subtori chosen so that
subtorus i falls on one side and subtorus j on the other, separating $T^d_k$ into two parts each with
$\Theta(k^{d-1})$ processors. The total number of edges removed is $\Theta(k^{d-1})$.
an argument as above can again be used to split P by removing 4k d\Gamma1 edges. We can thus
assume that P
time we cannot separate T d
k using the argument
above however. Now let S
k be the subtorus for which a i
k can be bisected relative to the processors it contains by removing no more than
6dk d\Gamma2 edges. Therefore T d
k has a bisection relative to P obtained by removing at most
6dk d\Gamma2 +4d
edges. This is because there are a total of 4d
incident
on the processors in all the subtori excluding S
k . Therefore
a
whenever d ! k. Thus, it is possible to split the T d
k into two parts, each containing \Theta(k
processors by removing no more than O(k (actually o(k edges.
Now consider the case that no $a_i(k)$ is $\Omega(k^{d-1})$ while at the same time
$\sum_i a_i(k) = \Theta(k^{d-1})$. We outline the proof here for the case when the
$a_i(k)$ are polynomial functions of k (it follows from Proposition 2 in Appendix II that this
assumption is not necessary). Let $U = \{i : a_i(k) = \Theta(k^{d-2}),\ 0 \le i \le k-1\}$. Note that
$|U|$ must be $\Omega(k)$, since otherwise $|P|$ could not be $\Theta(k^{d-1})$ when the $a_i(k)$ are polynomial
functions, and all $a_i$ whose indices are in U must each have $\Theta(k^{d-2})$
processors. Furthermore this forces $|U| = \Theta(k)$. Thus we can find $\rho k$ subtori that together have
$\Theta(k^{d-1})$ processors for some constant $\rho$, $0 < \rho < 1$. Consequently we can split $T^d_k$ into two parts each having
$\Theta(k^{d-1})$
processors by removing $O(k^{d-1})$ edges.
5 An Improved Lower Bound for Maximum Load
We have shown in the previous section that given a placement P of size $c_1 k^{d-1}$, it is possible
to split $T^d_k$ into two parts having $\alpha|P|$ and $(1-\alpha)|P|$ processors by removing at most $c_2 k^{d-1}$
edges, for some constants $c_1$, $c_2$, and $\alpha$, $0 < \alpha < 1$. We use this result to establish a lower
bound on load which shows that the lower bound $E_{max} \ge (|P|-1)/(2d)$ of (4) is not tight
as the parameter d grows larger.
Taking $S$ with $|S| = \alpha|P|$ and $|\partial S| \le c_2 k^{d-1}$ in (5), we have
$E_{max} \ge 2\alpha(1-\alpha)|P|^2 / (c_2 k^{d-1}) = (2\alpha(1-\alpha)c_1/c_2)\,|P| = c\,|P|$.
Note that the constant c is independent of the parameter d. Hence,
this lower bound comes to characterize the quantity $E_{max}$ more closely than (4) as the
parameter d grows. We will use this lower bound to gauge the optimality of the placements
and routing algorithms that we give next.
6 Linear Placements
We have established in section 3 that optimal placements have $\Theta(k^{d-1})$ processors. In this
section, we introduce the notion of a linear placement: these are placements in which the
coordinates of each processor in P satisfy a particular type of linear equation over $\mathbb{Z}_k$.
Definition 11 A placement P on $T^d_k$ consisting of the nodes $(x_1, \ldots, x_d)$ which satisfy
$c_1 x_1 + c_2 x_2 + \cdots + c_d x_d = c \pmod k$, (7)
where $c, c_i \in \mathbb{Z}_k$ and at least one of the $c_i \in \mathbb{Z}_k$ is relatively prime to k, is called a linear
placement.
For simplicity, we will use placements where $c_1 = c_2 = \cdots = c_d = 1$, i.e.,
$x_1 + x_2 + \cdots + x_d = c \pmod k$, even though our
analyses apply to linear placements in general form (i.e., (7)) provided that $c_1$ and $c_d$ are
relatively prime to k. Note that there are exactly $k^{d-1}$ processors satisfying the expression
$x_1 + \cdots + x_d = c \pmod k$ for a specific $c \in \mathbb{Z}_k$. Originally, linear placements of this
form were used in three dimensional tori by Blaum et al. [3, 4], where they were called
shifted diagonal placements.
We can also specify placements of size $tk^{d-1}$ where t is a fixed integer less than k. For
instance, the placement $P = \{(x_1, \ldots, x_d) : x_1 + \cdots + x_d \in \{0, 1, \ldots, t-1\} \pmod k\}$
has $tk^{d-1}$ processors. We shall call such placements multiple linear placements.
Remark
We would like to point out that linear (and multiple linear) placements themselves do not
guarantee the linearity of the load on edges. Linear only refers to the fact that the coordinates
of the processors in the placement satisfy a linear equation over $\mathbb{Z}_k$. We still need
to construct routing algorithms which enable communication between pairs of processors
in a way that yields load that is linear in $|P|$.
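For illustration, the small sketch below (not the authors' code) enumerates the linear placement $x_1 + \cdots + x_d \equiv c \pmod k$ and its multiple linear variant, and checks the $k^{d-1}$ and $tk^{d-1}$ counts stated above.

from itertools import product

def linear_placement(k, d, c=0):
    """Nodes of T^d_k whose coordinates satisfy x_1 + ... + x_d = c (mod k)."""
    return [x for x in product(range(k), repeat=d) if sum(x) % k == c]

def multiple_linear_placement(k, d, t):
    """Union of t parallel linear placements (residues 0, 1, ..., t-1)."""
    return [x for x in product(range(k), repeat=d) if sum(x) % k < t]

k, d, t = 5, 3, 2
P = linear_placement(k, d)
assert len(P) == k ** (d - 1)                              # exactly k^(d-1) processors
assert len(multiple_linear_placement(k, d, t)) == t * k ** (d - 1)
print(len(P), P[:5])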
In the remaining sections, we will specify different routing algorithms and analyze their
maximum communication load on edges. As we have mentioned earlier, the routing algorithms
will use minimal (shortest) paths between processors. To deliver a message from
processor $\vec{p}$ to $\vec{q}$, the value of $\vec{p}$ in each dimension is "corrected" towards the corresponding
value in $\vec{q}$ by the amount and direction ($\pm$) dictated by the shortest cyclic distance
between the values in that dimension. The exact way of correcting the dimensions to route
the packets is specified by the routing algorithm.
We consider two classes of routing algorithms and the analysis of the load in each case
both for linear and multiple linear placements: Ordered Dimensional Routing (ODR) and
Unordered Dimensional Routing (UDR).
7 The Ordered Dimensional Routing Algorithm (ODR)
The algorithm is simple. Given a placement P on $T^d_k$, to route a packet from
$\vec{p} = (p_1, \ldots, p_d)$ to $\vec{q} = (q_1, \ldots, q_d)$:

for i := 1 to d do
    correct $p_i$ to $q_i$ in the direction of shortest cyclic distance;

That is, the routing path will include the following nodes:
$(p_1, \ldots, p_d) \rightarrow (q_1, p_2, \ldots, p_d) \rightarrow (q_1, q_2, p_3, \ldots, p_d) \rightarrow \cdots \rightarrow (q_1, \ldots, q_d)$.
Note that if k is odd, $|C^{ODR}_{\vec{p}\rightarrow\vec{q}}| = 1$; there is only 1 path specified by the ODR algorithm
for any given $\vec{p}$ and $\vec{q} \in P$. However, when k is even the ODR algorithm may
result in multiple paths between some pairs of processors in the placement. To aid in the
analysis, we will use the following (restricted) version which ensures the existence of only
one canonical routing path between any given pair of processors regardless of the parity of k.

for i := 1 to d do
begin
    if there is more than one way of correcting $p_i$ then
        pick the path that corrects $p_i$ in the (+) direction (mod k);
    correct $p_i$ to $q_i$ in the direction of shortest cyclic distance;
end

Thus if there are two choices for some coordinate pair, the algorithm routes through
the (+) direction. The shortcoming of having only one path between
a pair of processors is the lack of fault-tolerance in the network. Specifically, if an edge
over which a pair of processors communicate fails then the pair will no longer be able to
exchange messages. In section 8 we look at another routing algorithm which does not suffer
from this limitation.
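Here is a compact sketch (my own, under the canonical tie-breaking assumption that ties are corrected in the + direction, as described above) of the ODR path construction:

def odr_path(p, q, k):
    """Ordered Dimensional Routing: correct dimensions 1..d in order,
    each along the shortest cyclic direction (ties broken toward +)."""
    path, cur = [tuple(p)], list(p)
    for i in range(len(p)):
        forward = (q[i] - cur[i]) % k             # hops in the + direction
        backward = (cur[i] - q[i]) % k            # hops in the - direction
        step = 1 if forward <= backward else -1   # canonical: prefer + on a tie
        while cur[i] != q[i]:
            cur[i] = (cur[i] + step) % k
            path.append(tuple(cur))
    return path

# Example in T^3_5: each hop changes exactly one coordinate, and the total
# path length equals the Lee distance between the endpoints.
print(odr_path((0, 4, 2), (3, 0, 2), 5))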
7.1 Load Analysis for Linear Placements with ODR
Theorem 2 Given a linear placement $P = \{(x_1,\ldots,x_d) : x_1 + \cdots + x_d = c \pmod k\}$ on $T^d_k$, the
Ordered Dimensional Routing Algorithm results in linear load on edges.
Proof Since the ODR algorithm ensures one path between each pair of processors, each
denominator in the expression (3) for E(l) is 1. Thus, in order to compute $E_{max}$, we need
only to count the (maximum) number of pairs of processors that communicate through a
specific edge. Without loss of generality, consider an edge $l \in E$ in dimension s, where
$l = \langle (i_1, \ldots, i_s, \ldots, i_d),\ (i_1, \ldots, i_s + 1, \ldots, i_d) \rangle$.
We will count pairs of processors which communicate using l. Let $\vec{p}$ and $\vec{q} \in P$ be two
processors where $\vec{p}$ sends messages to $\vec{q}$ through l. Since the ODR algorithm is used, we must
have $p_j = i_j$ for $j = s+1, \ldots, d$
and $q_j = i_j$ for $j = 1, \ldots, s-1$.
Since $\vec{p}$ and $\vec{q}$ are both in P and P is linear, the coordinates also satisfy
$p_1 + \cdots + p_s + i_{s+1} + \cdots + i_d = c \pmod k$ (8)
and
$i_1 + \cdots + i_{s-1} + q_s + \cdots + q_d = c \pmod k$. (9)
Note that if $\vec{p}$ and $\vec{q}$ are to use the edge l then, in order to ensure that messages follow
shortest paths, we must also have that the shortest cyclic route from $p_s$ to $q_s$ passes through $i_s$ and $i_s + 1$,
by the property of cyclic distance. The number of processors satisfying equation (8) is less
than or equal to $k^{s-1}$, while the number satisfying (9) is less than or equal to $k^{d-s}$. Thus, the total
number of processor pairs cannot be more than $k^{s-1} \cdot k^{d-s} = k^{d-1} = |P|$,
and the maximum load is linear in $|P|$.
Note that we are actually overcounting, since we have not taken the restrictions on the s-th
dimension (such as $p_s \le i_s$) into account when
we count the solutions of (8) and (9). These conditions affect the choices of $p_s$ and $q_s$.
A more accurate expression (though of the same order) can be obtained by paying closer
attention to these parameters. To determine the number of different ways $p_s$ and $q_s$ may
be chosen, consider the 1-dimensional k-subtorus (ring) on which the edge l lies. Assume
first that k is even. Without loss of generality, also assume that the nodes in the ring are
enumerated from 0 to $k - 1$ and that l is the edge from node $k/2 - 1$ to node $k/2$. Then, the ODR algorithm will use l to
deliver messages from node 0 to only node $k/2$ on this ring. Similarly, it will use edge l for
messages from node 1 to node $k/2$ and from node 1 to node $k/2 + 1$, and so on; finally, messages
from node $k/2 - 1$ can be sent using l to any node indexed $k/2$ to $k - 1$. The total
number of choices for $p_s$ and $q_s$ will therefore be $1 + 2 + \cdots + k/2 = \frac{1}{2}\cdot\frac{k}{2}(\frac{k}{2}+1)$.
Now assume that k is odd. In this case, messages from
node 1 can be delivered through l to only one node, while messages from node 2 can
be routed through l to two nodes, and so on. Thus there are a total of
$\frac{1}{2}\cdot\frac{k-1}{2}(\frac{k-1}{2}+1)$ choices for $p_s$ and $q_s$ when k is odd.
Therefore the number of solutions to equations (8) and (9) which satisfy the conditions
on the s-th dimension is $O(k^2)\cdot k^{d-3} = O(k^{d-1})$ in either case.
Therefore, regardless of the parity of k, for a linear placement P with ODR, $E_{max} = O(|P|)$.
Theorem 3 Multiple linear placements along with ODR algorithm on T d
k results in linear
load on edges.
Proof The analysis is conceptually similar to that of the previous section. Consider a
multiple linear placement
E) with ODR where for some
fixed constant
Note that jP As before, consider an edge l 2 E of the form
and a pair of processors ~ p , ~q 2 P , which communicate using l. Since ODR algorithm is
used, we must have
and
with and ~q are
both in P , each must satisfy an equation among
and
The number of solutions to equations (10) is no more than tk . Similarly, the number
of solutions to equations (11) is no more than tk d\Gammas . Therefore, the total number of
processor pairs communicating through l is bounded by t 2 k d\Gamma1 , which is linear in jP j for
constant t. 2
8 Unordered Dimensional Routing (UDR)
We mentioned in section 7 that the ODR algorithm suffers from a lack of fault-tolerance, since
there is only one path between each pair of processors. In this section, we introduce Unordered
Dimensional Routing (UDR), which eliminates this problem. The algorithm is as
follows: to route a packet from $\vec{p}$ to $\vec{q}$,

for i := 1 to d do
begin
    select a number j from the set $\{1, \ldots, d\}$ that has not been used before;
    correct $p_j$ to $q_j$ in the direction of shortest cyclic distance;
end

As was the case in ODR, a dimension is corrected completely before another is selected.
Unlike ODR, however, the order in which the dimension to be corrected next is picked
is arbitrary. This algorithm thus provides multiple paths for each pair of processors and
improves the fault-tolerance of the system. If $\vec{p}$ and $\vec{q}$ are two processors differing in s
dimensions, then there will be $s!$ different paths from $\vec{p}$ to $\vec{q}$ in UDR, i.e., $|C^{UDR}_{\vec{p}\rightarrow\vec{q}}| = s!$. Below
we show that the UDR algorithm results in linear load on edges.
For a linear placement P which uses UDR algorithm, the load on an edge l is
~ p2P;~q2P
Since there exist some pairs of processors for which jC UDR
~ p2P;~q2P
The upper bound on the right hand side of this inequality specifies the number of messages
sent between pairs of processors which could "potentially" route their messages through
l. Such processors can also use other paths that do not include l, since UDR algorithm
provides multiple routing paths.
Theorem 4 Given a linear placement
Unordered Dimensional Routing Algorithm results in linear load on edges.
Proof Without loss of generality l 2 E is of the form
l =! (i
Suppose ~ p and ~q 2 P are two processors communicating through l. Our aim is to find an
upper bound on the number of pairs of processors communicating through l. A moment of
thought reveals that ~ p and ~q must have either arbitrary or q
arbitrary, for j 6= s. This means the number of possible choices in dimension j is less than
2k. Hence, the total number of choices for all of the coordinates of ~p and ~q excluding s is
less than 2 and ~q are both in P they satisfy
and
There are at most one solution pair for each one of 2 choices (as before, the co-ordinates
in the s-th dimension are restricted by the conditions p s
Therefore, the total number of processor pairs communicating
through l is bounded by 2
f
~ p2P;~q2P
which is linear in jP fixed d. 2
8.2 Multiple Linear Placements with UDR
Theorem 5 Multiple linear placements along with the UDR algorithm on $T^d_k$ result in linear
load on edges.
Proof We have $|P| = tk^{d-1}$. As before, consider a processor pair $\vec{p}, \vec{q} \in P$ which
communicate using l, where
$l = \langle (i_1, \ldots, i_s, \ldots, i_d),\ (i_1, \ldots, i_s + 1, \ldots, i_d) \rangle$.
The number of choices for processor pairs using l is strictly less than $2^{d-1}k^{d-1}$, as in the
case of linear placements with UDR. Since there are t equations for each of $\vec{p}$ and $\vec{q}$, there
are $t^2$ solutions for every one of the $2^{d-1}k^{d-1}$ choices of pairs. Therefore the number of pairs of
processors communicating through l is less than $t^2 2^{d-1} k^{d-1}$, which is linear in $|P|$ for any
fixed constant $t < k$.
9 Conclusion
Following the work of Blaum, Bruck, Pifarré, and Sanz [3, 4], we have considered communication
in partially-populated torus networks in terms of placements of processors and
associated routing algorithms. We have provided lower bounds for the maximum load under
the all-to-all communication scenario, and found bounds on the size of an optimal placement.
We have shown that arbitrary placements can be bisected by removing a set of edges
of the same order as the bisection width of the torus. We then provided optimal placements
of size $\Theta(k^{d-1})$ on the d-dimensional k-torus using what we call linear and multiple linear
placements, and gave load analyses of each under two different routing algorithms.
There are some interesting combinatorial properties of placements still to be resolved.
Among these are the characterization of optimal placements in terms of restrictions to
subtori and an extensive analysis of the properties of edge separators of tori relative to
optimal placements.
--R
The Tera Computer System.
Resource Placement in Torus-Based Networks
On Optimal Placements of Processors in Tori Networks.
On Optimal Placements of Processors in Fault-Tolerant Tori Networks
Lee Distance and Topological Properties of k-ary n-cubes
Direct Bulk-Synchronous Parallel Algorithms
Introduction to Parallel Algorithms and Architectures: Arrays
The Theory of Error-Correcting Codes
A Survey of Wormhole Routing Techniques in Direct Networks.
Analysis of a 3D Toroidal Network for a Shared Memory architecture.
Efficient Communication Using Total Exchange.
Routing on Triangles
A Bridging Model for Parallel Computation.
--TR | routing;torus;interconnection network;edge separator;bisection;placement |
334348 | A Minimal Universal Test Set for Self-Test of EXOR-Sum-of-Products Circuits. | AbstractA testable EXOR-Sum-of-Products (ESOP) circuit realization and a simple, universal test set which detects all single stuck-at faults in the internal lines and the primary inputs/outputs of the realization are given. Since ESOP is the most general form of AND-EXOR representations, our realization and test set are more versatile than those described by other researchers for the restricted GRM, FPRM, and PPRM forms of AND-EXOR circuits. Our circuit realization requires only two extra inputs for controllability and one extra output for observability. The cardinality of our test set for an $n$ input circuit is ($n+6$). For Built-in Self-Test (BIST) applications, we show that our test set can be generated internally as easily as a pseudorandom pattern and that it provides 100 percent single stuck-at fault coverage. In addition, our test set requires a much shorter test cycle than a comparable pseudoexhaustive or pseudorandom test set. | INTRODUCTION
The large increase in the complexity of ASICs has led to a
much greater need for circuit testability and Built-In-Self-Test
(BIST) [1]. The testability properties of different forms of two-level
networks have attracted many researchers
[2, 3, 4, 5, 6]. The forms investigated include Positive Polarity
Reed-Muller (PPRM) [2], Fixed Polarity Reed-Muller (FPRM)
[6], Generalized Reed-Muller (GRM) [7], and EXOR-Sum-of-
Products (ESOP) [3]. All of the canonical Reed-Muller forms
(PPRM, FPRM, and GRM) have restrictions on the allowed polarities
of variables or on the allowed product terms. ESOP, on
the other hand, has no restrictions, and is formed by combining
arbitrary product terms using EXORs [3, 7]. Therefore, ESOP
is the most general form of 2-level AND-EXOR networks.
The Reed-Muller expansion of an arbitrary function is:
, where x
is a literal term that can be a variable
xn or its negation xn , and c n
is a constant term that
can be '0' or `1'. PPRM, also called the Reed-Muller form,
is the most restricted form in that only positive polarities are
allowed for input variables. FPRM allows only one polarity
for each input variable. GRM has no restrictions on the
allowed polarities of variables but does not allow the same
U. Kalay, Marek A. Perkowski, Douglas V. Hall are with the Dept. of
Electrical and Computer Engineering, Portland State University, Port-
land,
E-mail: fugurkal,mperkows,doughg@ee.pdx.edu.
For more information on obtaining reprints of this article, please send e-mail
to: tc@computer.org, and reference IEEECS Log Number 107101.
set of variables in more than one product term. For exam-
since the variables
appear with only positive polarities in the expression.
is an FPRM because negative polarity
exists and each variable appears with only one polar-
either negative or positive.
is a GRM because the variable x 2 appears with both positive
and negative polarities. 3 is not a
GRM but an ESOP because the same set of variables are used
in more than one product term. We can write the following
inclusion relationship for ESOP and the Reed-Muller forms;
PPRM FPRM GRM ESOP .
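To make the AND-EXOR forms concrete, here is a small sketch (the example functions are my own, not the paper's) that evaluates an ESOP over all input assignments and confirms that an ESOP with repeated variable sets can realize the same function as an equivalent PPRM.

from itertools import product

def esop(terms, assignment):
    """Evaluate an EXOR-sum-of-products.  Each term is a dict mapping a
    variable index to the required literal polarity (True = positive)."""
    value = 0
    for term in terms:
        prod = all(assignment[v] == pol for v, pol in term.items())
        value ^= int(prod)
    return value

# Hypothetical example: x1 x2 XOR x1' x2 XOR x1 x3 -- the first two terms
# use the same variable set {x1, x2}, so this is an ESOP but not a GRM.
esop_terms = [{0: True, 1: True}, {0: False, 1: True}, {0: True, 2: True}]

# The same function simplifies to x2 XOR x1 x3 (a PPRM), because
# x1 x2 XOR x1' x2 = x2.
pprm_terms = [{1: True}, {0: True, 2: True}]

for bits in product([False, True], repeat=3):
    assert esop(esop_terms, bits) == esop(pprm_terms, bits)
print("both forms realize the same function")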
Due to the total freedom of input polarity and product term
selection, the minimum number of product terms required to
represent an arbitrary function in ESOP form can never be
larger than the minimum number of product terms in any of the
canonical Reed-Muller forms [3]. This fact can be seen from
the arithmetic benchmark circuits given in Table 1, which was
presented in [7]. Notice that PPRM yields the largest number
of product terms since it is the most restricted form of AND-
EXOR networks. In most cases, an ESOP realization gives a
significantly smaller number of product terms even over the
least restricted Reed-Muller form, GRM. This observation provides
strong motivation for developing a testable ESOP implementation
Another aspect of AND-EXOR representations presented in
Table 1 is that AND-EXOR representations (especially ESOP)
usually yield fewer product terms than a SOP representation.
As we will illustrate later, this may be an area and delay advantage
when realizing the function in 2-level form.

Function: PPRM FPRM GRM ESOP SOP
log8: 253 193 105 96 123

Table 1: The number of product terms required to realize some
arithmetic functions for different forms.

Our main contributions described in this paper are a highly
testable ESOP realization and a minimal universal test set that
detects all possible single stuck-at faults in the entire circuit,
including the faults in the primary input and output leads. Another
contribution of our work is a special built-in pattern generator,
which gives 100% fault coverage for single stuck-at
faults and has a much shorter testing cycle than a pseudoexhaustive
or Pseudo-Random Pattern Generator (PRPG). In
or Pseudo-Random Pattern Generator (PRPG). In
addition, the hardware overhead for our special pattern generator
is comparable to that of Linear Feedback Shift Register
(LFSR) based pattern generators, such as PRPG.
The organization of this paper is as follows. In Section 2,
some background on previous researchers' work is given. Section
3 describes our testable realization and the test set for it.
In Section 4, we give a preliminary circuit, which can be used
to generate our test set for BIST applications. In Section 5,
we present our experimental results from area, delay, and test
set size measurements performed on some benchmark circuits,
along with the comparisons of our scheme with other schemes.
Section 6 summarizes our results and gives some possible directions
for future work.
Reddy showed that a PPRM network can be tested for single
stuck-at faults with a universal test set that is independent of the
function being realized [2]. Figure 1a shows an EXOR cascade
implementation of the PPRM expression
In normal mode of operation, the control input, c, is
set to the constant term in the functional expression (in this
example: '0'). In testing mode, c is set according to the test set
given in Figure 1b.
The four tests in test set $T_1$ detect all single stuck-at faults in
the EXOR cascade by applying all input combinations to every
gate, independent of the number of EXOR gates in the
cascade. The test vector $\langle 1111 \rangle$ in $T_1$ and the walking-zero test
vectors in test set $T_2$ detect a single stuck-at fault in the AND
part of the circuit. Since the number of vectors in $T_1$ is always
equal to 4 and the number of vectors in $T_2$ is always equal to
the number of input variables, n, the cardinality of Reddy's
universal test set is (n+4).
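The following sketch is my own reading of the description above, not code from the paper: it generates an (n+4)-vector test set of this shape, with four vectors exercising every EXOR-input combination and n walking-zero vectors for the AND part. The control value used in T2 is an assumption; the paper's Figure 1b fixes the actual bits.

def reddy_universal_test_set(n):
    """(n+4)-vector universal test set for an EXOR-cascade PPRM circuit
    with control input c and n data inputs.  Vectors are (c, x1..xn)."""
    # T1: all four combinations of c in {0,1} with all-zero / all-one data.
    t1 = [(c,) + (x,) * n for c in (0, 1) for x in (0, 1)]
    # T2: walking-zero vectors; c = 1 is an assumed choice here.
    t2 = [(1,) + tuple(0 if j == i else 1 for j in range(n)) for i in range(n)]
    return t1 + t2

tests = reddy_universal_test_set(4)
print(len(tests))          # n + 4 = 8 vectors
for v in tests:
    print(v)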
Reddy's technique is good for self testing because as shown
by Daehn and Mucha [8], the entire test sequence can be inexpensively
generated by a modified LFSR using a NOR gate
and a shift register. However, as shown in Table 1, the number
of terms in a PPRM is usually higher than the number of
FPRM or GRM terms, and much higher than the number of
ESOP terms [5]. For FPRM networks, Sarabi and Perkowski
showed that by just inverting the test bits for the variables that
have negative polarity in the FPRM, Reddy's test set can be used

Figure 1: (a) A PPRM network implemented according to Reddy's scheme given in [2]. (b) Reddy's test set for the PPRM implementation.
for single fault detection in a FPRM network [6]. They also
showed that a GRM network can be decomposed into multiple
FPRMs. This way, each FPRM can be tested separately to test
the combined GRM circuit for single fault detection. The size
of the test set, in the worst case, is the number of FPRMs times
(n+4). This method, however, does not yield a universal test
set.
Other researchers have investigated multiple fault detection
in AND-EXOR circuits. Sasao recently introduced a testable
realization and a test set to detect multiple faults in GRM networks
As shown in Figure 2a, Sasao uses an extra EXOR
block, called the Literal Part, to obtain positive polarities for
any negated variables and convert a GRM network into a PPRM
network. When the control input c is set to '1', the shaded
part of the circuit in Figure 2a realizes the GRM expression
. The Check Part of the circuit in Figure
2a is added to test the literal part. Sasao implements the
EXOR-Sum of the product terms with a tree structure instead
of a cascade to obtain a less circuit delay. Nevertheless, his
scheme does not lead to a universal test set. Furthermore, his
scheme cannot be used for ESOP circuits because the conversion
of an ESOP into PPRM by the literal part may produce an
AND-EXOR expression that has multiple product terms with
the same set of variables, which is not a PPRM form. For
example, given the circuit in Figure 2b, the ESOP expression
after the conversion
from GRM into PPRM; one for x 1 x 2 x 3 and the other
Pradhan also targeted detection of multiple stuck-at faults in
circuits [3]. As shown in Figure 3, he does the
negation of the literals by using an extra EXOR block called
the Control Block. He uses cascaded AND gates in a Check
Block to detect the faults in the control block. Figure 3 shows
c
f
Literal
Part
Check
Part
c
f
(a)
(b)
Figure
2: (a) Sasao's GRM realization scheme, (b) Realizing
an ESOP circuit with Sasao's scheme.
Pradhan's testable ESOP implementation for the function
implemented the EXOR-
Sum of the product terms with a cascade structure.
f
Control
Block
Check
Block
c
Figure
3: Pradhan's testable ESOP implementation.
Pradhan introduced a test set to detect all of the multiple
faults in his testable realization for ESOP expressions. How-
ever, his test set is not universal and is too large to be practical
for single fault detection. The cardinality of Pradhans test set
is:
e
where j is the order of the ESOP expression. The order is simply
the maximum number of literals contained in any of the
product terms. Notice that the complexity of the test is exponential
with respect to the number of literals in product terms.
Furthermore, if a product term has all possible literals, the test
set is even larger than exhaustive due to the additional test
inputs required. For example, if there are four variables in an
ESOP expression (n=4), and the order of the expression is 4
(j=4), the exponential term in the formula alone amounts to 16 test vectors, which is exhaustive for four variables. The size of the entire test set is then even larger.
In this section, we introduce an improved testing scheme to
detect single stuck-at faults not only in the internal lines of the
circuit, but also in the primary inputs and outputs of the most
general AND-EXOR circuits, ESOP.
3.1 Easily Testable ESOP Implementation
Figure 4 shows a new testable implementation for an ESOP
expression. The circuit has up to two extra observable outputs,
and two additional control inputs, c 1 and c 2 . The
Literal Part, named after the similar part in Sasao's testable re-
alization, is added to convert the ESOP circuit into a positive
polarity AND-EXOR expression during testing. We do not refer
to the expression after conversion as a PPRM because, as
mentioned earlier, it may have some repeated product terms
with the same set of variables. The positive polarity AND-
EXOR expression cannot be tested by Sasao's multiple fault
detection scheme [7], but can be tested by our single fault detection
scheme. The AND Part and the Linear Part implement
the desired ESOP expression as the c 1 control input is set to
'1'. The f output is implemented with an EXOR cascade so
that Reddy's universal tests for PPRM can be used for our real-
ization. For the same reason, the Check Part, which is required
to test the literal part, is implemented with an EXOR cascade.
The gates marked with 'A' and 'B' are added for the detection of faults in the primary inputs and the control input c1. Whether they are required depends on the function being implemented. We will later describe the cases where gates 'A' and 'B' are required
and how each section of our realization is tested when we explain
our test set in the next sections.
Figure 4: Easily testable ESOP circuit.
Hayes used mainly EXOR gates as additional circuitry to
make a logic circuit easily testable [9]. Likewise, in our real-
ization, we mainly use EXOR gates in the additional circuitry
to take advantage of the superior testability properties of the
EXOR gate. This allows us to obtain a minimal and universal
test set.
3.2 The Fault Model
A fault model represents failures that affect functional behavior
of logic circuits [10]. In a stuck-at fault model of a TTL
AND gate, for example, an output could become shorted to Vcc. This can be modeled as a stuck-at 1 (s-a-1) fault. In MOS tech-
nology, most of the probable faults are opens and shorts, which
can also be modeled with the appropriate s-a-0 or s-a-1 fault
[11].
We follow the same approach taken by the previous researchers
presented in Section 2, and assume a single stuck-at
fault model, which allows only one stuck-at fault in the entire
circuit. We also adopt their testing model for the detection of
stuck-at faults in individual logic gates. An n-input AND gate requires (n+1) test vectors to detect a single stuck-at fault in its inputs or output. The tests for a 3-input AND gate, for example, would be: {111, 011, 101, 110}. In this test set, ⟨111⟩ detects a s-a-0 fault on any of the inputs or the output, and the remaining tests, commonly referred to as walking-zero tests, detect a s-a-1 on any of the inputs and the output of the gate.
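This test structure is easy to generate programmatically. The following sketch is only an illustration of the gate-level testing model just described (it is not part of the referenced schemes); it builds the (n+1) vectors for an n-input AND gate.

```python
def and_gate_tests(n):
    """Test vectors for an n-input AND gate under the single stuck-at fault
    model: one all-ones vector (detects s-a-0 on any input or the output)
    plus n walking-zero vectors (each detects one s-a-1)."""
    all_ones = "1" * n
    walking_zero = ["1" * i + "0" + "1" * (n - 1 - i) for i in range(n)]
    return [all_ones] + walking_zero

# For a 3-input AND gate this reproduces {111, 011, 101, 110}.
print(and_gate_tests(3))  # ['111', '011', '101', '110']
```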
Some researchers (i.e. Pradhan [3], and Saluja, et al. [12])
analyzed in detail the fault characteristics of the EXOR gate
implementation shown in Figure 5. They considered the faults
in the internal lines of the EXOR gate as well as the faults in
the inputs and the output of the EXOR gate. Table 2 shows the
possible functions that a 2-input circuit can implement. The
circuit in Figure 5 realizes the function g 1 , EXOR. If only a
single stuck-at fault can occur in this implementation, the gate
produces one of the 10 functions in Class A (g2 to g11), and it can never produce the functions in Class B (g12 to g16). Table 3 lists, for each test applied to an EXOR gate, the Class A faults it exclusively detects.
Figure 5: The EXOR implementation assumed by Pradhan and Reddy (et al.).
Table 2: Functions that a 2-input logic gate can implement (inputs, Class A, Class B).
For our work we use the EXOR model in Figure 5, and the
exhaustive test set in Table 3 to detect a single stuck-at fault in
this model. By setting the c 1 and c 2 inputs in the proposed realization
to the appropriate logic values, Reddy's four tests provide
all input combinations for each EXOR gate in the EXOR
cascades (the linear part and the check part) of our realization.
Table 3: The faults exclusively detected by all of the input vectors applied to an EXOR.
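For illustration, the following sketch computes, for any assumed faulty 2-input function given as a truth table, which of the four exhaustive test vectors expose it, i.e. the inputs on which its output differs from the fault-free EXOR. The internal structure of Figure 5 (and hence the exact mapping from physical faults to Class A functions) is not modeled here; the faulty functions are simply supplied as truth tables, which is an assumption made for illustration.

```python
def detecting_vectors(faulty_truth_table):
    """faulty_truth_table maps (a, b) -> output of the faulty gate.
    A test vector detects the fault iff the faulty output differs
    from the fault-free EXOR output."""
    return [(a, b) for (a, b) in [(0, 0), (0, 1), (1, 0), (1, 1)]
            if faulty_truth_table[(a, b)] != (a ^ b)]

# Example: a fault that turns the gate into OR is exposed only by (1, 1),
# because OR and EXOR differ only for that input combination.
faulty_or = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
print(detecting_vectors(faulty_or))  # [(1, 1)]
```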
3.3 The Test
3.3.1 Fault detection in the internal lines of the realization
The linear part of the proposed ESOP circuit can be tested by a test set of four vectors. During the testing of this part, the AND
gate inputs are either all 0's or all 1's. This makes the AND
part transparent to the linear part because all-0's or all-1's are
transferred to the external inputs of the EXOR cascade in the
linear part. The network response to the test vectors is observed
from the function output f . The test set for the linear part, T a ,
is given below.
During the testing of the linear part, the check part of the circuit
receives the same test vectors given in T a , and is therefore
tested at the same time. However, for the check part, the extra observable output is observed instead of output f for the response to the test
vectors.
The AND part of the circuit is tested for single stuck-at
faults in the same way as in Reddy's scheme. The test vectors
are applied to the primary inputs and transferred to the AND
part by setting the c1 control input to '0'. The test set T2 is applied to detect a s-a-1 in any input and the output of the AND gates. The complete test set for the AND gates, Tb, is obtained as below by including the test vector ⟨0-11...1⟩ to detect a s-a-0 in any of the inputs and the outputs of the AND gates. A '-' denotes a don't care logic value.
The literal part is tested through the check part and the extra observation output. Again, in the case of a fault, any logic change in the output of the EXOR gates in the literal part is propagated to the observable output. The four tests given in Tc apply all input combinations to each EXOR gate in the literal part.
3.3.2 Fault detection in the circuit input/output leads
The primary inputs that are applied to the literal part are tested
through the path formed by the literal part, check part, and the
observable output. The required test set for this case, Td,
is given below. The first vector detects a s-a-1 fault, and the
second vector detects a s-a-0 fault.
The primary inputs that are not applied to the literal part
(when used only in positive polarity in the expression), but that
are applied an odd number of times to the AND part are tested
through the path formed by the AND part, the linear part, and
the function output f . The required test set, T e , is given below.
Any stuck-at fault in the primary inputs of this class causes an
odd number of changes in the external inputs of the linear part,
and is detected by observing the function output f .
The primary inputs that are not applied to the literal part,
and are applied an even number of times to the AND part cannot
be tested with the above test set because an even number of
value changes cannot be propagated to the output by the EXOR
cascade in the linear part. Therefore, the additional AND gate 'A' with its own observable output is required for the primary inputs of this class. The same scheme is described by Reddy in
[2]. However, this extra AND gate is less likely to be needed in our scheme because of the alternative path from the
primary inputs to the observable output when the primary
inputs are applied to the literal part. The required test vectors
for the primary inputs of this class and for the observable output
are covered by the required test vectors to test the extra
AND gate 'A', which are given in T f . Notice that the faults
in the primary inputs and the faults in the AND gate inputs are
equivalent; and similarly, the faults in the observable output
and the faults in the AND gate output are equivalent.
The faults in the control input c 1 are detected through the
path formed by the literal part, check part, and the observable
output. Detection through the path to the function output f
cannot be guaranteed because it is dependent on the function
being implemented. If the number of the literal part outputs
(the number of EXOR gates in the literal part) is an odd num-
ber, the extra EXOR gate 'B' is not required and c2 is bypassed
to the output of this extra gate. Note that all of the literal part
outputs will change at the same time in case of a fault in c 1 .
Therefore, if the number of changes at the output of the literal
part is an odd number, the EXOR cascade in the check part will
propagate the fault to the observable output. The required test set, Tg
, is given
below. The first vector detects a s-a-1, and the second vector
detects a s-a-0 in c 1 .
If the number of the literal part outputs is an even number,
then the extra EXOR gate 'B' is required to make the number
of changes fed into the check part an odd number. This configuration
also allows the use of the same test set, T g , above
for the detection of faults in c 1 . However, the extra EXOR gate
needs to be tested, as well. The test set for this EXOR gate, T h ,
is exhaustive by its testing model, and given below.
The faults in primary output f and the control input c 2 are
covered by the test set T a of the linear part due to fault equiv-
alence. Similarly, the faults in the observable output are
covered by the test set T a of the check part.
3.3.3 The complete test set
Theorem: An ESOP circuit with the realization in Figure 4 can
be tested for single stuck-at faults in its internal lines and in its
input/output leads requiring a test set of (n+6) cardinality.
Proof: A test set, T, that covers all of the test sets above, Ta through Th,
detects a single stuck-at fault in the entire circuit. The minimal
test set then is:
The cardinality of the minimal test set is obtained as follows.
- include Ta in T (4 tests),
- combine the last two vectors of Tc and Th and include in T (2 tests),
- include the first n vectors of Tb in T (n tests).
The remaining tests are covered by T as follows:
- the last vector of Tb, the first two vectors of Tc, the test set Td, the test set Te, the last vector of Tf, the first vector of Tg, and the first vector of Th are covered by the first two vectors of Ta, which are included in T;
- the first n vectors of Tf are covered by the first n vectors of Tb, which are included in T;
- the second vector of Tg is covered by the last vector of Tc, which is included in T;
- the second vector of Th is covered by the third vector of Ta, which is included in T.
The final test set T:
The combining process over two test vectors is done by replacing the don't
care values of the first test vector with the determined values of the second test
vector.
Although we did not prove that there is not a shorter universal
test set than (n+6) for a general ESOP, this result is very close to the lower bound on the length of a universal test set for such networks [2, 13, 14]. Note that
by modifying Reddy's test set based on the function being re-
alized, Kodandapani introduced a test set with (n+3) tests [15],
but his test set is not universal.
3.4 Example
Figure 6 shows our testable realization for an example ESOP expression. In this example, the extra observable output is required for the detection of the faults in primary input x1, since x1 is not applied to the literal
part and it is used an even number of times in the AND
part. The primary input x 5 is not applied to the literal part, ei-
ther, but it is used an odd number of times in the AND part.
Note that the extra AND gate 'A' is not required since there is
only one primary input to observe at this output. The extra EXOR gate
'B' is also not required because the number of EXOR gates in
the literal part is an odd number.
Figure 6: An example testable ESOP realization.
The test set for the example implementation is:
Figure 7: An example EDPG circuit implementation.
A traditional, signature analysis based Built-in Self Test
(BIST) circuitry for a combinational network consists mainly
of a pattern generator, and a signature register. For the complete
testing system, a BIST controller, some multiplexers, a
comparator, and a ROM are also embedded inside the chip.
Using a Linear Feedback Shift Register (LFSR)-based PseudoExhaustive
or Pseudo-Random Pattern Generator (PRPG) is a
well-known method for generating test patterns for very large
and complex combinational circuits. The PRPG approach is
used because it is difficult to otherwise generate the large and
irregular test sets required by such combinational circuits. As
shown by Drechsler (et al.) in [16], the PRPG approach does
not work better with AND-EXOR circuits than it does with
equivalent SOP circuits. However, they show that AND-EXOR
networks do have good deterministic testability performance.
Considering this fact and the properties of our design, we can
list the reasons why we propose a deterministic test generation
for built-in self test of ESOP circuits as follows:
1. Our ESOP realization is designed for testing and therefore
requires a minimal test set. Traditional PRPG test
length is much longer than our test set for the same fault
coverage. Also, there is no need for partitioning the circuit
to prevent the long cycles of a pseudo-exhaustive test
generation.
2. Our test set is universal, which allows it to be generated
by fixed hardware that can be used for any function.
3. Our test set has regular patterns and is therefore easy to
generate. As a result, the area overhead of our pattern
generator in our scheme is comparable to that of PRPG
based schemes.
4. Our test set gives 100% fault coverage for single fault
detection and does not require fault simulation.
Figure
8 shows the BIST circuitry for ESOP circuits. In
this structure, the only difference from classical BIST circuitry
is the ESOP Deterministic Pattern Generator (EDPG) that is
introduced for our easily testable ESOP implementation. The
results from the applied test vectors are collected from the function
output f and from the extra observable outputs
then compressed in the signature register, which can simply be
an LFSR based Multiple Input Signature Register (MISR) [1].
After all the tests are applied, the signature register content is
compared with the correct signature of the implemented ESOP
to generate a go/no go signal at the end of the test cycle.
Easily Testable 2-level ESOP Network
Correct Signature Compare go / no go
MISR
Figure
8: The BIST circuitry for highly testable ESOP circuits.
A real life circuit is more likely to have multiple outputs
rather than a single output as shown in the earlier examples. In
this case, the AND gates in the AND part (the product terms)
are distributed over multiple linear parts (EXOR cascades) for
multiple outputs, and therefore the faults must be observed
from all circuit outputs. Also, for a multi-output circuit, all
the function outputs should be applied to the MISR along with
the extra observable outputs.
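As a small illustration of the compaction step, the following sketch models an LFSR-based MISR in software. The register width and feedback taps are arbitrary assumptions chosen for illustration and are not taken from the paper; a real design would use a primitive polynomial matched to the number of observed outputs.

```python
def misr_compact(responses, width=8, taps=(7, 3, 2, 1)):
    """Compress a sequence of parallel output responses into a signature.
    Each response is a tuple of 0/1 bits, one bit per observed output
    (function output f plus the extra observable outputs).
    taps are the assumed LFSR feedback positions."""
    state = [0] * width
    for response in responses:
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        # shift the register and inject the parallel inputs bit-wise
        state = [feedback] + state[:-1]
        for i, bit in enumerate(response):
            state[i % width] ^= bit
    return "".join(str(b) for b in state)

# Example: compact three fictitious 3-bit responses.
print(misr_compact([(1, 0, 1), (0, 1, 1), (1, 1, 0)]))
```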
Daehn and Mucha designed a simple BIST circuit to test
PLAs [8]. They used LFSRs and NOR gates to generate regular
test patterns such as a walking-one test sequence. Similarly, our
EDPG can be built to generate the walking-zero sequence along
with the extra c 1 and c 2 bits, as shown in Figure 7.
Part I of EDPG generates the walking-zero portion of the test
vectors. This portion of the BIST circuitry can be expanded linearly
based on the number of inputs in the ESOP circuit. Part
II of the EDPG is a Finite State Machine (FSM), and generates
c 1 and c 2 bits of the test vectors. It also provides CLR and
SET signals for the D-Flip-flops in Part I to generate all-0 or
all-1 bits in the first six test vectors of the test set, T. Part II
is independent of the function being realized, and therefore is of fixed size. Figure 9 gives the state diagram and the circuit implementation
for the FSM in Part II. The FSM generates the six
vectors of the test set, then stops and enables Part I to generate
the walking-zero tests. Figure 10 gives the simulation results
for the EDPG implementation.
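The pattern sequence of the EDPG can be modeled in software as sketched below. The walking-zero part and the all-0/all-1 structure of the six leading vectors follow the description above, while the concrete c1/c2 assignments of those six vectors are assumptions made only for illustration.

```python
def edpg_patterns(n):
    """Yield (c1, c2, primary_inputs) patterns in the spirit of the EDPG:
    six leading vectors with all-0 or all-1 primary inputs (Part II),
    followed by n walking-zero vectors with c1 = 0 (Part I).
    The control-bit values of the first six vectors are assumptions."""
    # Part II: six fixed vectors; primary inputs are all 0s or all 1s.
    for c1, c2, fill in [(1, 0, 0), (1, 0, 1), (1, 1, 0),
                         (1, 1, 1), (0, 0, 1), (0, 1, 1)]:
        yield c1, c2, [fill] * n
    # Part I: walking-zero vectors applied to the AND part (c1 = 0).
    for i in range(n):
        vec = [1] * n
        vec[i] = 0
        yield 0, 0, vec

# In total 6 + n patterns are produced, matching the (n+6) test set size.
for pattern in edpg_patterns(4):
    print(pattern)
```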
Figure 9: State diagram and the circuit implementation for Part II of EDPG.
Figure 10: Simulation of the EDPG circuit.
We performed area, delay, and test set size measurements
on some benchmark circuits using our realization scheme and
2-level/multi-level synthesis schemes. We selected the circuits
from LGSynth'93 and Espresso benchmark sets to provide a
wide variety of function types. For example, we selected circuits
with different numbers of primary inputs; implementations
in 2-level or multi-level; and of different classes such as
math, logic, etc.
All the circuits were optimized for area and mapped to a
technology library before performing the measurements. SIS
[17] was used to optimize and synthesize the circuits in multi-
level; and Exorcism [18] was used to optimize ESOP expres-
sions. For a multi-level benchmark circuit, we used SIS to obtain
the equivalent two-level SOP expression; and used Disjoint
[19] to convert it to a two-level AND-EXOR expression before
optimizing with Exorcism.
A 0.5 micron, array-based library developed by LSI Logic
Corp. [20] was used for synthesis. We limited the number of
components in the library according to Table 4. All area measurements
are expressed in cell units, excluding the interconnection
wires.
Table 4: The technology library used in measurements (component, function, area in cells, block delay and fanout delay in ns).
Table
5 gives the number of test vectors and the fault coverage
obtained from different schemes for single faults. This table
compares our test scheme with LFSR based pseudo-random
test generation and with algorithmic test generation. Pseudo-random
and algorithmic test vectors were generated for the
multi-level implementation of the circuits, where our test set
was generated (predetermined) for the easily testable ESOP im-
plementation. SIS was used for algorithmic test generation. Up
to 10,000 LFSR patterns were generated for each circuit using
the program used in [21]. The goal was to cover a wide range
of circuits to determine circuits that are random vector resistant
and require more than 10,000 patterns. Almost half of the
selected benchmark circuits required more than 10K pseudo-random
patterns for 100% fault coverage, whereas our scheme
required no more than 150 patterns for all the circuits. For ex-
ample, the fault coverage for the circuit x9dn is 73.1% for 10K
random patterns. In comparison, our scheme yields only 33
patterns for 100% fault coverage. In all circuits, our test set
is smaller than either pseudo-random or algorithmic test sets.
Also, note that algorithmic test sets are generally not universal
and therefore cannot be utilized in a simple self-test circuit.
The next measurement was performed to see the testability
Circuit | Primary Inputs | LFSR #Tests | Undetected | Total Faults | Fault Coverage | SIS #Tests | Fault Coverage | Our #Tests | Fault Coverage
9symml | 9 | 512 | 0 | 513 | 100 | 137 | 100 | 15 | 100
apex5 | 117 | 10K | 979 | 4129 | 74.6 | 1245 | 100 | 121 | 100
apex6 | 135 | 10K | 27 | 1680 | 98.3 | 400 | 100 | 141 | 100
ex4 | 128 | 10K | 92 | 1042 | 91.1 | 474 | 100 | 134 | 100
mux
Table 5: Comparisons of the number of test vectors for different circuits (the LFSR and SIS columns refer to the multi-level implementation, the last two columns to our ESOP implementation).
improvement of the proposed ESOP implementation that requires
some additional gates and input/output pins (labeled in
Table
6 as: "ESOP with DFT (Design for Test)"), over the ordinary
2-level ESOP implementation that does not include any
additional hardware (labeled in Table 6 as: "Ordinary ESOP").
The test vectors for the ordinary ESOP implementation were
algorithmically generated with the program used in [22], and
compared with the universal test set of our implementation
scheme. In Table 6, our test set is significantly smaller than
the algorithmically generated test sets for the majority of the
benchmark circuits. Only one of the benchmark circuits, alu1,
yielded fewer number of algorithmically generated test patterns
than our universal test set. However, as mentioned earlier, our
test set is universal, which eliminates the need for test generation
programs; and it has regular patterns, which can be generated
easily. As a result, it is very suitable for BIST.
We did not generate pseudo-random vectors for our implementation
scheme as an alternative to our EDPG for three rea-
sons. First, the fact that AND-EXOR circuits are not more
testable with pseudo-random patterns than AND-OR circuits
was shown by Drechsler (et al.) in [16]; second, our ESOP
implementation is constructed considering certain regular and
minimal patterns, and therefore it requires those patterns for
guaranteed 100% fault coverage; and third, as shown next, the
area overhead for our EDPG is very close to that of LFSR based
PRPGs.
In Table 8, the area of different pattern generators is given
in cell units based on the library components given in Table 4.
The total BIST area is not calculated for comparisons because,
as mentioned earlier, the only difference between a classical
BIST circuitry and the ESOP BIST circuitry is the pattern generators
used in them. We selected circuits with a wide range of
number of primary inputs since the area of a pattern generator
Circuit | Primary Inputs | Algorithmic / Ordinary ESOP | Our Scheme / ESOP with DFT
9symml | 9 | 181 | 15
a04 | 9 | 173 | 15
sse | 11
Table 6: Number of deterministic test vectors for two ESOP implementations.
is directly related to the number of primary inputs. An LFSR
based pseudo-exhaustive or pseudo-random pattern generator
mainly consists of D-Flip-flops and EXOR gates [23, 24]. The
number of 2-input EXOR gates changes typically from one to
the number of primary inputs (n), based on the characteristic
polynomial used to generate the patterns. Therefore, the area
of an LFSR-based PRPG is given as a range in the table. The
area of a BILBO register is also included in the table since it is
used for pseudo-random pattern generation [25].
As shown in Table 8, the area of EDPG is comparable to
those of other pattern generators for all benchmark circuits. It
is always better than BILBO's if n ≥ 8, and is in the range of PRPG's if n ≥ 48. For example, for the circuit rd73, the
BILBO register is smaller than EDPG, but 128 pseudo-random
Circuit | Primary Inputs | Primary Outputs | Multi-level (all components) Area | Delay | Multi-level (AND2, OR2, INV) Area | Delay | SOP #Terms | Area | Delay | ESOP #Terms | Area (Function) | Area (DFT) | Delay
apex6 | 135 | 99 | 694 | 2.39 | 1537 | 3.57 | 657 | 3548 | 2.22 | 408 | 4127 | 221 | 13.08
ex4 | 128 | 28 | 439 | 2.11 | 1029 | 3.51 | 559 | 3925 | 2.33 | 317 | 3829 | 299 | 14.47
Table 7: Area and delay comparisons for different implementation schemes.
Circuit | Primary Inputs | PRPG (LFSR) Area, 1 EXOR | PRPG (LFSR) Area, n EXORs | BILBO Area | EDPG Area
apex1
apex5 | 117 | 1056 | 1404 | 1935 | 1335
Table 8: Area measurements in cell units for different pattern generators.
patterns are required for 100% fault coverage, where only 13
EDPG patterns are required for the same fault coverage. Another
advantage of EDPG is that it does not need an initialization
seed, unlike most of the LFSR based pattern generators
that require one or more seeds. We did not include the area
overhead of the additional hardware that provides the initialization
for the PRPGs in Table 8.
Table
7 presents the results of another set of measurements
to show the area and delay performance of our ESOP implementation
in Figure 4 as compared to the multi-level and 2-
level SOP implementations. The area information is separated
into two parts for the ESOP implementation. Those are the area required for the implementation of the function (the literal part, the AND part, and the linear part), and the area required for the gates added for better testability (the gates 'A' and 'B', and the check part), denoted in the table
as "Design for Testing" (DFT).
The measurements on multi-level implementations are performed
for two different cases: one by using the entire set of
library components presented in Table 4, and the other by using
only AND2, OR2, and INV gates. These two different measurements
were performed to see the variations in area and delay
as the components in the targeted library were changed for
synthesis. This provides a more objective comparison.
A column is provided for 2-level SOP implementations to
evaluate our design in the PLA environment. The delay information
for a 2-level SOP circuit is calculated by assuming
a tree-of-OR-gates structure (using 3-input and 2-input OR
gates) to combine the product terms. Similarly, the AND gates
with more than three inputs in both SOP and our ESOP implementations
were implemented as a tree of smaller AND gates
(3-input and 2-input). The tree structure assumption does not
affect the testing of the AND gates in our ESOP scheme. Another
comparison of 2-level SOPs and ESOPs is given by Saul
(et al.) in [26] for PLA and XPLA implementations.
Although it is not fair to compare a cascade implementation
to a tree-like implementation, Table 7 shows that our 2-level
ESOP implementation is comparable to multi-level implementations
in most of the cases. For example, the ESOP implementations
of apex5 and f51ml have better delay than multi-level
implementations. Also, adr4, alu1, mux, x2, x4, and x2dn have
fairly low delays when implemented with our ESOP scheme.
In a few cases, our ESOP scheme yielded significantly larger
delays than multi-level and 2-level SOP implementations, such
as for the circuits alu4, x1, and x9dn.
Similarly, the ESOP circuits implemented for alu2, alu4, and
f51ml have areas between the area of their multi-level version
implemented using all library gates and the area of their multi-level
version implemented using only AND2, OR2, and INV
gates. Also the areas of 9symml, alu1, and rd73 are very close
to those of their multi-level versions.
As 2-level implementation comparisons, for 50% of the
benchmark circuits, our ESOP implementation scheme yielded
smaller area than 2-level SOP implementation. Especially for
circuits rd73, alu2, f51ml, and alu4 the areas of SOP implementations
are 3.65, 3, 2.01, and 1.8 times larger, respectively, than
those of ESOP implementations.
The area overhead for DFT in our ESOP implementation is typically small (as low as 0.1% in the best case); the largest area overhead, 28%, was obtained for alu1 since the
functionality of the circuit is relatively small in comparison to
those of the other benchmark circuits.
6 CONCLUSIONS AND FUTURE RESEARCH
In this paper, we have shown a highly testable ESOP realization
and a minimal universal test set for the detection of single
stuck-at faults in both internal lines and primary inputs/outputs
of the circuit with 100% fault coverage. An EXOR cascade
is used in the check part instead of AND gates or OR gates because
the EXOR cascade yields much fewer test vectors and to-
day's advanced technology makes it possible to have an EXOR
gate almost as fast as an AND gate.
The experimental results show that our test set is always smaller, by orders of magnitude, than a pseudo-random test set, and several times smaller than an algorithmically generated test set
for 100% single stuck-at fault coverage. A deterministic test
pattern generator is presented to be used as a part of the built-in
self-test circuitry. The experimental results show that the over-all
overhead of our BIST circuit is comparable to that of the
traditional PRPG based BIST circuit. More importantly, our
pattern generator is superior to a PRPG because of its 100%
single fault coverage and much shorter testing cycle. Also, it
does not require an initialization seed and the circuitry for generating
it. The results also show that our 2-level ESOP implementation
is comparable to (or in some cases better than) the
multi-level and 2-level SOP implementations in the area and
delay measurements. Furthermore, our implementation gives a
very small DFT area overhead.
In addition to detecting all single stuck-at faults, our architecture
and test set detect a significant fraction of multiple
stuck-at faults. More tests can be added to improve the multiple
fault coverage even though it is very unlikely that a minimal
and universal test set can be found. For instance, more zero-
weighted test vectors can be added to improve the multiple fault
coverage for the AND part of the circuit as explained by Saluja
and Reddy in [12]. Another method to improve the fault coverage
is to detect multiple faults with the help of multiple outputs
in a multi-output ESOP circuit. Our BIST methodology with
an MISR is an ideal method for this purpose. Our test set can
also be improved for bridging faults using a method similar to
that presented by Bhattacharya (et al.) in [27]. We are currently
investigating our test set and implementation scheme for
detecting bridging faults and multiple faults. The results will
be presented in our next paper.
Acknowledgments
The authors would like to thank Prof. W. Robert Daasch at
Portland State University for helping with area calculations;
Craig M. Files at Portland State University for a review of the paper;
Prof. Nur A. Touba at Univ. of Texas at Austin for providing
the program used in [21]; and Prof. L. Jozwiak and A.
Slusarczyk at Eindhoven University for providing the program
used in [22] for our measurements.
--R
Digital Systems Testing and Testable Design.
"Easily Testable Realizations for Logic Functions,"
"Universal Test Sets for Multiple Fault Detection in AND-EXOR Arrays,"
Logic Testing and Design for Testability.
Logic Synthesis and Optimization.
"Design for Testability Properties of AND-EXOR Networks,"
"Easily Testable Realizations for Generalized Reed-Muller Expressions,"
"A Hardware Approach to Self-Testing of Large Programmable Logic Arrays,"
"On modifying logic networks to improve their diagnosability,"
Testing and Diagnosis of VLSI and ULSI
New York: John Wiley
"Fault Detecting Test Sets for Reed-Muller Canonic Networks,"
"Fault Detection in Combinational Networks by Reed-Muller Transforms,"
"On Closedness and Test Complexity of Logic Circuits,"
"A Note on Easily Testable Realizations for Logic Functions,"
"Testability of 2-level AND/EXOR Circuits,"
"SIS: A System for Sequential Circuit Synthesis,"
"Minimization of Exclusive Sum of Products Expressions for Multi-Output Multiple-Valued Input, Incompletely Specified Functions,"
"An Algorithm for the Generation of Disjoint Cubes for Completely and Incompletely Specified Boolean Functions,"
LSI Logic Corporation
"BETSY: Synthesizing Circuits for a Specified BIST Environment,"
"Term Trees in Application to an Effective and Efficient ATPG for AND-EXOR and AND-OR Circuits,"
"Circuits for Pseudoexhaustive Test Pattern Generation,"
"Built-In Logic Block Observation Technique,"
"Two-level Logic Circuits Using EXOR Sums of Products,"
"Testable Design of RMC Networks with Universal Tests for Detecting Stuck-At and Bridging Faults,"
--TR
--CTR
Hafizur Rahaman , Debesh K. Das , Bhargab B. Bhattacharya, Testable design of GRM network with EXOR-tree for detecting stuck-at and bridging faults, Proceedings of the 2004 conference on Asia South Pacific design automation: electronic design and solution fair, p.224-229, January 27-30, 2004, Yokohama, Japan
Hafizur Rahaman , Debesh K. Das, Bridging fault detection in Double Fixed-Polarity Reed-Muller (DFPRM) PLA, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Katarzyna Radecka , Zeljko Zilic, Design Verification by Test Vectors and Arithmetic Transform Universal Test Set, IEEE Transactions on Computers, v.53 n.5, p.628-640, May 2004 | Built-in Self-Test BIST;easily testable combinational networks;test pattern generation;Design for Testing DFT;AND-EXOR realizations;universal test set;reed-muller expressions;self-testable circuits;single stuck-at fault model |
334432 | Fast Approximation Methods for Sales Force Deployment. | Sales force deployment involves the simultaneous resolution of four interrelated subproblems: sales force sizing, salesman location, sales territory alignment, and sales resource allocation. The first subproblem deals with selecting the appropriate number of salesman. The salesman location aspect of the problem involves determining the location of each salesman in one sales coverage unit. Sales territory alignment may be viewed as the problem of grouping sales coverage units into larger geographic clusters called sales territories. Sales resource allocation refers to the problem of allocating scarce salesman time to the aligned sales coverage units. All four subproblems have to be resolved in order to maximize profit of the selling organization. In this paper a novel nonlinear mixed-integer programming model is formulated which covers all four subproblems simultaneously. For the solution of the model we present approximation methods capable of solving large-scale, real-world instances. The methods, which provide lower bounds for the optimal objective function value, are benchmarked against upper bounds. On average the solution gap, i.e., the difference between upper and lower bounds, is about 3%. Furthermore, we show how the methods can be used to analyze various problem settings of practical relevance. Finally, an application in the beverage industry is presented. | Introduction
In many selling organizations, sales force deployment is a key means by which sales management
can improve profit. In general, sales force deployment is complicated and has attracted
much analytical study. It involves the concurrent resolution of four interrelated subproblems:
sizing the sales force, salesman 1 location, sales territory alignment, and sales resource al-
location. Sizing the sales force advocates selecting the appropriate number of salesmen. The
salesman location aspect of the problem involves determining the location of each salesman
in one of the available sales coverage units (SCUs). Sales territory alignment may be viewed
as the problem of grouping SCUs into larger geographic clusters called sales territories. Sales
resource allocation refers to the problem of allocating salesman time to the assigned SCUs.
Research has yielded some models and methods that can be helpful to sales managers.
The choice of the SCUs depends upon the specific application. SCUs are usually defined
in terms of a meaningful sales force planning unit for which the required data can be ob-
tained. Counties, zip codes, and company trading areas are some examples of SCUs (cp. e.g.
Zoltners and Sinha 1983 and Churchill et al. 1993). Note, it is more meaningful to
work with aggregated sales response functions on the level of SCUs rather than with individual
accounts because then substantially less response functions have to be estimated and
the model size does not explode (cp. e.g. Skiera and Albers 1996).
In literature, a large variety of different approaches are labeled with general terms like
'territory design', `resource allocation' or 'distribution of effort'. Frequently, from a modelling
point of view the multiple-choice knapsack problem is the matter of concern. This knapsack
model covers several important practical settings and - what has been a driving source for
its repeated use - can be solved very efficiently (cp. e.g. Sinha and Zoltners 1979). As
already mentioned 'resource allocation' addresses the question: How much of the available
time should each salesman allocate to the SCUs which are assigned to him?
Early work in this area has been published by e.g. Layton (1968), Hess and Samuels
(1971), Parasuraman and Day (1977), and Ryans and Weinberg (1979), respectively.
Waid et al. (1956) present a case study where the allocation of sales effort in the lamp
division of General Electric is investigated. Fleischmann and Paraschis (1988) study the
1 Note, we avoid the term 'salesperson' in order to make the `his/her' distinction superfluous.
case of a German manufacturer of consumer goods. For the solution of the case problem they
employ a classical location-allocation approach.
Sales resource allocation models consist of several basic components, i.e. sales resources,
sales entities, and sales response functions, respectively. As discussed in, e.g., Zoltners
and Sinha (1980) and Albers (1989), specific definitions for these components render
numerous specific sales resource allocation models. Beswick and Cravens (1977) discuss
a multistage decision model which treats the sales force decision area (allocating sales effort
to customers, designing sales territories, managing sales force, etc.) as an aggregate decision
process consisting of a series of interrelated stages.
The sales force sizing subproblem has been addressed by e.g. Beswick and Cravens
(1977) and Lodish (1980). The sales resource allocation subproblem has been analyzed,
among others, by Lodish (1971), Montgomery et al. (1971), Beswick (1977) and Zolt-
ners et al. (1979). Tapiero and Farley (1975) study temporal effects of alternative procedures
for controlling sales force effort. LaForge and Cravens (1985) discuss empirical
and judgement-based models for resource allocation. Allocation of selling effort via contingency
analysis is investigated by LaForge et al. (1986). The impact of resource allocation
rules on marketing investment-decisions is studied by Mantrala et al. (1992).
Among the four interrelated subproblems, so far the alignment subproblem has attracted
the most attention. For it, several approaches appeared in the literature. These approaches
can be divided between those which depend upon heuristics and those which utilize a mathematical
programming model. Heuristics have been proposed, among others, by Easingwood
(1973), and Heschel (1977). Two types of mathematical programming approaches have
been developed. Shanker et al. (1975) formulated a set-partitioning model. Alternatively,
the models of Lodish (1975), Hess and Samuels (1971), Segal and Weinberger (1977),
Zoltners (1976), and Zoltners and Sinha (1983) are SCU-assignment models. For an
overview see Howick and Pidd (1990).
Some of the papers published so far on the alignment subproblem aimed at aligning sales
territories such that they are almost balanced with respect to one or several attributes. The most
popular balancing attributes are sales potential or workload of the salesmen. A detailed
discussion of the shortcomings of the balancing approaches can be found in Skiera and
Albers (1996) and Skiera (1996).
Glaze and Weinberg (1979) address the three subproblems of locating the salesmen, aligning accounts and allocating calling time. More specifically, they present the procedure TAPS, which seeks to maximize sales for a given sales force size while attempting to achieve equal workload among salesmen and to minimize total travel time.
Recently, Skiera and Albers (1994), (1996) and Skiera (1996) formulated a conceptual
model which addresses both the sales territory alignment and the sales resource
allocation problems simultaneously. Conceptual means that the sales territory connectivity
requirement is formulated verbally, but not in terms of a mathematical programming for-
mulation. For the solution of their model they propose a simulated annealing heuristic. The
objective of their model is to align SCUs and to allocate resources in such a way that sales
are maximized.
The remainder of the paper is structured as follows: In Section 2 the problem setting under consideration is described as a nonlinear mixed-integer programming model. A fast method for solving large-scale problem instances approximately is presented in Section 3. The results of an in-depth experimental study are covered by Section 4. Section 5 discusses insights for marketing management. A summary and conclusions are given in Section 6.
Programming Model
The larger the size of the sales force the more customers can be visited which in turn has a
positive impact on sales. On the other hand increasing the sales force size tends to increase
the operational costs per period. In addition, the number of possible calls to customers, the
operational costs and the salesmen's resource (time) which might be allocated to customers
is affected by the location of the salesmen, too. To make things even more complicated, the
alignment decision is very important for all these issues as well. Clearly, we have to take
care of all the mutual interactions of the different factors affecting the quality of the overall
sales force deployment. The aim of what follows is to provide a formal model which relates
all the issues to each other.
Let us assume that the overall sales territory has already been partitioned into a set J of SCUs. The SCUs have to be grouped into pairwise disjoint sales territories (clusters) in such
a way that each SCU j 2 J is assigned to exactly one cluster and that the SCUs of each
cluster are connected. In each cluster a salesman has to be located in one of the assigned
SCUs, called sales territory center. Note, connected means that we can 'walk' from a location
to each assigned SCU without crossing another sales territory. I ⊆ J denotes the subset of
SCUs which are potential sales territory centers. To simplify notation i 2 I denotes both the
sales territory center i and the salesman located in SCU i.
In practice, selling time consists of both the calling time and the travel time. For notational
purposes let denote z i;j the calling time per period which is spent by salesman i to visit
customers in SCU j. Further, assume b_j ∈ [0,1] to denote the calling time elasticity of SCU j and g_j a scaling parameter. Then

    S_{i,j} = g_j * z_{i,j}^{b_j}                                   (1)

defines expected sales S_{i,j} as a function of the time to visit customers. More precisely, equation (1) relates z_{i,j} and S_{i,j} for all sales territories i ∈ I and SCUs j ∈ J.
Hence, via b j it is possible to take care of the fact that firm's competitive edge might be
different in different SCUs. Note, expected sales are defined via concave rather than s-shaped
functions, as is assumed to be the case with individual accounts (cp. Mantrala et al. 1992).
Let denote t i;j the selling time of salesman i 2 I in SCU j 2 J . Note, t i;j includes the time
to travel from SCU i to SCU j, the time to travel to customers in SCU j and the customer
calling time, respectively. Then, p_{i,j} relates the calling time z_{i,j} to the selling time t_{i,j} via z_{i,j} = p_{i,j} * t_{i,j}. Substituting z_{i,j} in equation (1) by p_{i,j} * t_{i,j} yields

    S_{i,j} = c_{i,j} * t_{i,j}^{b_j}                               (2)

Note, equation (2) has first been proposed by Skiera and Albers (1994). In equation (2) the parameter

    c_{i,j} = g_j * p_{i,j}^{b_j}                                   (3)

is introduced. c_{i,j} measures the sales contribution when SCU j is part of sales territory i
where c i;j is a function of p i;j . This is best illustrated as follows: Suppose that for salesman i
the travel times to customers in SCUs j and k are different. Then in general p i;j and p i;k will
be different also. Clearly, this produces different parameters c i;j and c i;k - and puts emphasis
on the location decision.
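To make the interplay of the parameters concrete, the following sketch evaluates equations (2) and (3) for made-up numbers; it is an illustration only.

```python
def sales_contribution(g_j, b_j, p_ij):
    """c_{i,j} = g_j * p_{i,j}**b_j, cf. equation (3)."""
    return g_j * p_ij ** b_j

def expected_sales(c_ij, t_ij, b_j):
    """S_{i,j} = c_{i,j} * t_{i,j}**b_j, cf. equation (2)."""
    return c_ij * t_ij ** b_j

# Illustrative numbers: the same SCU j served from two candidate locations
# with different travel-time shares p_{i,j} (larger p means less travel).
g_j, b_j = 40.0, 0.4
for p_ij in (0.9, 0.6):
    c_ij = sales_contribution(g_j, b_j, p_ij)
    print(p_ij, round(c_ij, 2), round(expected_sales(c_ij, 10.0, b_j), 2))
```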
Now we are ready to state the model formally. We summarize the model parameters
J       set of SCUs, indexed by j
I       set of SCUs (I ⊆ J) for locating salesmen, indexed by i
N_j     set of SCUs which are adjacent to SCU j
f_i     fixed cost for locating a salesman in SCU i ∈ I
c_{i,j} expected sales if SCU j ∈ J is covered by the salesman located in i ∈ I
T_i     selling time available per period for salesman i ∈ I
introduce the decision variables
x_{i,j} = 1, if SCU j ∈ J is assigned to the salesman located in SCU i ∈ I (= 0, otherwise)
t_{i,j} selling time allocated by the salesman located in SCU i ∈ I to SCU j ∈ J
and then formulate an integrated model for sales force sizing, salesman location, sales territory
alignment, and sales resource allocation as follows:
maximize

    Σ_{i∈I} Σ_{j∈J} c_{i,j} * t_{i,j}^{b_j} - Σ_{i∈I} f_i * x_{i,i}              (4)

subject to

    t_{i,j} ≤ T_i * x_{i,j}                          i ∈ I, j ∈ J                (5)
    Σ_{j∈J} t_{i,j} ≤ T_i * x_{i,i}                  i ∈ I                       (6)
    Σ_{i∈I} x_{i,j} = 1                              j ∈ J                       (7)
    connectivity constraints for each sales territory    i ∈ I                   (8)
    x_{i,j} ∈ {0, 1}                                 i ∈ I, j ∈ J                (9)
    t_{i,j} ≥ 0                                      i ∈ I, j ∈ J                (10)
Objective (4) maximizes sales while taking fixed cost of the salesman locations into account
- and hence maximizes profit contribution or profit for short. The salesman i is allowed
to allocate selling time to SCU j only when SCU j is assigned to him (cp. equation (5)).
Equation (6) guarantees that the maximum workload per period (consisting of travel and call time) of each salesman is respected. Equation (7) assigns each SCU to exactly one of the
salesmen. Equation (8) guarantees that all the SCUs assigned to one sales territory are
connected with each other. Note that these equations work similarly to constraints destroying
short cycles in traveling salesman model formulations (an example can be found in Haase
1996). Clearly, it would be sufficient to take care of connected subsets
of SCUs only. Equations (9) and (10) define the decision variables appropriately.
So far we did not mention the following assumption which is covered by our model: x_{i,i} = 1 means that SCU i is assigned to the salesman located in sales territory i. In other words, x_{i,i} = 1 does not only tell us where to locate salesman i, it also defines how to align SCU i. This assumption is justified with respect to practice. Moreover, we assume by definition of the binary alignment variables x_{i,j} ∈ {0,1} that accounts are exclusively assigned to individual salesmen. Note, this is a common assumption in marketing science and marketing management because of several appealing reasons.
The model (4) to (10) has linear constraints, but a nonlinear objective. Furthermore, we
have continuous and binary decision variables. Therefore, there is no chance to solve this
model with standard solvers. In Haase and Drexl (1996) it is shown how the objective
function can be linearized in order to make the model accessible to mixed-integer programming
solvers (cp. Bradley et al. 1977 also where it is shown how to approximate nonlinear
functions by piece-wice linear ones). This makes it possible to compute upper bounds for
medium-sized problem instances which in turn facilitates to evaluate the performance of the
heuristics.
Clearly, all the parameters of the sales response function (1) have to be estimated. This
can be done as follows if a sales territory alignment has already existed for several periods, i.e.
if our concern is to rearrange an already existing sales territory alignment. Then information
for each SCU about the sales, the time to travel to customers as well as the time to visit
the customers is already available. Usually, these informations can be extracted from sales
reports. In this situation b_j and g_j can be estimated as follows. Transform equation (1) to equation

    ln S_{i,j} = ln g_j + b_j * ln z_{i,j}                          (11)

and then calculate estimates of b_j and g_j via linear regression. Finally, for the computation
of c i;j we need estimates of p i;j . In this regard the time to travel from SCU i to SCU j and
the time to visit the customers within SCU j are required. If salesman i has already covered
SCU j in the past we just have to look at his sales reports. Otherwise, we assume that the
time to travel within an SCU is independent of the salesman. Then the only information required for a salesman k ≠ i is the time to travel from k to i. This is easily available e.g.
from commercial databases or simply by assuming that the travel time is proportional to the
travel distance. In the case where the sales territory has to be designed from scratch, more
efforts are necessary. Unfortunately, going into details is beyond the scope of this paper.
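A minimal sketch of this estimation step, assuming that historical pairs of calling time and sales are available for one SCU; it fits equation (11) by ordinary least squares using numpy (a tool choice made here for illustration, not prescribed by the paper), and the data are made up.

```python
import numpy as np

def estimate_response_parameters(calling_times, sales):
    """Fit ln S = ln g_j + b_j * ln z (equation (11)) by least squares
    and return the estimates (g_j, b_j)."""
    z = np.log(np.asarray(calling_times, dtype=float))
    s = np.log(np.asarray(sales, dtype=float))
    A = np.column_stack([np.ones_like(z), z])
    (ln_g, b_j), *_ = np.linalg.lstsq(A, s, rcond=None)
    return float(np.exp(ln_g)), float(b_j)

# Fabricated historical observations for one SCU.
g_hat, b_hat = estimate_response_parameters([2, 4, 6, 8], [30, 41, 49, 55])
print(round(g_hat, 2), round(b_hat, 2))
```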
The four interrelated subproblems are addressed in our model by the decision variables x_{i,j} and t_{i,j}. Let x*_{i,j} and t*_{i,j} denote an optimal solution for a given problem instance:
- Apparently, the optimal size of the sales force |I*|, which corresponds to the optimal number of sales territories (clusters), is given by the cardinality of the set I* = {i | x*_{i,i} = 1}.
- For each of the sales territories in the set I* the SCU i with x*_{i,i} = 1 is the optimal location of the salesman, i.e. the optimal sales territory center.
- For each sales territory i ∈ I* the optimal set J*_i of aligned SCUs is given by J*_i = {j | x*_{i,j} = 1}.
- Finally, t*_{i,j} is the optimal sales resource allocation for i ∈ I* and j ∈ J*_i.
This interpretation of an optimal solution x*_{i,j} and t*_{i,j} illustrates that the model is 'scarce' in the sense that two types of decision variables cover all four subproblems of interest. This suggests that the model is in fact a suitable representation of the overall decision problem. Moreover, it comprises the first step towards a solution of the problem.
The aim of the following section is to present heuristic methods which balance computational
tractability with optimality.
3 Approximation Methods
This section discusses a solution approach which has been developed specifically for the
model. Two reasons led to this development. First, standard methods of mixed-integer programming
seem to lend themselves to solving the linearized version of the model. However,
even for modestly sized problems the formulation translates into very large mixed-integer
programs which in turn result in prohibitive running times (for details see Haase and
Drexl 1996). In fact it is conjectured that - except for smaller problem sizes - no exact
algorithm will generally produce optimal solutions in a reasonable amount of time. Second,
apart from exact methods, so far no heuristic is available for solving the model. The simulated
annealing procedure of Skiera and Albers (1994), (1996) and Skiera (1996) solves
two of our subproblems, i.e. the sales territory alignment and the sales resource allocation.
Unfortunately, it does not tackle the sales force sizing and the salesman location subprob-
lems. In addition, although dealing only with two of the four subproblems in general the
running times of the simulated annealing procedure do not allow to solve large-scale problem
instances in a reasonable amount of time.
Our heuristic may be characterized as a construction and improvement approach. It
consists of the Procedure Construct and the Procedure Improve.
- The Procedure Construct determines the sales force size and hence the number of salesmen. In addition, it calls two other procedures: the Procedure Locate, which
computes the SCU in which each salesman has to be located and the Procedure Align
which aligns the SCUs to the already existing sales territory centers.
- The Procedure Improve systematically interchanges adjacent SCUs of two different
clusters. This way it improves the feasible solution which is the outcome of the Procedure
Construct.
Note that the sales resource allocation subproblem can be solved as soon as all sales
territories are aligned by equation (13) or equation (14). Now, first we describe the procedures
designed to generate feasible solutions followed by the description of equations (13) and (14).
Then the improvement procedure will be presented.
3.1 Compute Feasible Solution
Recall J to denote the set of SCUs, I to be the set of SCUs which are potential
locations, and N j to denote the set of SCUs which are adjacent to SCU j, respectively. In
addition, let denote
S     the minimum number of sales territory centers which might be established (S ≤ S̄)
S̄     the maximum number of sales territory centers which might be established (S̄ ≤ |I|)
s     the 'current' number of sales territory centers (S ≤ s ≤ S̄)
I1    the set of selected locations (|I1| = s)
I0    the set of non-selected locations
L(I1) the locations (i.e. SCUs) of the sales territory centers i ∈ I1
j(i)  the SCU j where sales territory center i ∈ I1 is located in
i(j)  the sales territory center i to which SCU j is assigned to
J0    the set of SCUs which are not yet aligned (initially J0 = J)
J_i   the set of SCUs which are aligned to sales territory center i ∈ I1
C_i   the sum of sales contributions of location i
LB    a lower bound on the optimal objective function value
Based on these definitions the set A i of SCUs which might be aligned to sales territory center
i may be formalized according to equation (12):

    A_i = { j ∈ J0 | N_j ∩ (J_i ∪ {j(i)}) ≠ ∅ }                     (12)
Note that the number of sales territory centers equals the number of salesmen (i.e. the sales
force size) which in turn equals the number of locations. Therefore, some of the newly introduced
parameters are superfluous, but this redundancy will be helpful for the description of
the procedures.
In the sequel Z will denote the objective function value of a feasible solution at hand.
Clearly, Z is a function of the decision variables x i;j and t i;j . The algorithms do not operate
on the set of x i;j variables, only the t i;j variables will be used directly. In what follows it is
more convenient to express the x_{i,j} decisions partly also in terms of the number of salesmen s and in terms of L(I1), respectively. Redundancy will simplify the formal description and ease understanding substantially. With respect to this redundancy Z(...) will be used in different variants, but from the local context it will be evident what it stands for.
We introduce a global variable lose[h; i]; h 2 I; i 2 I, which is used for locating the
salesmen in the set of potential locations. The variable lose[h; i] is a means for selecting some
elements of a probably large set I quickly. The meaning of lose[h; i] will be explained below
in more detail.
An overall description of the Procedure Construct is given in Table 1. Some comments
shall be given as follows: The Procedure Construct just consists of an overall loop which
updates the current number s of salesmen under consideration. Then it passes calls to Procedure
Locate and to Procedure Align and afterwards evaluates the resource allocation by
equation (13) or (14). Finally, the resulting objective function values Z(s, x, t) are compared with the best known lower bound LB, which is updated whenever possible. Note, the number of salesmen s for which search is performed is - without loss of generality - restricted to the interval S ≤ s ≤ S̄.
Table 1. Procedure Construct
  Initialize
  call Procedure Locate (s)
  call Procedure Align (L(I1))
  evaluate resource allocation by equation (13) or (14)
When a call to Procedure Locate is passed we start with |I1| = s selected locations, which implies I1 ∪ I0 = I, and initialize L(I1). Note that the Procedure Locate uses
as calling parameter only the current number s of locations. A description of the Procedure
Locate is given in Table 2.
Table 2. Procedure Locate
  Initialize
  WHILE improve DO
    update I0 and L(I1)
    update lose[h, i]
In the Procedure Locate the for-loop tells us that as starting locations L(I1) the 'first' s elements of the set I of potential locations are chosen. The procedure stops when within the for-loop no further improvement of the set of locations can be found. As an outcome we know the locations L(I1) of the current number s of salesmen.
Capitalizing on the definitions given above a compact description of the Procedure Align
is given in Table 3. Within the while-loop one of the not yet aligned SCUs is chosen and
aligned to one of the already existing sales territory centers. The criterion for choosing SCU
h and sales territory center i is motivated below.
Table 3. Procedure Align (L(I1))
  Initialize
  WHILE J0 ≠ ∅ DO
    compute (h, i) such that c_{h,i}/C_i is maximal
    update A_i
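The greedy alignment step can be sketched in a few lines. The data layout is assumed: c[i][h] holds the sales contribution of SCU h when served from center i, neighbors[h] is the adjacency set of SCU h, and C_i is initialized with the contribution of the center's own SCU. The code is an interpretation of the procedure described above, not a verbatim transcription.

```python
def greedy_align(centers, J0, c, neighbors):
    """Greedy SCU alignment: repeatedly pick the unaligned SCU h and the
    center i maximizing c[i][h] / C_i among connectivity-preserving pairs."""
    territory = {i: {i} for i in centers}       # each center covers its own SCU
    C = {i: c[i][i] for i in centers}           # running sales contribution
    unaligned = set(J0) - set(centers)
    while unaligned:
        candidates = [(c[i][h] / C[i], h, i)
                      for i in centers for h in unaligned
                      if any(k in neighbors[h] for k in territory[i])]
        if not candidates:
            break                               # no connected extension left
        _, h, i = max(candidates)
        territory[i].add(h)
        C[i] += c[i][h]
        unaligned.remove(h)
    return territory
```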
Apparently, as a final step of the overall Procedure Construct the sales resource allocation
subproblem has to be solved. This is done by evaluating equation (13) or (14). If the calling time elasticities coincide, i.e. b_h = b for all h ∈ J_i, the selling time of salesman i is allocated in closed form by

    t_{i,h} = T_i * c_{i,h}^{a} / Σ_{k∈J_i} c_{i,k}^{a}    with a = 1/(1-b)      (13)

In the general case of SCU-specific elasticities b_h the allocation is done by equation (14), where β_i is an 'average' elasticity
which has to be calculated by bisection search. Note, it is beyond the scope of this paper to
show how equation (14) can be derived and the reader is referred to Skiera and Albers
(1994).
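The allocation step can be sketched as follows. The closed form corresponds to equation (13) for a common elasticity b. For SCU-specific elasticities the sketch solves the first-order conditions directly by bisection on the Lagrange multiplier; this takes the place of the bisection on the 'average' elasticity in equation (14), whose exact form is not reproduced here.

```python
def allocate_equal_elasticity(T_i, c, b):
    """Equation (13): t_h proportional to c_h**(1/(1-b)), summing to T_i."""
    a = 1.0 / (1.0 - b)
    weights = {h: c_h ** a for h, c_h in c.items()}
    total = sum(weights.values())
    return {h: T_i * w / total for h, w in weights.items()}

def allocate_general(T_i, c, b):
    """SCU-specific elasticities b[h] in (0, 1): solve the first-order
    conditions c[h]*b[h]*t[h]**(b[h]-1) = lam with sum of t[h] = T_i,
    searching the multiplier lam by bisection."""
    def time_for(lam):
        alloc = {}
        for h in c:
            try:
                alloc[h] = (lam / (c[h] * b[h])) ** (1.0 / (b[h] - 1.0))
            except OverflowError:
                alloc[h] = float("inf")
        return alloc
    lo, hi = 1e-12, 1e12
    for _ in range(200):
        lam = (lo + hi) / 2.0
        # allocated time decreases in lam, so raise lam when over budget
        if sum(time_for(lam).values()) > T_i:
            lo = lam
        else:
            hi = lam
    return time_for((lo + hi) / 2.0)

# Made-up example with three SCUs aligned to one salesman.
c = {"j1": 30.0, "j2": 18.0, "j3": 12.0}
print(allocate_equal_elasticity(40.0, c, b=0.4))
print(allocate_general(40.0, c, b={"j1": 0.5, "j2": 0.4, "j3": 0.3}))
```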
3.2 Improve Feasible Solution
In general feasible solutions at hand can easily be improved by the following simple Procedure
Improve. For a compact description of the procedure we define two boolean parameters:
add(V_i, j)  = TRUE if the SCUs of V_i ∪ {j} are connected, FALSE otherwise
drop(V_i, j) = TRUE if the SCUs of V_i \ {j} are connected and j ≠ j(i), FALSE otherwise

The function add(V_i, j) defines only those alignments to be feasible where we add SCU j to the sales territory V_i such that the newly derived sales territory consists of connected SCUs only. Similarly, the function drop(V_i, j) admits only alignments to be feasible where we drop SCU j from the sales territory V_i without running into disconnectedness. In other words:
Both functions define those moves of an SCU j to/from a sales territory V i to be feasible
where the outcome does not violate the connectivity requirement. As a consequence, only
those SCUs are suspected move candidates which are located on the border of each of the sales
territories. In this respect the functions add(V_i, j) and drop(V_i, j) are complementary. As a
consequence the Procedure Improve might be characterized as an interchange method, too.
Note that 'add' and `drop' are used in discrete location theory also (cp. e.g. Mirchandani
and Francis 1990 and Francis et al. 1992). Clearly, the resource allocation t i;j has to be
updated with respect to each move by evaluating equation (13) or (14).
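A minimal sketch of how the two feasibility tests could be implemented is given below. It assumes an adjacency predicate between SCUs and reads j(i) as the SCU in which center i is located; both are assumptions made for illustration only.

def is_connected(scus, adjacent):
    # Graph-search check that a territory's SCUs form one connected region;
    # adjacent(a, b) tells whether two SCUs share a common border.
    scus = list(scus)
    if not scus:
        return True
    seen, frontier = {scus[0]}, [scus[0]]
    while frontier:
        a = frontier.pop()
        for b in scus:
            if b not in seen and adjacent(a, b):
                seen.add(b)
                frontier.append(b)
    return len(seen) == len(scus)

def add_feasible(territory, j, adjacent):
    # add(V_i, j): adding SCU j must keep the territory connected.
    return is_connected(list(territory) + [j], adjacent)

def drop_feasible(territory, j, center_scu, adjacent):
    # drop(V_i, j): SCU j may leave only if it does not host the center
    # (j != j(i)) and the remaining SCUs stay connected.
    return j != center_scu and is_connected([s for s in territory if s != j], adjacent)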
A formal description of the Procedure Improve is given in Table 4. For the sake of compactness,
the calling parameter V denotes the vector of sales territory alignments currently under
investigation and Z(V) the corresponding objective function value. The notation used in the
table tells us that the objective function value has to be computed with respect to the current
alignment under investigation in which SCU j is subtracted from sales territory V i(j) while
sales territory V i is augmented by SCU j. Clearly, the computation of the objective function
requires an update of the resource allocation t i;j via equation (13) or (14) as well.
Table 4. Procedure Improve (V)
  Initialize improve = TRUE
  WHILE improve DO
    determine the best feasible add/drop move and perform it if it improves Z(V)
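Using feasibility tests like the ones sketched above, the move search of Procedure Improve can be outlined as follows. objective, add_ok and drop_ok are injected callables (assumptions), and the re-allocation via equation (13)/(14) is assumed to happen inside objective.

def improve(territories, objective, add_ok, drop_ok):
    # Steepest-ascent sketch: evaluate every feasible single-SCU move
    # (drop j from its territory, add it to another) and perform the best
    # improving one, until no improving move remains.
    best_z = objective(territories)
    improved = True
    while improved:
        improved = False
        best_move = None
        for old, members in territories.items():
            for j in list(members):
                for new in territories:
                    if new == old:
                        continue
                    if drop_ok(territories[old], j) and add_ok(territories[new], j):
                        trial = {i: [s for s in v if s != j] for i, v in territories.items()}
                        trial[new] = trial[new] + [j]
                        z = objective(trial)
                        if z > best_z:
                            best_z, best_move = z, trial
        if best_move is not None:
            territories, improved = best_move, True
    return territories, best_z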
Finally, we shall explain in more detail how the different procedures work and further
motivate why they are constructed the way they are:
• First, without any formal treatment we start off with the observation that - for 'reason-
able' parameters c i;j and f i - the objective function is concave with respect to the sales
force size, i.e. the number of salesmen. Therefore, 'gradient search' within the interval
between the lower and the upper bound is implemented in the Procedure Construct.
• Second, the global variable lose[h; i] is used in the Procedure Locate like in tournament
selection. The tournament is finished when the 'best' player h (i.e. the one which so
far has lost the least number of games) does not win against any other player k ∈ I 1 .
As already mentioned above this is an effective means for selecting some elements of a
probably large set quickly.
• Third, the Procedure Align is greedy in the sense that the steepest ascent of the objective
function is used as criterion for the choice of the next SCU to be aligned. More precisely,
the choice depends on the ratios c h;i / C i , i.e. the rationale is to take care of the relative
weights of the expected sales contributions.
• Fourth, the Procedure Improve belongs to the variety of local search methods (for a
survey of advanced local search methods cp. e.g. Pesch 1994). In order to keep our
explanations as simple as possible we disregard the resource allocation t i;j . Starting
with an incumbent sales territory alignment x we search all its neighbors x' ∈ H(x),
where H(x) is the set of feasible solutions which are properly defined by the functions
add(V i ; j) and drop(V i ; j); H(x) is called the neighborhood of x.
Searching over all neighbors x' ∈ H(x) in a steepest ascent manner may be characterized
as a 'best fit strategy'. By contrast, a 'first fit strategy' might be less time consuming
while presumably producing inferior results.
• Fifth, the Procedures Construct and Improve are deterministic methods. In the
next section we will show that these simple deterministic methods already produce very
promising results. Therefore, there is no necessity to make the methods more sophisticated
(and more complicated) by incorporating either self-adaptive randomization concepts
(cp. e.g. Kolisch and Drexl 1996) or procedure parameter control techniques
adopted from sequential analysis (cp. e.g. Drexl and Haase 1996). Furthermore, if
desired it is straightforward to incorporate simulated annealing randomization schemes
(for a comprehensive introduction into the theory and techniques of simulated annealing
cp. e.g. Johnson et al. 1989, 1991).
• Finally, when solving difficult combinatorial optimization problems one is likely to
be trapped in local optima when searching greedily in a steepest ascent manner only.
Therefore, numerous researchers have devised (less greedy) steepest ascent/mildest descent
procedures which provide the ability to escape from local optima while avoiding
cycling by setting some moves 'tabu' (for a comprehensive introduction into the
theory and techniques of tabu search see e.g. Glover 1989, 1990). While, clearly, there
might be some potential for improvement, there seems to be no necessity in this respect
to incorporate tabu search techniques.
4 Experimental Evaluation
The outline of this section is as follows: First, we elaborate on the instances which are used
in our computational study. Second, we describe how to compute benchmark solutions in
order to judge the performance of the methods presented in the preceding section. Third,
numerical results will be presented.
Even in the current literature, the systematic generation of test instances does not receive
much attention. Generally, two possible approaches are adopted in the literature when
having to come up with test instances. First, practical cases. Their strength is their high
practical relevance, while the obvious drawback is the absence of any systematic structure
allowing one to infer general properties. Thus, even if an algorithm performs well on some
practical cases, it is not guaranteed that it will continue to do so on other instances as
well. Second, artificial instances. Since they are generated randomly according to predefined
specifications, their plus lies in the fact that fitting them to certain requirements such as
given probability distributions poses no problems. (A detailed procedure of this kind for
generating project scheduling instances has recently been proposed by Kolisch et al. 1995.)
However, they may reflect situations with little or no resemblance to any problem setting of
practical interest. Hence, an algorithm performing well on several such artificial instances
may or may not perform satisfactorily in practice. Therefore, we decided to devise a
combination of both approaches, thereby attempting to keep the strengths of both approaches
while avoiding their drawbacks.
4.1 Practical Case
First, we used the data of a case study which was compiled by Skiera (1996) in order
to evaluate his simulated annealing procedure. This instance is roughly characterized as
follows: The company is located in the northern part of Germany. The sales region covers
the whole area of Germany. The sales territory is partitioned into 95 SCUs (two-digit postal
areas). The number of salesmen employed is ten, where the location of each salesman, i.e. the
sales territory center, is assumed to be fixed. Hence the sales force sizing and the salesman
location subproblems are (presumed to be) of no relevance. For the remaining two subproblems
the solutions currently used by the company and the solution computed by Skiera (1996)
are available as a point of reference for our procedure. While, clearly, all the data
available are of great practical interest, we refrain from the tedious task of citing
all the respective details. 2
4.2 Generation of Instances
Second, we generated instances at random. We assumed that only two instance-related factors
do have a major impact on the performance of the algorithms, viz. the cardinality of the set
I of potential sales territory centers and the cardinality of the set J of SCUs, respectively.
Both factors relate to the 'size' of a problem, hence (I; J) denotes the size of an instance.
When generating instances at random a critical part is the specification of a connected
sales territory. In order to do so we employ the Procedure Generate, which is able to generate
a wide range of potential sales territories while preserving connectivity. The basic idea is to
define a set K of unit squares located on a grid.
For every unit square (α; β) ∈ K the set of adjacent unit squares N (α;β) , or neighbours, is
defined as the unit squares of K directly above, below, left and right of (α; β).
The Procedure Generate is formally described in Table 5. As calling parameters the set of
sales territory centers I and the set of SCUs J are used. Note that - starting with a 'central'
unit square - the set M is incremented until it equals the set of SCUs J
which have to be generated, all the while preserving connectivity of the sales territory. Similar
to the Procedure Align, A denotes the set of those unit squares of the grid which are candidates
to be aligned to the already generated sales territory. In a last step the set of sales territory
centers I is chosen at random.
Table 5. Procedure Generate (I; J)
  Initialize M with the 'central' unit square and A with its neighbours
  WHILE M contains fewer unit squares than |J| DO
    choose (α; β) ∈ A at random; add it to M and update A
  choose I ⊆ J sales territory centers at random
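The grid-growing idea can be sketched as follows. The choice of the 'central' starting square and the four-neighbour adjacency are assumptions made for illustration; grid is the set K of admissible unit squares.

import random

def generate(num_scus, num_centers, grid, seed=None):
    # Grow a connected region of unit squares and finally draw the
    # potential sales territory centers from it at random.
    rng = random.Random(seed)
    def neighbours(square):
        a, b = square
        return {(a - 1, b), (a + 1, b), (a, b - 1), (a, b + 1)} & grid
    start = min(grid)                      # stands in for the 'central' square
    region = {start}                       # the set M of the paper
    candidates = set(neighbours(start))    # the set A of the paper
    while len(region) < num_scus and candidates:
        square = rng.choice(sorted(candidates))
        region.add(square)
        candidates = (candidates | neighbours(square)) - region
    centers = rng.sample(sorted(region), num_centers)
    return region, centers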
It is easy to verify that the Procedure Generate is capable of producing a large range of
quite differently shaped sales territories. Nevertheless, the question is whether this construction
process, which basically relies on unit squares and hence on SCUs of equal size, produces
instances which are meaningful for the methods to be evaluated. The answer is 'yes', because
the grouping, i.e. the building of larger units, is just what the Procedure Align does.
Summarizing, the instances treated in the computational study are characterized as follows:
• The number of SCUs |J| is taken from {50, 100, 250, 500}.
• The number of potential sales territory centers |I| is taken from {10, 25, 50}.
• The scaling parameter g j is chosen at random out of the interval [10, 210].
• The expected sales c i;j are computed as a decreasing function of the distances d i;j between
i and j. The elasticity parameter b was set to 0.3 with respect to empirical findings of Albers
and Krafft (1992). As a consequence - travel times being proportional to travel distances
d i;j - expected sales c i;j decrease the longer the distance between i and j is, and vice versa.
• The fixed costs f i of the sales territory centers are drawn at random out of the interval
[750, 1,250].
• The maximum workload T i per period and salesman is set to 1,300 for all i ∈ I. This is
an estimate of the annual average time salesmen in Germany have to work (cp. Skiera
and Albers 1994).
• The lower bound S for the number of sales territory centers is set to 0 while the upper
bound S equals |I|.
2 All the instances used in this study are available on our ftp-site under the path
/pub/operations-research/salesforce via anonymous ftp.
Note that calculating the scaling parameter g j at random as described above might not be
the best choice whenever the data are spatially autocorrelated. While, certainly, it is not that
difficult to generalize the generator such that autocorrelation is covered as well, we do not
follow these lines here for the following reason: The practical case described in Subsection
4.1 has spatially autocorrelated data. Solving the practical case with our procedures is by no
means more difficult than solving the artificial instances (details are provided below). Hence,
we refrain from introducing additional parameters in order to get a more 'realistic' instance
generator.
Clearly, only 'reasonable' combinations of J and I are taken into account (details are
provided below). In addition, due to the computational effort required to attempt all the
sizes only ten instances were considered in the experiment for each instance class (J; I).
4.3 Computation of Benchmarks
Unfortunately, it is not possible to solve the nonlinear mixed-integer programming (NLP-)
model (4) to (10) by the use of a 'standard' solver. Hence, even for small-sized problem
instances there is no 'direct' way to get benchmarks. Consequently, in a companion paper
(cp. Haase and Drexl 1996) the model (4) to (10) has been reformulated as a mixed-integer
linear programming (MIP-) model. In order to do so one has to replace the nonlinear
objective by a piecewise-linear one such that an optimal solution of the MIP-model provides
a lower bound for the NLP-model. Clearly, solving the LP-relaxation of the MIP-model yields
an upper bound of the optimal objective of the NLP-model and, hence, benchmarks.
The LP-relaxation of the MIP-model can be solved directly by the use of one of the
commercially available LP-solvers. This way it is possible to compute upper bounds for
problems of the sizes considered in our study in a reasonable amount
of time. A more efficient approach uses the MIP-model within a set partitioning/column
generation framework. Going into details is beyond the scope of this paper and the interested
reader is referred to Haase and Drexl (1996).
4.4 Computational Results
The algorithms have been coded in C and implemented on a 133 MHz Pentium machine
under the operating system Linux. The grid K used by the Procedure Generate is controlled
by a factor FAC. Note that FAC > 1 serves to generate
sales territories where not all units form part of the overall sales region, i.e. lakes and other
'non-selling' regions can be included as well.
Table 6 provides a comparison of lower and upper bounds. Columns 1 and 2 characterize
the instance class, i.e. the problem size under consideration in each row, in terms of |J| and |I|,
respectively. Columns 3 and 4 report the results which have been obtained using the LP-solver
of CPLEX (cp. CPLEX 1995). More specifically, column 3 provides the average upper bound
UB which has been obtained by solving the LP-relaxation of the linearized version. Column
4 shows the average CPU-time in sec required to compute UB. Recall that averages over ten
instances for each row, i.e. instance class (J; I), are provided. Columns 5 to 7, with the header
CONIMP, present the results of the Procedures Construct and Improve.
Table 6. Comparison of Lower and Upper Bounds
              CPLEX                      CONIMP
 |J|   |I|    UB           CPU           LB           CPU      GAP
        50    13,271.83       49.20      12,736.17      2.60    4.04
        50    29,583.04      172.60      28,464.76     13.50    3.73
 500    50    133,702.41   3,424.34      130,962.26   626.70    2.05
More specifically, LB cites the average best feasible solution, i.e. the lower bound computed.
CPU denotes the average CPU-time in sec required by the algorithms to compute LB; an entry
≤ 1 denotes that the average is only an ε above zero sec. Finally, GAP = (UB − LB)/UB · 100
measures the average percentage deviation between upper and lower bound, i.e. the solution
gap. Note that GAP covers both the tightness of the LP-relaxation and the deviation of the
lower bounds obtained from the optimal objective function values. On average, the solution
gap roughly equals 3%. Hence, the feasible solutions computed indeed must be very close to
the optimal ones.
Table 7. Comparison of the Procedures Construct and Improve
              CON                        IMP
 |J|   |I|    LB           CPU           LB           CPU      API
        50    28,416.35      13.40       28,464.76      0.10    0.17
 500    50    129,746.85    544.70       130,962.26    82.00    0.94
Now the question shall be answered which of the Procedures Construct or Improve contributes
to which extent to the fact that the lower bounds are very close to the optimum.
Table 7 gives an answer. The header CON groups the information provided with respect
to the Procedure Construct while the header IMP does so for the Procedure Improve. In
the former case LB denotes the lower bound obtained while in the latter one it shows the
additional improvement. In both cases CPU denotes the required CPU-time in sec. API
provides the average percentage improvement.
It has already been mentioned that our model is more general than the one of Skiera and
Albers (1996) because it covers the sales force sizing and the salesman location subproblems
also. Consequently, our methods cover the more general case, too. Surprisingly, although
being more general, our methods are more efficient than the simulated annealing method
of Skiera and Albers. While our algorithms solve the practical case close to optimality
in a CPU-time of ≤ 1 sec, the simulated annealing method requires up to ten minutes on an
80486 DX-33 machine to do so. Moreover, the solution computed by our algorithms is slightly
better than the one found by the simulated annealing method; the profit increase is about
5% compared with the alignment used by the company so far.
Clearly, the run-times of the simulated annealing algorithm will become prohibitive when
applied to large-scale problem instances.
Regarding the results reported in Tables 6 and 7 some important facts should be emphasized:
• Roughly speaking, the solution gap decreases from 4% to 2% as the size of the instance
increases, for two reasons. First, relaxing the connectivity requirements makes
the LP-bounds for small problem instances weak compared to large ones. Second, the
quality of the piecewise-linear approximation increases with increasing problem size and
hence makes the LP-bounds tighter.
• The larger the cardinality of the set I, the more time has to be spent in evaluating the
size and the location of the sales force. Clearly, this takes the more CPU-time the larger
I is in relation to J. From another point of view, if there is no degree of freedom with
respect to the size of the sales force and the location of the salesmen, i.e. if both are fixed
in advance, then the alignment and the allocation subproblems are also solved very
effectively and very efficiently by our algorithms. This decidedly underlines the superiority
of our approach compared to the one of Skiera and Albers (1996).
• In general, the quality of the solutions computed by the Procedure Construct is already
so good that only minor improvements can be obtained subsequently. In other words,
appropriately exploiting the degrees of freedom on the level of the sizing and the locating
decisions already gives an overall sales force deployment which is hard to improve
by realigning some of the SCUs.
The scope of the experiment conducted so far was to show how well our algorithms
work. Strictly speaking, this can only be done with respect to the optimal objective function
value or at least an upper bound. Therefore, the experiment was limited to instances of a
size for which the LP-relaxation of the MIP-model can be solved in reasonable time. Clearly,
there is no obstacle to using the algorithms on larger instances, which might become relevant
e.g. in a global marketing context. The CPU-times required by our procedure show that even
for really huge instances comprising thousands of SCUs it is possible to compute near-optimal
solutions within some hours of computation. Summarizing, there is no obstacle to using the
algorithms even on very large instances.
5 Insights for Marketing Management
In what follows we will discuss managerial implications of our findings. More precisely, we
will state some insights and subsequently assess their validity on basis of experiments.
Insight 1: The results are robust with respect to wrong estimates of parameters.
In order to evaluate Insight 1 we took one of the randomly generated instances. Now assume
that b = 0.3 and the c i;j which are generated along the lines described in Subsection 4.2 for
all j ∈ J and i ∈ I are the (unknown, but) 'true' values of the parameters of the sales response
function. The perturbed parameters b̄ and c̄ i;j which are used in the experiment are then
generated via data perturbation as follows: calculate b̄ = b + Δb and choose c̄ i;j at random
from an interval around c i;j whose width is controlled by Δc, where Δc and Δb are perturbation
control parameters.
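A possible implementation of this perturbation step is sketched below. Since the exact interval for c̄ i;j is not spelled out above, the multiplicative form used here is an assumption.

import random

def perturb(b, expected_sales, delta_b, delta_c, seed=None):
    # Derive 'wrongly estimated' parameters from the true ones: shift the
    # elasticity by delta_b and draw each expected-sales value uniformly from
    # a relative interval of half-width delta_c around its true value.
    rng = random.Random(seed)
    b_perturbed = b + delta_b
    c_perturbed = {key: value * rng.uniform(1.0 - delta_c, 1.0 + delta_c)
                   for key, value in expected_sales.items()}
    return b_perturbed, c_perturbed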
Table 8 presents the results of this study. Across rows and columns we provide the percentage
decrease (OPT − ACT)/OPT · 100 of profit, where OPT denotes the 'optimal' objective
function value which has been calculated based upon the 'true' parameter values while ACT
is the one which has been computed with respect to the perturbed parameters. The results
show that even in the case when the parameters are estimated very 'badly' (i.e. all of them are
under- or overestimated drastically) the percentage decrease of profit hardly exceeds 3%.
Table 8. Robustness of the Model
Δb    -0.10    -0.05    0.00
Insight 2: Profit is not that sensitive with respect to sales force size.
In order to evaluate Insight 2 we once more took the randomly generated instance used above.
Then, the size of the sales force was set to the levels 29, 30, ... by fixing s accordingly.
Table 9 provides the results of this experiment.
OFV denotes the objective function value (normalized to the interval [0, 1]) which has
been computed by our methods with respect to the size s. The results, which are typical for
various other experiments not documented here, support Insight 2, which means that the
objective function is fairly flat near the optimum number of salesmen. Hence, the 'flat maximum
principle' (cp. Chintagunta 1993) is valid in this context as well.
Table 9. Profit as Function of Sales Force Size
 s     OFV
 29    0.99577
Insight 3: Profit is sensitive with respect to the location of the salesmen.
Once more we relate to the instance already used twice. Table 10 provides part of the protocol
of a run. More precisely, the outcome of some typical iterations of the Procedure Locate,
in which potential locations are evaluated systematically, is given in terms of normalized
objective function values OFV(s). Similar to Table 9 the size of the sales force is fixed in each
row. Clearly, the process converges to the best found objective function value (hence OFV =
1.000 in column seven), but the values go up and down depending on the specific old and new
locations under investigation. Hence, the 'flat maximum principle' is not valid with respect
to the location of the salesmen.
Table 10. Profit as Function of Location of Salesmen
 s     OFV over successive iterations
 29    0.934    0.960    0.947    0.973    0.986    1.000
The insights evaluated in Tables 8, 9 and 10 can be summarized as follows:
• For reasonable problem parameters the size of the sales force does not affect the firm's
profit that much.
• The location of the salesmen in general will affect the firm's profit drastically. Consequently,
existing alternatives must be evaluated.
• Fortunately, the model is very robust with respect to the estimation of the parameters of
the sales response function. Even in the case when there is a systematic estimation bias
(over- or underestimation of all the parameters) the decision is not that bad in terms
of the firm's profit. Usually, there is no systematic bias; hence, the sales force deployment
evaluated by the algorithms will be superb.
6 Summary and Conclusions
In this paper it is shown how four interrelated sales force deployment subproblems can be
modelled and solved simultaneously. These subproblems are: sizing the sales force, salesman
location, sales territory alignment, and sales resource allocation. More specifically, an integrated
nonlinear mixed-integer programming model is formulated. For the solution of the model we
present a newly developed effective and efficient approximation method.
The methods are evaluated on two sets of instances. The first one stems from a case study
while the second one is based on the systematic generation of a representative set of problem
instances covering all problem parameters at hand. The results show that the method allows
us to solve large-scale instances close to optimality very fast.
The methods, which provide lower bounds for the optimal objective function value, are
benchmarked against upper bounds. On average the solution gap, i.e. the difference between
upper and lower bound, is roughly 3%. Furthermore, it is shown how the methods can be
used to analyze various problem settings which are of high practical relevance. Hence,
the methods presented in this paper are effective and efficient and will be very helpful for
marketing management.
--R
Entscheidungshilfen f?
"Steuerungssysteme f?r den Verkaufsau-endienst"
"Allocating selling effort via dynamic programming"
"A multistage decision model for salesforce manage- ment"
Applied Mathematical Programming.
"Investigating the sensitivity of equilibrium profits to adverstising dynamics and competitive effects"
Sales Force Management.
CPLEX Inc.
"Sequential-analysis based randomized-regret-methods for lot-sizing and scheduling"
"A heuristic approach to selecting sales regions and territories"
"Solving a large scale districting problem: a case report"
Facility Layout and Location - An Analytical Approach
"A sales territory alignment program and account planning system (TAPS)"
search - Part I"
search - Part II"
"Deckungsbeitragsorientierte Verkaufsgebietseinteilung und Standortplanung f?r Au-endienstmitarbeiter"
"Sales force deployment by mathematical programming"
"Effective sales territory development"
"Experiences with a sales districting model: criteria and implementation"
"Sales force deployment models"
"Optimization by simulated annealing: an experimental evaluation - Part I: graph partitioning"
"Optimization by simulated annealing: an experimental evaluation - Part II: graph colouring and number partitioning"
"Adaptive search for solving hard project scheduling problems"
"Characterization and generation of a general class of resource-constrained project scheduling problems"
"Empirical and judgement-based sales-force decision models: a comparative analysis"
"Using contingency analysis to select selling effort allocation methods"
"Sales territory alignment to maximize profit"
"A user-oriented model for sales force size, product, and market allocation decisions"
"Impact of resource allocation rules on marketing investement-level decisions and profitability"
Learning in Automated Manufacturing - A Local Search Approach
"Territory sales response"
"Turfing"
"Sales territory design: an integrated approach"
"The multiple-choice knapsack problem"
"Verkaufsgebietseinteilung zur Maximierung des Deckungsbeitrages"
"COSTA: Ein Entscheidungs-Unterst-utzungs-System zur deckungsbeitragsmaximalen Einteilung von Verkaufsgebieten"
"COSTA: Contribution optimizing sales territory alignment"
"Integer programming models for sales territory alignment to maximize profit"
"Integer programming models for sales resource allocation"
"Sales territory alignment: a review and model"
--TR | marketing models;salesman location;Application/Distribution of Beverages;Sales Force Sizing;Sales Territory Alignment;Sales Resource Allocation |
334735 | Transparent replication for fault tolerance in distributed Ada 95. | In this paper we present the foundations of RAPIDS ("Replicated Ada Partitions In Distributed Systems"), an implementation of the PCS supporting the transparent replication of partitions in distributed Ada 95 using semi-active replication. The inherently non-deterministic executions of multi-tasked partitions are modeled as piecewise deterministic histories. I discuss the validity and correctness of this model of computation and show how it can be used for efficient semi-active replication. The RAPIDS prototype ensures that replicas of a partition all go through the same history and are hence consistent. | Introduction
Virtual nodes (i.e., partitions in Ada 95) in a distributed
application can be rendered fault-tolerant by replication. A
failure of a replica of a replicated partition can be masked
thanks to the remaining replicas, which ensure that the partition
remains available in spite of the failure.
Despite Ada's strong position in the development of
dependable systems, the Ada 95 language standard
addresses the issue of replication in distributed systems
only in an "implementation permission" clause:
"An implementation may allow separate copies of a
partition to be configured on different processing
nodes, and to provide appropriate interactions
between the copies to present a consistent state of the
partition to other active partitions."
With this statement, the standard argues in favor of a transparent
solution to replication: as a distributed application is
configured only after it has been programmed and com-
piled, it follows that the intent is that replication should be
offered in a way transparent to the replicas of the replicated
partition itself (replica transparency).
Note that although distribution is not quite transparent in
Ada 95, this does not imply that replication shouldn't be
either! The quoted paragraph also clearly addresses the
issue of replication transparency by stating that a replicated
partition should present a "consistent state" to other
active partitions, which I take to mean that its behavior
should be indistinguishable from that of a singleton parti-
tion. Replication is therefore also seen as transparent
towards the other active partitions in the system.
In this paper, I present some results from my work on
replication for fault tolerance in distributed Ada 95
[Wol98]. Replication in Ada 95 is complicated by the
inherent non-determinism of partitions; a simple state
machine approach [Sch90] is not applicable. I assume the
following system model:
. A distributed Ada 95 application executes on an asynchronous
distributed system (no timing assumptions,
no synchronized clocks).
. The distributed system may be heterogeneous.
. Partitions do not share memory, i.e., there are no passive
partitions in the system.
. Active partitions communicate by remote procedure
calls through reliable channels over the network.
. Partitions are subject to crash failures only.
. Partitions are multi-tasked: incoming RPCs are handled
concurrently in separate tasks.
. A view-synchronous group communication system
provides consistent membership information and various
reliable multicast primitives.
. Replicas are organized as a group.
The goal of replication is to render a partition fault-tolerant
against crash failures in a way that ideally preserves both
replica and replication transparency for the application.
The rest of this paper is structured as follows. I briefly
review the various causes for non-determinism in Ada 95
in section 2. Section 3 presents the piecewise computation
model, which abstracts from the vagaries of non-deterministic
executions. In section 4, I give brief consideration to
active and passive replication schemes, before describing
in some detail semi-active replication in section 5. Section
6 gives a brief overview of Rapids, an implementation of a
replication manager for the GNAT system using semi-active
replication.
2 Non-Determinism in Ada 95
When a deterministic partition is to be replicated, active
replication using the state machine approach of [Sch90]
can be used. Replica consistency can be ensured by fulfilling
the following two conditions:
. Atomicity: if one replica handles a request r, then all
replicas do.
. Order: all replicas handle all requests in the same
order.
This is insufficient if replicas are non-deterministic. In a
multi-tasked partition for instance, task scheduling may
well violate the order imposed on requests, leading to different
state evolutions of replicas. Instead, the sequence of
state accesses must by ordered identically on all replicas.
There are several causes of non-determinism in Ada 95.
Besides the behavior of the tasking system, e.g. in its
choices made for selective accept statements, explicit timing
dependencies such as delay statements are the most
obvious sources of non-determinism. Implicit timing
dependencies also may cause non-deterministic behavior,
e.g. the use of a pre-emptive time-sliced task scheduler or
simply different message delivery times (even if their order
is preserved) originating in network delays.
Because there is no coordinated time in a distributed
Ada 95 application, these dependencies on time make the
use of a deterministic task scheduler - i.e., one that always
makes the same decisions at corresponding task dispatching
points given the same set of tasks - impractical for
guaranteeing replica determinism. Even if explicit time
dependencies such as delays were forbidden, e.g. through
pragma Restrictions, the implicit timing differences
may cause replicas to diverge.
To overcome these difficulties, a more refined model of
concurrent executions in Ada 95 is needed. The piecewise
deterministic computation model [SY85, Eln93] views an
execution as a sequence of deterministic state intervals that
are separated by non-deterministically occurring events.
This model can be applied to Ada 95 as well. The signaling
model of the language [ISO95, 9.10(3-10)] guarantees
that all accesses to objects shared between any two
tasks actually happen in some sequential order. As a result,
shared objects must be protected using appropriate application-level
synchronization through protected objects or
rendezvous 1 . A partition is considered erroneous if it contains
two tasks that access the same unprotected object
without signaling in between.
An execution in the piecewise deterministic computation
model is characterized by the events that occur. I distinguish
two different classes of events.
Internal events account for non-determinism in the language
model. They cover all task dispatching points, signaling
actions, and events related to abort deferral:
. The choice made in a selective accept statement.
. The outcome of conditional and timed entry calls.
. Entering and leaving protected actions (locking/
unlocking protected objects).
. Queueing and requeueing on entry calls.
. Task creations, abortions, and terminations.
. Abortions in ATC.
. Initialization, finalization, and assignment of controlled
objects.
External events basically account for non-determinism in
the network:
. Delivery of an RPC request from another partition.
. Sending an RPC request to some other partition.
. Sending an RPC result back to the calling partition.
. Delivery of an RPC result from some other partition.
There are some additional internal events concerning local
calls to subprograms whose results depend upon state outside
the application, e.g. system calls like calling
Ada.Calendar.Clock.
With the piecewise deterministic computation model,
replicas can be synchronized by making sure that they all
follow the same execution history, i.e. that they all go
through the same sequence of events. Because signaling
actions and task dispatching points are subsumed by the
above list of events and any two concurrent state accesses
must be separated by a signaling action, this will ensure
that the sets of tasks and the states of all replicas are consistent
Abortions and abort-deferred regions are included
because otherwise abortions in ATC might basically cause
replicas to diverge if abortions didn't happen at precisely
the same logical moment on all replicas. Consider the
example in fig. 1.
If the assignment to variable X in task T is aborted, X may be
abnormal [ISO95, 9.8(21)] and hence its subsequent use is
1. There are some special cases, though. A task could e.g. write a value to
some unprotected variable and then create another task that read it.
However, task creations also are signaling actions.
erroneous [ISO95, 13.9.1]. This is even true in centralized
applications: even there, asynchronous aborts may cause
problems for the application to make sure that state
accessed in an asynchronous select statement remains
consistent in the face of abortions. The critical sections of
abortable parts (in the sense of making state modifications
that may influence the further execution of the application)
must therefore be encapsulated within abort-deferred
regions. The language defines several constructs that defer
abortion [ISO95, 9.8(6-11)], in particular, protected
actions and also initialization, finalization, and assignment
of controlled objects cannot be aborted. An abortion is
delayed until the abort-deferred region has been com-
pleted. Because entering and leaving such abort-deferred
regions is again subsumed by the above list of events, coordinating
replicas by ensuring they follow the same
sequence of events is sufficient to guarantee consistency.
The semantics of Ada 95 will be preserved as abortions
occur between the same two abort-deferred regions on all
replicas.
4 Active and Passive Replication
Active replication is attractive because it offers a high
availability of the replicated partition: as replicas execute
in parallel, failures do not incur an additional overhead. It
seems that one could use active replication with the piecewise
deterministic computation model by reaching a consensus
[BMD93] on events.
In fact, several systems (e.g. SIFT, MAFT, and MARS) employ this
method. However, these are special-purpose reliable hard real-time
systems, not general systems. They are synchronous systems
and use severely constrained tasking models, where
inter-task communication is strongly reduced and task
scheduling is restricted or even performed off-line, prior to
run-time (static scheduling, e.g. in MARS). As a conse-
quence, there are relatively few events, and thus the needs
for interactive agreement are limited. Still, SIFT reports an
overhead of up to 80% for replica synchronization [Pra96,
p. 272] (admittedly assuming byzantine failures). MAFT
achieves better performance by delegating the consensus
protocols to dedicated hardware.
Nevertheless, this approach seems feasible for general
systems, too, if one assumes deterministic task scheduling.
This implies disallowing the use of any explicit time
dependency (delay statements, calls to package
Ada.Calendar or Ada.Real_Time). Under this assump-
tion, progress can be measured by the task dispatching
points passed. Consensus must then be reached only on the
task dispatching point at which a message is delivered 2 , as
message deliveries are the sole remaining source of non-
determinism. The most advanced task dispatching point
must be taken as the common consensus value 3 . Those replicas
that "lag behind" continue executing until this task
dispatching point is reached and deliver the message only
then. In this way, they all deliver all messages at the same
task dispatching point and hence the sets of tasks continue
to evolve identically on all replicas. The overhead of an
additional consensus for each message delivery may be
significant, but seems still tolerable.
However, this method violates replica transparency: forbidding
the use of delay statements is a severe restriction
and certainly not transparent. If delays are to be allowed,
replicas must reach consensus on each and every event, not
just on message deliveries! In a timed entry call for
instance, all replicas must agree whether or not the entry
call timed out, and if so, when this time-out occurred with
respect to other task scheduling decisions taken during the
delay. It is not sufficient if they agree only on the first condition
(whether or not a time-out occurred), and they are
thus forced to track and synchronize all task scheduling
decisions, not just time-outs.
In other words, active replicas supporting the full tasking
model of Ada 95 have to communicate over the net-work
(run a consensus protocol) for each and every task
scheduling decision. This seems impractical as it would
tremendously slow down a replicated partition. In fact, one
loses the main advantage of active replication, which is its
high availability.
Passive replication, on the other hand, does not suffer
from this communication overhead. As only the primary
replica executes a request, no interactive agreement protocol
is needed. However, in case of a failure, there's a higher
latency until one of the backups has taken over. Further-
more, passive replication either requires checkpointing
coordinated with output 4 , or remote calls must have the
semantics of nested transactions. This latter approach
2. Besides an earlier consensus needed for the totally ordered multicast
necessary to ensure the ordering condition given in section 2.
3. Although all replicas make the same scheduling decisions, they do not
execute in lockstep: one replica may already have advanced further
than another.
task body T is
begin
   select
      Trigger.Event;
   then abort
      loop
         exit when ...;
         X := X + 1;            --  if this assignment is aborted ...
         ...;                   --  ... doing something with X here is erroneous:
      end loop;                 --  the increment must be encapsulated
   end select;                  --  in an abort-deferred region
end T;

Fig. 1: Abortions in ATC
[Wol97] cannot be implemented transparently because
application-defined scheduling (through partial operations,
i.e. entry calls) may conflict with the constraints imposed
on scheduling by the serializability correctness criterion of
transactions. Such conflicts may result in deadlocks that
cannot be resolved in a transparent manner, and the transactional
nature of remote calls would have to be exposed to
the application [Wol98]. If transactions are to be offered in
Ada 95, they must therefore be integrated at the language
level 5 .
5 Semi-Active Replication
Since both active and passive replication have their deficiencies
when used for replication of non-deterministic
partitions in Ada 95, I focussed on semi-active replication
The piecewise deterministic computation model lends
itself readily to this form of replica organization, which
was pioneered in the Delta-4 project [Pow91]. Replicas are
organized as a view-synchronous group [SS93], and all
replicas execute incoming RPC requests. One replica is
designated the leader and is responsible for taking all non-deterministic
decisions. The other replicas - the followers -
are then forced to make the same choices.
Replica synchronization can thus be achieved by logging
events on the leader. The leader then synchronizes its
followers using a FIFO-ordered reliable multicast [HT94].
The followers then replay the events as they occurred on
the leader. As a result, all replicas will go through the same
sequence of deterministic state intervals, which ensures
replica consistency.
Because events, instead of message deliveries, are
ordered and synchronized, semi-active replication has less
stringent requirements on the group communication layer
than active replication: other partitions may use a relatively
simple reliable multicast for communication with the
group of replicas instead of an expensive totally ordered
multicast. (The FIFO multicast needed for synchronization
within the group can be built easily upon the basic primitive
of reliable multicast.)
Nevertheless, synchronizing the followers at each and
every event would most probably be impractical: since
internal events are bound to occur very frequently, this
would entail a prohibitively high performance overhead as
each task scheduling decision would again involve communication
over the network for reaching agreement. Fortu-
4. Checkpointing a multi-tasked partition is not trivial: the complete tasking
state with program counters, stacks, etc. must be included. Check-pointing
is also limited to replicas running on homogeneous physical
nodes.
5. And in this case, transactional RPCs would constitute an inversion of
abstractions. In my opinion, transactions should be built upon RPCs,
not the other way round!
nately, this is not necessary. Synchronization is only
needed at observable events:
. Sending an RPC result back to the client.
. Sending an RPC request to some other partition.
As long as the effects of state intervals remain purely local
to the leader, the followers need not be informed of any
events. The followers must be brought up to date only once
the effects become visible to the rest of the system, i.e.,
when the leader sends a message beyond the group of repli-
cas. Before doing so, the leader must update its followers
in order to guarantee that they will reach the same state,
otherwise, a failure might corrupt the overall consistency
of the distributed application when one of the followers
becomes the new leader.
Between observable events, the leader just logs events
by buffering them in an event log. Just before it will execute
an observable event, it multicasts this log to its follow-
ers. Only then may it proceed and perform the observable
event. The log records an extended state interval, as it may
contain many simple state intervals (or rather, the events
delimiting them). Each observable event starts a new
extended state interval. A follower recreates extended state
intervals by replaying events from the logs it receives.
When it has replayed all the events in the received logs, it
just waits until the next extended state interval arrives from
the leader, or until it becomes the leader itself due to a failure
of the former leader.
A follower does not re-execute the observable event at
the very beginning of an extended state interval as this
would only result in a duplicate message being sent. It just
uses the logged outcome of this event's execution on the
leader to replay the event. The leader is thus the only replica
that interacts with the rest of the system beyond the
group of replicas.
While extended state intervals are generally started by
observable events, the leader is free to synchronize the followers
more tightly. This may be necessary when the event
log buffer on the leader threatens to overflow: in this case,
the leader has to send the log's current contents to the followers
in order to make room for new events. Given a reasonably
sized event buffer, the synchronization interval can
still be kept large enough to obtain acceptable performance
with this scheme.
Because followers do not participate in any interaction
beyond the group, failures of followers are completely
transparent with this replica organization. Upon a failure of
the leader, however, one of the followers must take on the
role of the new leader. It first replays any pending events it
already had received from the failed leader to bring itself
up to date with the last known state of the latter. It then
simply continues executing, henceforth logging events and
synchronizing the remaining followers.
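The buffering discipline just described can be summarised in a language-neutral sketch (given here in Python purely for illustration; RAPIDS itself lives inside the GNAT run-time and is written in Ada). The class and method names are invented for the illustration and are not RAPIDS identifiers.

class LeaderLog:
    # Internal events are only buffered; immediately before an observable
    # event (a message leaving the replica group) the buffered extended
    # state interval is multicast to the followers and the buffer emptied.
    def __init__(self, multicast_to_followers):
        self.buffer = []
        self.multicast_to_followers = multicast_to_followers

    def log(self, event):
        self.buffer.append(event)

    def before_observable_event(self):
        if self.buffer:
            self.multicast_to_followers(list(self.buffer))
            self.buffer.clear()

class FollowerLog:
    # Followers replay the received events in order, thereby recreating
    # the leader's extended state intervals.
    def __init__(self):
        self.pending = []

    def receive(self, events):
        self.pending.extend(events)

    def next_event(self):
        return self.pending.pop(0) if self.pending else None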
Fig. 2 shows a failure of the leader L of a replicated par-
tition. It started executing a request req A , made a nested
remote call req B to some other partition, waited for the
result and continued processing req A before failing. Just
before sending req B , L sent the events for the extended state interval
S 1 to its followers, which then recreate this extended
state interval by replaying the logged events.
When L finally fails, a view change occurs and follower
F 1 is chosen as the new leader and continues executing
from just after S 1 . Note that the extended state interval S 3
may well be different from S 2 , but since S 2 could not possibly
have affected any other part of the system except L
itself, the overall state of the system remains consistent.
If the failure and the view change had occurred before
the reply rep B to the nested remote call had arrived, the
new leader F 1 would have had no way to tell whether or
not the former leader L did still send req B . A failure at
point p is indistinguishable from one at point r. Conse-
quently, the new leader F 1 has no choice but (re-)execute
the observable event req B . If L had failed at point r (or
later), this results in a duplicate request. An analogous situation
arises when F 1 finally sends the reply rep A back to
the client. If it fails after the synchronization of S 3 , but
before S 4 is synchronized, the then new leader F 2 again
does not know whether or not rep A has been sent.
This implies that partitions must be able to deal with
duplicate messages 6 . Messages must be tagged with a
unique, system-wide identifier. Repeated RPC result messages
are then simply ignored. Repeated invocations are
more difficult to handle. A partition only handles the first
RPC request message it receives. Later messages with the
same message identifier are ignored if they arrive while the
RPC is still in progress, or return the result of the first invocation
if the RPC already has completed. The latter case
necessitates that results of remote subprogram calls be
retained, and therefore some kind of garbage collection of
retained results must be provided (see section 6).
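These rules can be illustrated as follows; message identifiers, the retained-result table and the garbage-collection hook are modelled abstractly, and the names used below are invented for the sketch rather than taken from RAPIDS.

class DuplicateFilter:
    # RPC results are retained under their system-wide message identifier,
    # so a repeated request is answered from the retained result instead of
    # being re-executed; duplicates of requests still in progress are ignored,
    # and duplicate result messages can simply be dropped by the caller.
    def __init__(self):
        self.retained_results = {}
        self.in_progress = set()

    def handle_request(self, msg_id, execute):
        if msg_id in self.retained_results:
            return self.retained_results[msg_id]   # duplicate of a completed RPC
        if msg_id in self.in_progress:
            return None                             # duplicate while RPC in progress
        self.in_progress.add(msg_id)
        result = execute()
        self.in_progress.discard(msg_id)
        self.retained_results[msg_id] = result      # retain until acknowledged
        return result

    def acknowledge(self, msg_ids):
        # Garbage collection driven by piggy-backed acknowledgements.
        for m in msg_ids:
            self.retained_results.pop(m, None)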
Semi-active replication based on the piecewise deterministic
computation model can offer transparent replication
of non-deterministic Ada 95 partitions. Replication
transparency (i.e., transparency towards other partitions)
is given at the application level, although partitions must be
able to handle duplicate messages correctly at the system
level. Yet the application level remains unaffected by this.
Replica transparency, i.e. transparency towards the application
level of the replicated partition itself also is main-
tained. The piecewise deterministic computation model
supports the full tasking model of Ada 95. However, replica
transparency in heterogeneous systems is only given as
long as only failures are considered, and thus holds only
for k-resiliency.
If recovery is to be included in the model, replica transparency
cannot fully be maintained in the general case. If a
new follower is to join a running group, it must get the
group's current state. This state transfer can only be done
transparently in a homogeneous system by taking a system-level
checkpoint on one of the old group members and
installing this checkpoint in the newly joining replica. If
replicas execute on heterogeneous physical nodes, this
state transfer is only possible for a restricted subset of all
possible partitions and furthermore requires the cooperation
of the application itself [Wol98]. In this case, replica
transparency cannot be upheld totally.
6 RAPIDS
RAPIDS ("Replicated Ada Partitions In Distributed Sys-
tems") [Wol98] is an implementation of the semi-active
replication scheme based on a piecewise deterministic
computation model as presented above for the GNAT com-
piler. It is implemented within the run-time support and is
thus largely transparent to the application.
The core of RAPIDS is the event log buffer, which is
implemented within the PCS as a child package of Garlic
[KPT95] called System.Garlic.Rapids. This choice
was made because synchronization between the leader and
6. The reliable multicast primitive assumed for communication of clients
and servers with a replicated partition (i.e., a group of replicas) already
makes duplicate message detection necessary. However, here one needs
a second duplicate message detection scheme at a higher level. It would
be beneficial if this high-level duplicate message detection could
exploit the fact that such a facility already exists in the (hidden) low-level
protocols of the group communication layer.
Fig. 2: A Failure in Semi-Active Replication
its followers is triggered by observable events, i.e. sending
messages, and these event occur within the PCS. Further-
more, the actual synchronization involves multicasting a
message within the group, which can be done conveniently
through a new protocol added to Garlic. This new protocol
is only an interface to a third-party view-synchronous
group communication system. Currently, RAPIDS uses the
Phoenix [MFSW95] toolkit, but any other group communication
system can be used as long as it satisfies view synchrony
and offers a reliable multicast primitive.
The PCS has also been modified to include unique message
identifiers in all messages. Using these, duplicate
message detection is implemented. Garbage collection of
retained results has also been included in Garlic. Whenever
a partition A sends a message to another partition B, it piggybacks
information about the messages it already
received from B. Partition B can then discard these messages
RAPIDS actually offers three different interfaces for logging
and replaying events and for transferring the event log
from the leader to the followers. The PCS uses direct calls
to a first interface given by System.Garlic.Rapids to
handle external events. To handle internal events, the tasking
support GNARL has been modified to use a callback
interface to RAPIDS for logging and replaying events for all
task scheduling decisions. Some other packages of the run-time
support also use callbacks to RAPIDS, e.g. Sys-
tem.Finalization_Implementation uses this to log
and replay events regarding entry and exit of the abort-
deferred regions given by initialization and finalization of
controlled objects. Finally, there is a public interface to the
event log in System.RPC.Replication (shown in fig. 3)
for use by the standard libraries, which also may have to
log and replay events, e.g. Ada.Calendar.Clock or file
accesses.
The event log buffer is organized as a heterogeneous
FIFO list, storing event descriptors derived from the
abstract tagged type Event. A leader can append events to
the log using the Log subprogram, and can multicast its log
to its followers using the Send_Log operation, which also
empties the log. Each event also contains a unique group-wide
task identifier for the task involved. A follower
replays events in the order they have been logged. The Get
subprogram blocks the calling task until an event matching
the tag of the actual parameter E and that tasks's group-wide
ID is frontmost in the log. It then returns the event
with Valid set to True. (On a leader, Get returns immediately
with Valid set to False.) With the Remove opera-
tion, a follower can actually remove the frontmost event
from the log once it has replayed the event. This makes the
next event in the log become the new frontmost event, and
thus some other call to Get may be unblocked.
If the log on a follower is empty, i.e., has been wholly
replayed, Get also blocks until either the next extended
state interval is received from the leader and a matching
event appears at the front of the log, or the follower
becomes a leader when the old leader has failed. In this
case, Get returns even if no matching event has appeared
(once the log has been exhausted, cf. section 5) with Valid
set to False: as the former follower is now a leader, it may
make its own choices.
With this interface, event logging and replay can be
implemented following the pattern shown in fig. 4.
package System.RPC.Replication is

   type Event is abstract tagged private;

   procedure Log      (E : in Event'Class);
   procedure Get      (E : in out Event'Class; Valid : out Boolean);
   procedure Remove;
   procedure Send_Log;

private
   ...
end System.RPC.Replication;

Fig. 3: Public Interface of the Event Log
Fig. 4: Pattern for Event Logging and Replay

with System.RPC.Replication;
package body Example is

   package Repl renames System.RPC.Replication;

   type Event is new Repl.Event with
      record
         ...                        --  The characterizing data of the event.
      end record;
   for Event'External_Tag use "Example.Event";

   procedure The_Operation (...) is
      The_Event   : Event;
      Is_Follower : Boolean;
   begin
      Repl.Get (The_Event, Is_Follower);
      if Is_Follower then
         --  Use the description in 'The_Event' to
         --  replay the event.  Then remove it:
         Repl.Remove;
      else                          --  Leader!
         --  If it's an observable event, send the
         --  log to the followers:
         Repl.Send_Log;             --  only if observable!
         --  Do the event:
         Do_Operation (...);
         --  Log the event:
         The_Event := (Repl.Event with ...);
         Repl.Log (The_Event);
      end if;
   end The_Operation;

end Example;
Event replay on a follower is atomic: other tasks that
also might call Get for other events will remain blocked
until this event is removed. On the leader, event logging
must be done in the right places. Do_Operation itself
should not cause additional non-deterministic events. (If it
did, Do_Operation instead of The_Operation should be
implemented using this pattern.) Also, the event must be
logged in the right moment. Consider for example the
event of locking a protected object. Obviously, a task must
first get the lock and then log the event for it; if it logged
the event first, some other task might actually get the lock
first, and the event log would be inconsistent with the
actual execution history.
Group-wide task identifiers are implemented through
task creation events. These events contain both the group-wide
ID of the creating task and that of the newly created
task. When a follower replays the event, it therefore also
knows which group-wide ID it must assign to the new task.
References to time such as delay statements or calls to
Ada.Calendar.Clock all generate events. RAPIDS implements
a mapping of time values such that all replicas
always run logically on the initial leader's time base. This
avoids that time suddenly jumps backwards when a failure
occurs and thus guarantees the monotonicity of time for the
application.
Not all internal events are logged by RAPIDS. It makes a
distinction between system tasks, which are local to the
run-time support, and application tasks. Only events
involving application tasks are logged and replayed. The
system tasks, however, execute independently on all repli-
cas: they have to, because the run-time support must do
different things on the leader than on a follower. At the
interface of the PCS, tasks may change their status: an
application task calling e.g. System.RPC.Do_RPC
becomes a system task inside that call, and conversely a
system task becomes an application task for the time it executes
a remotely called subprogram.
RAPIDS is currently (Nov 1998) still in a prototype stage.
It still needs serious optimization efforts, and it doesn't yet
handle dynamically bound remote calls through remote
access-to-subprogram or remote access-to-class-wide
values. Also, events due to assignments of controlled
objects are not yet handled, as this seems to require some
cooperation of the compiler's part.
7 Conclusion
Modeling executions of Ada 95 partitions using a piece-wise
deterministic computation model overcomes the problems
due to non-determinism that occur in replication. The
model, together with the abstractions in the language stan-
dard, abstracts from timing dependencies and thus makes
replication possible. It seems that semi-active replication is
the most appropriate replication scheme for general Ada 95
partitions using the full tasking model of the language.
A prototype of a replication manager called RAPIDS
("Replicated Ada Partitions in Distributed Systems") has
been developed. It guarantees replica consistency by logging
non-deterministic events on the leader and replaying
them on the followers. Although this project is still in an
early prototype stage, first results are encouraging, indicating
that efficient replication is attainable using this model.
--R
"The Delta-4 Extra Performance Architecture XPA"
"The Consensus Problem in Fault-Tolerant Comput- ing"
Manetho: Fault Tolerance in Distributed Systems using Rollback Recovery and Process Replication
"A Modular Approach to Fault-Tolerant Broadcasts and Related Problems"
ISO: International Standard ISO/IEC 8652:
"Real-Time Systems Development: The Programming Model of MARS"
"GAR- LIC: Generic Ada Reusable Library for Inter-partition Communication"
"The MAFT Architecture for Distributed Fault Tolerance"
"Phoenix: A Toolkit for Building Fault- Tolerant, Distributed Applications in Large-Scale Networks"
"Implementing Fault-Tolerant Services using the State Machine Approach"
"Understanding the power of the virtually-synchronous model"
"Optimistic Recovery in Distributed Systems"
"SIFT: Design and Analysis of a Fault-Tolerant Computer for Aircraft Control"
"Fault Tolerance in Distributed Ada 95"
Replication of Non-Deterministic Objects
--TR | fault tolerance;piecewise determinism;distributed systems;semi-active replication;group communication |
334808 | The Martin Boundary of the Young-Fibonacci Lattice. | In this paper we find the Martin boundary for the Young-Fibonacci lattice YF. Along with the lattice of Young diagrams, this is the most interesting example of a differential partially ordered set. The Martin boundary construction provides an explicit Poisson-type integral representation of non-negative harmonic functions on YF. The latter are in a canonical correspondence with a set of traces on the locally semisimple Okada algebra. The set is known to contain all the indecomposable traces. Presumably, all of the traces in the set are indecomposable, though we have no proof of this conjecture. Using an explicit product formula for Okada characters, we derive precise regularity conditions under which a sequence of characters of finite-dimensional Okada algebras converges. | Introduction
The Young-Fibonacci lattice YF is a fundamental example of a differential partially
ordered set which was introduced by R. Stanley [St1] and S. Fomin [F1]. In many
ways, it is similar to another major example of a differential poset, the Young lattice
Y. Addressing a question posed by Stanley, S. Okada has introduced [Ok] two algebras
associated to YF. The first algebra F is a locally semisimple algebra defined by generators
and relations, which bears the same relation to the lattice YF as does the group
algebra C S1 of the infinite symmetric group to Young's lattice. The second algebra
R is an algebra of non-commutative polynomials, which bears the same relation to the
lattice YF as does the ring of symmetric functions to Young's lattice.
The purpose of the present paper is to study some combinatorics, both finite and
asymptotic, of the lattice YF. Our object of study is the compact convex set of harmonic
functions on YF (or equivalently the set of positive normalized traces on F or
certain positive linear functionals on R.) We address the study of harmonic functions
by determining the Martin boundary of the lattice YF. The Martin boundary is the
set consisting of those harmonic functions which can be obtained by finite
rank approximation. There are two basic facts related to the Martin boundary con-
struction: 1) every harmonic function is represented by the integral of a probability
measure on the Martin boundary, and 2) the set of extreme harmonic functions is a
subset of the Martin boundary (see, e.g., [D]).
This paper gives a parametrization of the Martin boundary for YF and a description
of its topology.
The Young-Fibonacci lattice is described in Section 2, and preliminaries on harmonic
functions are explained in Section 3. A first rough description of our main results is
given at the end of Section 3. (A precise description of the parametrization of harmonic functions is found in Section 7, and the proof, finally, is contained in Section 8.) Section 4 contains some general results on harmonic functions on differential posets.
The main tool in our study is the Okada ring R and two bases of this ring, introduced
by Okada, which are in some respect analogous to the Schur function basis and the
power sum function basis in the ring of symmetric functions (Section 5). We describe
the Okada analogs of the Schur function basis by non-commutative determinants of
tridiagonal matrices with monomial entries. We obtain a simple and explicit formula
for the transition matrix (character matrix) connecting the s-basis and the p-basis,
and also for the value of (the linear extension of) harmonic functions evaluated on the
p-basis. This is done in Sections 6 and 7.
The explicit formula allows us to study the regularity question for the lattice YF, that
is the question of convergence of extreme traces of finite dimensional Okada algebras Fn
to traces of the inductive limit algebra F = ∪ n Fn . The regularity question is studied
in Section 8.
The analogous questions for Young's lattice Y(which is also a differential poset) were
answered some time ago. The parametrization of the Martin boundary of Y has been
studied in [Th], [VK]. A different approach was recently given in [Oku].
A remaining open problem for the Young-Fibonacci lattice is to characterize the set
of extreme harmonic functions within the Martin boundary. For Young's lattice, the
set of extreme harmonic functions coincides with the entire Martin boundary.
Acknowledgement
. The second author thanks the Department of Mathematics, University
of Iowa, for a teaching position in the Spring term of 1993. Most part of the
present work was completed during this visit.
§2. The Young-Fibonacci lattice
In this Section we recall the definition of Young-Fibonacci modular lattice (see Figure
1) and some basic facts related to its combinatorics. See Section A.1 in the Appendix
for the background definitions and notations related to graded graphs and differential
posets. We refer to [F1-2], [St1-3] for a more detailed exposition.
A simple recurrent construction.
The simplest way to define the graded graph YF = ∪_{n≥0} YFn is provided by the following recurrent procedure.
Let the first two levels YF 0 and YF 1 have just one vertex each, joined by an edge.
Assuming that the part of the graph YF, up to the nth level YFn , is already constructed,
we define the set of vertices of the next level YFn+1 , along with the set of adjacent edges,
by first reflecting the edges in between the two previous levels, and then by attaching
just one new edge leading from each of the vertices on the level YFn to a corresponding
new vertex at level n + 1.
Figure 1. The Young-Fibonacci lattice.
In particular, we get two vertices in the set YF 2 , and two new edges: one is obtained
by reflecting the only existing edge, and the other by attaching a new one. More
generally, there is a natural notation for new vertices which helps to keep track of the
inductive procedure. Let us denote the vertices of YF 0 and YF 1 by an empty word ?
and 1 correspondingly. Then the endpoint of the reflected edge will be denoted by 2,
and the end vertex of the new edge by 11. In a similar way, all the vertices can be
labeled by words in the letters 1 and 2. If the left (closer to the root ?) end of an edge
is labeled by a word v, then the endvertex of the reflected edge is labeled by the word
2v. Each vertex w of the nth level is joined to a vertex 1w at the next level by a new
edge (which is not a reflection of any previous edge).
Clearly, the number of vertices at the nth level YFn is the nth Fibonacci number fn .
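This recursive construction is easy to carry out by machine. The following Python sketch (ours, not part of the paper; all names are our own) builds the levels YFn as lists of words over {1, 2}, using the observation that every word of rank n + 1 is either 1w with w of rank n (the new edge) or 2v with v of rank n − 1 (the reflected edge), and confirms that the level sizes are the Fibonacci numbers.

```python
def yf_levels(n_max):
    """Levels of the Young-Fibonacci lattice: YF_n = all words over {1, 2} with digit sum n."""
    levels = [[""], ["1"]]                       # YF_0 = {empty word}, YF_1 = {1}
    for n in range(1, n_max):
        nxt = ["1" + w for w in levels[n]]       # the new edge w -> 1w
        nxt += ["2" + v for v in levels[n - 1]]  # the reflected edge ends at 2v
        levels.append(nxt)
    return levels

print([len(level) for level in yf_levels(10)])   # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89
```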
Basic definitions.
We now give somewhat more formal description of the Young-Fibonacci lattice and
its Hasse diagram.
Definition. A finite word in the two-letter alphabet f1; 2g will be referred to as a
Fibonacci word. We denote the sum of digits of a Fibonacci word w by jwj, and we call
it the rank of w. The set of words of a given rank n will be denoted by YFn , and the set
of all Fibonacci words by YF. The head of a Fibonacci word is defined as the longest
contiguous subword of 2's at its left end. The position of a 2 in a Fibonacci word is one more than the rank of the subword to the right of that 2; that is, if w = u2v then the position of the indicated 2 is |v| + 1.
Next we define a partial order on the set YF which is known to make YF a modular
lattice. The order will be described by giving the covering relations on YF in two
equivalent forms.
Given a Fibonacci word v, we first define the set of its successors. By definition, this is exactly the set of words w ∈ YF which can be obtained from v by one of the following three operations:
(i) put an extra 1 at the left end of the word v;
(ii) replace the first 1 in the word v (reading left to right) by 2;
(iii) insert a 1 anywhere in between 2's in the head of the word v, or immediately after the last 2 in the head.
Example. Take 222121112 for the word v of rank 14. Then the group of 3 leftmost 2's forms its head, and v has 5 successors, namely
1222121112, 222221112, 2122121112, 2212121112, 2221121112.
The changing letter is shown in boldface. Note that the ranks of all successors of a Fibonacci word v are one bigger than that of v.
The set of predecessors of a non-empty Fibonacci word v can be described in a
similar way. The operations to be applied to v in order to obtain one of its predecessors
are as follows:
(i) the leftmost letter 1 in the word v can be removed;
(ii) any one of 2's in the head of v can be replaced by 1.
Example. The word predecessors, namely
We write u % v to show that v is a successor of u (and u is a predecessor of v).
This is a covering relation which determines a partial order on the set YF of Fibonacci
words. As a matter of fact, it is a modular lattice, see [St1]. The initial part of the
Hasse diagram of the poset YF is represented in Figure 1.
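The covering rules stated above translate directly into code. In the sketch below (ours; the function names are not from the paper) head_len returns the length of the head, while successors and predecessors implement operations (i)-(iii) and (i)-(ii) literally.

```python
def head_len(v):
    """Length of the head: the maximal run of 2's at the left end of v."""
    k = 0
    while k < len(v) and v[k] == "2":
        k += 1
    return k

def successors(v):
    k = head_len(v)
    succ = ["1" + v]                                    # (i) prepend a 1
    if "1" in v:
        i = v.index("1")
        succ.append(v[:i] + "2" + v[i + 1:])            # (ii) replace the first 1 by a 2
    # (iii) insert a 1 between 2's of the head, or right after the last head 2
    succ += [v[:i] + "1" + v[i:] for i in range(1, k + 1)]
    return succ

def predecessors(v):
    k = head_len(v)
    pred = [v[:i] + "1" + v[i + 1:] for i in range(k)]  # (ii) turn a head 2 into a 1
    if "1" in v:
        i = v.index("1")
        pred.append(v[:i] + v[i + 1:])                  # (i) delete the leftmost 1
    return pred

print(successors("222121112"))   # the 5 successors listed in the example above
```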
The Young-Fibonacci lattice as a differential poset.
Assuming that the head length of v is k, the word v has k + 1 predecessors, if v contains at least one letter 1. If v is made of 2's only, it has k predecessors. Note that the number of successors is always
one bigger than that of predecessors. Another important property of the lattice YF is
that, for any two different Fibonacci words v 1 , v 2 of the same rank, the number of their
common successors equals that of common predecessors (both numbers can only be 0 or
1). These are exactly the two characteristic properties (D1), (D2) of differential posets,
see Section A.1. In what follows we shall frequently use the basic facts on differential
posets, surveyed for the reader's convenience in the Appendix. Much more information
on differential posets and their generalizations can be found in [F1], [St1].
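Using yf_levels, successors and predecessors from the sketches above, properties (D1) and (D2) (with r = 1) can be verified mechanically on small ranks; the following check is ours and is meant only as an illustration.

```python
def check_differential(n_max=8):
    for level in yf_levels(n_max):
        for v in level:
            # (D2) with r = 1: one more successor than predecessors
            assert len(successors(v)) == len(predecessors(v)) + 1
        for i, v in enumerate(level):
            for u in level[:i]:
                common_up = set(successors(u)) & set(successors(v))
                common_down = set(predecessors(u)) & set(predecessors(v))
                assert len(common_up) == len(common_down) <= 1      # (D1)

check_differential()
```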
The Okada algebra.
Okada [Ok] introduced a (complex locally semisimple) algebra F , defined by generators
and relations, which admits the Young-Fibonacci lattice YF as its branching
diagram. The Okada algebra has generators (e_i)_{i≥1} satisfying the defining relations given in [Ok].
The algebra Fn generated by the first n−1 generators e_1, ..., e_{n−1} subject to these identities is semisimple of dimension n!, and has simple modules M_v labelled by elements v ∈ YFn. For u ∈ YFn−1 and v ∈ YFn, one has u % v if, and only if, the simple Fn-module M_v, restricted to the algebra Fn−1, contains the simple Fn−1-module M_u. As a matter of fact, the restrictions of simple Fn-modules to Fn−1 are multiplicity free.
§3. Harmonic functions on graphs and traces of AF-algebras
In this Section, we recall the notion of harmonic functions on a graded graph and the
classical Martin boundary construction for graded graphs and branching diagrams. We
discuss the connection between harmonic functions on branching diagrams and traces
on the corresponding AF -algebra. Finally, we give a preliminary statement on our main
results on the Martin boundary of the Young-Fibonacci lattice.
We refer the reader to Appendix A.1 for basic definitions on graded graphs and
branching diagrams and to [E], [KV] for more details on the combinatorial theory of
AF -algebras.
The Martin boundary of a graded graph.
A function φ : Γ → R defined on the set of vertices of a graded graph Γ is called harmonic, if the following variant of the "mean value theorem" holds for all vertices u:
φ(u) = Σ_{w : u % w} φ(w).   (3.1)
We are interested in the problem of determining the space H of all non-negative harmonic functions normalized at the vertex ? by the condition φ(?) = 1. Since H is a compact convex set with the topology of pointwise convergence, it is interesting to ask about its set of extreme points.
A general approach to the problem of determining the set of extreme points is based
on the Martin boundary construction (see, for instance, [D]). One starts with the dimension
function d(v; w) defined as the number of all oriented paths from v to w. We
put
From the point of view of potential theory, d(v; w) is the Green function with respect
to "Laplace operator"
w:u%w
This means that if /w fixed vertex w, then \Gamma(\Delta/ w
\Gamma. The ratio
is usually called the Martin kernel.
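The dimension function d(v, w) is easily tabulated by recursion over predecessors; the sketch below (ours) computes it for words of modest rank and forms the ratio K(v, w) = d(v, w)/d(?, w). This normalization of the Martin kernel is how we have read the partly garbled definition above and should be treated as our assumption. The routine relies on predecessors from the sketch in Section 2.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_paths(v, w):
    """d(v, w): the number of oriented (saturated) paths from v up to w in YF."""
    if v == w:
        return 1
    if sum(int(c) for c in v) >= sum(int(c) for c in w):
        return 0
    return sum(num_paths(v, u) for u in predecessors(w))

def martin_kernel(v, w):
    # K(v, w) = d(v, w) / d(?, w), with ? the empty word "" (the root of YF)
    return num_paths(v, w) / num_paths("", w)
```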
Consider the space Fun(Γ) of all functions Γ → R with the topology of pointwise convergence, and let Ẽ be the closure of the subset Γ̃ ⊂ Fun(Γ) of functions v → K(v, w), w ∈ Γ. Since those functions are uniformly bounded, 0 ≤ K(v, w) ≤ 1, the space Ẽ (called the Martin compactification) is indeed compact. One can easily check that Γ̃ ⊂ Ẽ is a dense open subset of Ẽ. Its boundary E = Ẽ \ Γ̃ is called the Martin boundary of the graph Γ.
By definition, the Martin kernel (3.3) may be extended by continuity to a function K : Γ × Ẽ → R. For each boundary point ω ∈ E, the function v → K(v, ω) is non-negative, harmonic, and normalized. Moreover, harmonic functions have an integral representation similar to the classical Poisson integral representation for non-negative harmonic functions in the disk:
Theorem (cf. [D]). Every normalized non-negative harmonic function φ ∈ H admits an integral representation
φ(v) = ∫_E K(v, ω) M(dω),   (3.4)
where M is a probability measure on E. Conversely, for every probability measure M on E, the integral (3.4) provides a non-negative harmonic function φ ∈ H.
All indecomposable (i.e., extreme) functions in H can be represented in the form K(·, ω) for some ω ∈ E, and we denote by Emin the corresponding subset of the boundary E. It is known that Emin is a non-empty G_δ subset of E. One can always choose the measure M in the integral representation (3.4) to be supported by Emin. Under this assumption, the measure M representing a function φ ∈ H is unique.
Given a concrete example of a graded graph, one looks for an appropriate "geometric"
description of the abstract Martin boundary. The purpose of the present paper is to
give an explicit description for the Martin boundary of the Young - Fibonacci graph
YF.
The traces on locally semisimple algebras.
We next discuss the relation between harmonic functions on a graded graph and
traces on locally semisimple algebras. A locally semisimple complex algebra A (or AF-
algebra) is the union of an increasing sequence of finite dimensional semisimple complex
algebras, An . The branching diagram or Bratteli diagram \Gamma(A) of a locally
semisimple algebra A (more precisely, of the approximating sequence fAng) is a graded
graph whose vertices of rank n correspond to the simple An-modules. Let M v denote
the simple An-module corresponding to a vertex . Then a vertex v of rank n
and a vertex w of rank are joined by -(v; w) edges if the simple An+1 module
Mw , regarded as an An module, contains M v with multiplicity -(v; w). We will assume
here that all multiplicities -(v; w) are 0 or 1, as this is the case in the example of
the Young-Fibonacci lattice with which we are chiefly concerned. Conversely, given a
branching diagram \Gamma - that is, a graded graph with unique minimal vertex at rank 0
and no maximal vertices - there is a locally semisimple algebra A such that
A trace on a locally semisimple algebra A is a complex linear functional / satisfying
To each trace / on A, there corresponds a positive normalized harmonic function ~
/ on
whenever v has rank n and e is a minimal idempotent in An such that eM v 6= (0)
and fvg. The trace property of / implies that ~
/ is a well
defined non-negative function on \Gamma, and harmonicity of ~
/ follows from the definition of
the branching diagram \Gamma(A). Conversely, a positive normalized harmonic function ~
/ on
defines a trace on A; in fact, a trace on each An is determined by its value
on minimal idempotents, so the assignment
whenever e is a minimal idempotent in An such that eM v 6= (0), defines a trace on
An . The harmonicity of ~
/ implies that the / (n) are coherent, i.e., the restriction of
/ (n+1) from An+1 to the subalgebra An coincides with / (n) . As a result, the traces / (n)
determine a trace of the limiting algebra
The set of traces on A is a compact convex set, with the topology of pointwise
convergence. The map ~
is an affine homeomorphism between the space of positive
normalized harmonic functions on and the space of traces on A. From the
point of view of traces, the Martin boundary of \Gamma consists of traces / which can be
obtained as limits of a sequence /n , where /n is an extreme trace on An . All extreme
traces on A are in the Martin boundary, so determination of the Martin boundary is a
step towards determining the set of extreme traces on A.
The locally semisimple algebra corresponding to the Young-Fibonacci lattice YF is
the Okada algebra F introduced in Section 2.
The main result.
We can now give a description of the Martin boundary of the Young-Fibonacci lattice
(and consequently of a Poisson-type integral representation for non-negative harmonic
functions on YF).
Let w be an infinite word in the alphabet {1, 2} (an infinite Fibonacci word), and let d_1 < d_2 < ... be the positions of 2's in w. The word w is said to be summable if, and only if, the series Σ_j 1/d_j converges or, equivalently, the corresponding infinite product over the positions d_j converges.
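For a finite truncation of an infinite word these quantities are straightforward to compute. The sketch below (ours) extracts the positions of the 2's as defined in Section 2 and evaluates the partial sum Σ 1/d_j; reading the lost series as Σ 1/d_j is our reconstruction of the garbled formula above.

```python
def positions_of_2s(v):
    """Positions of the 2's in v: a 2 with the subword u to its right has position |u| + 1."""
    pos, rank_right = [], 0
    for c in reversed(v):
        if c == "2":
            pos.append(rank_right + 1)
        rank_right += int(c)
    return sorted(pos)           # positions_of_2s("222121112") == [1, 6, 9, 11, 13]

def partial_reciprocal_sum(v):
    """Partial sum of 1/d_j over the positions of 2's in a finite truncation of w."""
    return sum(1.0 / d for d in positions_of_2s(v))
```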
As for any differential poset, the lattice YF has a distinguished harmonic function
'P , called the Plancherel harmonic function; 'P is an element of the Martin boundary.
The complement of f'P g in the Martin boundary of YF can be parametrized with two
parameters (fi; w); here fi is a real number, 0 ! fi - 1, and w is a summable infinite
word in the alphabet f1; 2g.
We denote by Ω the parameter space for the Martin boundary:
Definition. Let the space Ω be the union of a point P and the set
{(β, w) : 0 < β ≤ 1, w a summable infinite word in the alphabet {1, 2}},
with the following topology: A sequence (fi (n) ; w (n) ) converges to P iff
A sequence (fi (n) ; w (n) ) converges to (fi; w) if, and only if,
We will describe in Section 7 the mapping ω → φ_ω from Ω to the set of normalized positive harmonic functions on YF.
We are in a position now to state the main result of the paper.
Theorem 3.2. The map ω → φ_ω is a homeomorphism of the space Ω onto the Martin boundary of the Young-Fibonacci lattice. Consequently, for each probability measure M on Ω, the integral
φ(v) = ∫_Ω φ_ω(v) M(dω)
provides a normalized, non-negative harmonic function on the Young-Fibonacci lattice YF. Conversely, every such function admits an integral representation with respect to a measure M on Ω (which may not be unique).
In general, for all differential posets, we show that there is a flow
on [0; 1] \Theta H with the properties
ts (') and C 0
For the Young-Fibonacci lattice, one has C t (' fi;w In
particular, the flow on H preserves the Martin boundary. It is not clear whether this is
a general phenomenon for differential posets.
We have not yet been able to characterize the extreme points within the Martin
boundary of YF. In a number of similar examples, for instance the Young lattice, all
elements of the Martin boundary are extreme points.
§4. Harmonic functions on differential posets
The Young-Fibonacci lattice is an example of a differential poset. In this section,
we introduce some general constructions for harmonic functions on a differential poset.
Later on in Section 7 we use the construction to obtain the Martin kernel of the graph
YF.
Type I harmonic functions.
In this subsection we don't need any special assumptions on the branching diagram
Γ. Consider an infinite path t = (v_1 % v_2 % v_3 % ...) in Γ. For each vertex u ∈ Γ the sequence {d(u, v_n)}_{n=1}^∞ is weakly increasing, and we shall use the notation d(u, t) = lim_{n→∞} d(u, v_n). Note that d(u, t) = d(u, s) if the sequences t, s coincide eventually.
Lemma 4.1. The following conditions are equivalent for a path t in \Gamma:
(i) All but finitely many vertices vn in the path t have the single immediate predecessor
.
(iv) There are only finitely many paths which eventually coincide with t.
Proof. It is clear that d(?; is the only predecessor of vn . Since
(i). The number of paths
equivalent to t is exactly d(?; t).
In case Γ is the Young lattice, there are only two paths (i.e. Young tableaux) with these conditions. In case of the Young-Fibonacci lattice there are countably many paths satisfying the conditions of Lemma 4.1. The vertices of such a path eventually take the form 1^{n−m} v for a fixed Fibonacci word v of rank m. Hence, the equivalence class of eventually coinciding paths in YF with the properties of Lemma 4.1 can be labelled by infinite words in the alphabet {2, 1} with only a finite number of 2's. We denote the set of such words as 1^∞ YF.
Proposition 4.2. Assume that a path t in \Gamma satisfies the conditions of Lemma 4.1.
Then
is a positive normalized harmonic function on \Gamma.
Proof. Since
w:v%w d(w; t), the function ' t is harmonic. Also, ' t (v) - 0 for
all
We say that these harmonic functions are of type I, since the corresponding AF -
algebra traces are traces of finite - dimensional irreducible representations (type I factor -
representations). It is clear that all the harmonic functions of type I are indecomposable.
Plancherel harmonic function.
Let us assume now that the poset \Gamma is differential in the sense of [St1] or, equivalently,
is a self-dual graph in terms of [F1]. The properties of differential posets which we need
are surveyed in the Appendix.
Proposition 4.3. The function φ_P(v) = d(?, v)/|v|! is a positive normalized harmonic function on the differential poset Γ.
Proof. This follows directly from (A.2.1) in the Appendix.
Note that if Γ is the Young lattice, the function φ_P corresponds to the Plancherel
measure of the infinite symmetric group (cf. [KV]).
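With φ_P(v) = d(?, v)/|v|! — the formula we have filled in for Proposition 4.3, chosen because harmonicity then reduces exactly to (A.2.1) — the mean-value property can be checked numerically with the path-counting and successor routines from the earlier sketches (ours, for illustration only).

```python
from math import factorial

def phi_plancherel(v):
    return num_paths("", v) / factorial(sum(int(c) for c in v))

def check_plancherel_harmonic(n_max=8):
    for level in yf_levels(n_max):
        for v in level:
            total = sum(phi_plancherel(w) for w in successors(v))
            assert abs(phi_plancherel(v) - total) < 1e-12

check_plancherel_harmonic()
```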
Contraction of harmonic functions on a differential poset.
Assume that \Gamma is a differential poset. We shall show that for any harmonic function
' there is a family of affine transformations, with one real parameter - , connecting the
Plancherel function 'P to '.
Proposition 4.4. For 0 - 1 and a harmonic function ', define a function C - (')
on the set of vertices of the differential poset \Gamma by the formula
juj=k
Then C - (') is a positive normalized harmonic function, and the map ' 7! C t (') is
affine.
Proof. We introduce the notation
juj=k
First we observe the identity
w:v%w
which is obtained from a straightforward computation using (A.2.3) from the Appendix,
and the harmonic property (3.1) of the function '. From this we derive that
w:v%w
w:v%w
This shows that C - (') is harmonic. It is easy to see that C - (') is normalized and
positive, and that the map ' 7! C t (') is affine.
Remarks. (a) The semigroup property holds: C t (C s st ('); (b) C 0
'P , for all ', and C t ('P These statements
can be verified by straightforward computations.
Example. Let ' denote the indecomposable harmonic function on the Young lattice
with the Thoma parameters (ff; fi; fl), see [KV] for definitions. Then the function C - (')
is also indecomposable, with the Thoma parameters (- ff; -
Central measures and contractions.
Recall (see [KV]) that for any harmonic function ' on \Gamma there is a central measure
M ' on the space T of paths of \Gamma, determined by its level distributions
In particular,
There is a simple probabilistic description of the central measure corresponding to
a harmonic function on a differential poset obtained by the contraction of Proposition
4.4. Define a random vertex by the following procedure:
(a) Choose a random k, 0 - k - n with the binomial distribution
(b) Choose a random vertex
(c) Start a random walk at the vertex u, with the Plancherel transition probabilities
Let v denote the vertex at which the random walk first hits the n'th level set \Gamma n . We
denote by M (-;')
n the distribution of the random vertex v.
Proposition 4.5. The distribution M (-;')
n is the n'th level distribution of the central
measure corresponding to the harmonic function C - ('):
Proof. It follows from (A.2.1) that (4.10) is a probability distribution. By Lemma A.3.2,
the probability to hit \Gamma n at the vertex v, starting the Plancherel walk at
The Proposition now follows from the definition of the contraction C - (') written in the
Example. Y be the Young lattice and let be the one-
row Young tableau. Then the distribution (4.9) is trivial, and the procedure reduces
to choosing a random row diagram (k) with the distribution (4.8) and applying the
Plancherel growth process until the diagram will gain n boxes.
§5. Okada clone of the symmetric function ring
In this Section we introduce the Okada variant of the symmetric function algebra,
and its two bases analogous to the Schur function basis and the power sum basis. The
Young-Fibonacci lattice arises in a Pieri-type formula for the first basis.
The rings R and R∞.
Let R denote the ring of all polynomials in two non-commuting variables X, Y over Q. We endow R with a structure of graded ring, R = ⊕_{n≥0} Rn, by declaring the degrees of the variables to be deg X = 1, deg Y = 2. For each word v ∈ YF,
let h v denote the monomial
Then Rn is a Q-vector space with the fn (Fibonacci number) monomials h v as a basis.
We let R∞ denote the inductive limit of the linear spaces Rn, with respect to the imbeddings Q → QX. Equivalently, R∞ is the quotient of R by the principal left ideal generated by X − 1. Linear functionals on R∞ are identified with linear functionals φ on R which satisfy φ(fX) = φ(f). The ring R∞ has a similar role for the Young-Fibonacci lattice and the Okada algebra F as the ring of symmetric functions has for the Young lattice and the group algebra of the infinite symmetric group S∞ (see [M]).
Non-commutative Jacobi determinants.
The following definition is based on a remark which appeared in the preprint version
of [Ok]. We consider two non-commutative n-th order determinants
and
By definition, the non-commutative determinant is the expression
w2Sn
sign(w) a w(1)1 a
In other words, the k-th factor of every term is taken out of the k-th column. Note that
polynomials (5.3), (5.4) are homogeneous elements of R, deg
Following Okada, we define Okada-Schur polynomials (or s-functions) as the products
(cf. [Ok], Proposition 3.5). The polynomials s v for are homogeneous of degree
n, and form a basis of the linear space Rn . We define a scalar product on the
space R by declaring s-basis to be orthonormal.
The branching of Okada-Schur functions.
We will use the formulae
obtained by decomposing the determinants (5.3), (5.4) along the last column. The first
identity is also true for case of the second identity
(5.8) can be written in the form
One can think of (5.9) as of a commutation rule for passing X over a factor of type Q 0 .
It is clear from (5.9) that
It will be convenient to rewrite (5.7), (5.8) in a form similar to (5.9):
The following formulae are direct consequences of (5.10) - (5.12):
It is understood in (5.13), (5.14) that n - 1.
The formulae (5.10), (5.13) and (5.14) imply
Theorem 5.1 (Okada). For every w 2 YFn the product of the Okada-Schur determinant
s w by X from the right hand side can be written as
This theorem says that the branching of Okada s-functions reproduces the branching
law for the Young-Fibonacci lattice. In the following statement, U is the "creation
operator" on Fun(YF), which is defined in the Appendix, (A.1.1).
Corollary 5.2. The assignment \Theta : v 7! s v extends to a linear isomorphism
taking Fun(YFn ) to Rn and satisfying \Theta ffi
Because of this, we will sometimes write U(f) instead of fX for f 2 R, and D(f)
for \Theta ffi D ffi \Theta \Gamma1 (f ), see (A.1.2) for the definition of D.
Corollary 5.3. There exist one-to-one correspondences between:
(a) Non-negative, normalized harmonic functions on YF;
(b) Linear functionals ' on Fun(YF) satisfying
(c) Linear functionals ' on R satisfying
(d) Linear functionals on
s v denotes the image of s v in
Traces of the Okada algebra F1 .
We refer to linear functionals ' on R satisfying '(s v positive linear functionals
The Okada p-functions.
Following Okada [Ok], we introduce another family of homogeneous polynomials,
labelled by Fibonacci words v 2 YF,
where
One can check that fp v g jvj=n is a Q-basis of Rn . Two important properties of the
p-basis which were found by Okada are:
Since the images of pu and of p 1u in R1 are the same, we can conveniently denote the
image by p 1 1 u . The family of p v , where v ranges over 1 1 YF, is a basis of R1 .
Transition matrix from s-basis to p-basis.
We denote the transition matrix relating the two bases fpug and fsw g by X v
jvj=n
The coefficients X v
are analogous to the character matrix of the symmetric group Sn .
They were described recurrently in [Ok], Section 5, as follows:
where m(u) is defined in (6.2) below. An explicit product expressions for the X v
be given in the next section.
§6. A product formula for Okada characters
In this Section we improve Okada's description of the character matrix X v
u to obtain
the product formula (6.11) and its consequences.
Some notation.
We recall some notation from [Ok] which will be used below. Let v be a Fibonacci
word:
Then:
if the rightmost digit of v is 1, and
is the number of 1's at the left end of v.
(6.3) The rank of v, denoted jvj, is the sum of the digits of v.
then the position of the indicated 2 is jv
1). In other words, d(v) is the product of the
positions of 2's in v.
(6.7) The block ranks of v are the numbers k 0
(6.8) The inverse block ranks of v are k t
Consider a sequence - positive integers with
a word v 2 YFn -
n-splittable, if it can be written as a concatenation
Lemma 6.1. Let - be the sequence of block ranks in a Fibonacci
word u. Then
only if, the word v is -
n-splittable.
is the -
n-splitting, then
where
Proof. This is a direct consequence of Okada's recurrence relations cited in the previous
Proposition 6.2. Let u; v 2 YFn . Let be the positions of 2's in the word
u, and put r the positions of 2's in the word v. Then
Y
Y
Proof. This can also be derived directly from Okada's recurrence relations, or from the
previous lemma. Note in particular that X v
only if, d
and j; this is the case if, and only if, v does not split according to the block ranks of
u.
We define ~
u ; from Proposition 6.2 and the dimension formula (6.5),
we have the expression
Y
Y
d s
The inverse transition matrix.
According to [Ok], Proposition 5.3, the inverse formula to Equation (5.18) can be
written in the form
juj=n
pu
We will give a description of a column X v
u for a fixed v.
Lemma 6.3. Let - be the sequence of inverse block ranks n
in a word
only if, the word u is - n-splittable.
(ii) If is the - n-splitting for u, then
where
ae \Gamma1; if ffl(u
Here denotes the number of 1's at the left end of
Proof. This is another corollary of Okada's recurrence relations cited in Section 5.
§7. The Martin Boundary of the Young-Fibonacci Lattice
In this section, we examine certain elements of the Martin boundary of the Young-
Fibonacci lattice YF. Ultimately we will show that the harmonic functions listed here
comprise the entire Martin boundary.
It will be useful for us to evaluate normalized positive linear functionals on the ring
(corresponding to normalized positive harmonic functions on YF) on the basis fpug.
The first result in this direction is the evaluation of the Plancherel functional on these
basis elements.
Proposition 7.1. φ_P(p_u) = 0 for all words u containing at least one 2.
Proof. It follows from the definition of the Plancherel harmonic function 'P that for
v:v%w
Therefore, for all f 2 Rn ,
If
since Dp
For each word w 2 YFn , the path (w; 1w; 1 clearly satisfies the conditions of
Proposition 4.1, and therefore there is a type I harmonic function on YF defined by
Proposition 7.2. Let w 2 YFn , and let d be the positions of 2's in w. Let u
be a word in 1 1 YF containing at least one 2, and let be the positions of
2's in u. Then:
Y
Y
Proof. Let
Thus the result follows from Equation (6.12).
Next we describe some harmonic functions which arise from summable infinite words.
Given a summable infinite word w, define a linear functional on the ring R1 by the
requirements 'w
Y
Y
where are the positions of 2's in u, and the d j 's are the
positions of 2's in w. It is evident that 'w (p u that 'w is in
fact a functional on R1 .
Proposition 7.3. If w is a summable infinite Fibonacci word, then 'w is a normalized
positive linear functional on R1 , so corresponds to a normalized positive harmonic
function on YF.
Proof. Only the positivity needs to be verified. Let wn be the finite word consisting
of the rightmost n digits of w. It follows from the product formula for the normalized
characters /wn that 'w (p u
/wn (p u ). Therefore also 'w (s v
Given a summable infinite Fibonacci word w and 0 - fi - 1, we can define the
harmonic function ' fi;w by contraction of 'w , namely, '
For denote the essential rank of u, namely
ffi is the position of the leftmost 2 in u, and
Proposition 7.4. Let w be a summable infinite word and 0 - fi - 1. Let u
Then
Proof. The case
any linear functional ' on R1 , one has '(pu
A w
jxj=n
In particular,
A w
jvj=n
Note that UA w
It follows from the definitions of ' fi;w (cf.
(4.4)) and of A w
k;n that
A w
for f in Rn , where h\Delta; \Deltai denotes the inner product on R with respect to which the Okada
s-functions form an orthonormal basis. Recall that the operators U and D are conjugate
with respect to this inner product. Consequently,
A w
since hUA w
in Rn .
Corollary 7.5. The functionals ' fi;w for fi ? 0 and w summable are pairwise distinct,
and different from the Plancherel functional 'P .
Proof. Suppose that w is a summable word and that A is the set of positions of 2's in
w. We set
Y
Then for each k - 2 and fi ? 0,
is zero if and only if k 2 A. In particular ' fi;w 6= 'P , by Lemma 7.1, and moreover,
A n f1g is determined by the sequence of values ' fi;w (p 21 k\Gamma2 2. It is also clear
that ' fi;w (p 2 negative hence the set A and therefore also w are
determined by the values ' fi;w (p 21 k Finally, fi is determined by
for any k 62 A.
Proposition 7.6. For each summable infinite Fibonacci word w and each fi, 0 - fi - 1,
there exists a sequence v (n) of finite Fibonacci words such that '
Proof. If choose the
sequence r n so that
lim
Then, in every case,
lim
Let wn be the finite word consisting of the rightmost n digits of w, put s
and
Fix j. Suppose that u 0 has 2's at positions
1. Using the product formula for / v (n) , one
obtains
The first factor converges to 'w (p u ), so it suffices, by Proposition 7.4, to show that the
second factor converges to fi k . The second factor reduces to
Using the well-known fact that
lim
one obtains that the ratio of gamma functions is asymptotic to
which, by our choice of r n , converges to fi k , as desired.
This proposition shows that 'P as well as all of the harmonic functions ' fi;w are contained
in the Martin boundary of the Young-Fibonacci lattice. In the following sections,
we will show that these harmonic functions make up the entire Martin boundary.
§8. Regularity conditions
In this Section we obtain a simple criterion for a sequence of characters of finite dimensional
Okada algebras to converge to a character of the limiting infinite dimensional
. Using this criterion, the regularity conditions, we show that the
harmonic functions provided by the formulae (7.3) and (7.4) make up the entire Martin
boundary of the Young-Fibonacci graph YF. Technically, it is more convenient to work
with linear functionals on the spaces Rn and their limits in
with traces on F . As it was already explained in Section 5, there is a natural one-to-one
correspondence between traces of Okada algebra Fn , and positive linear functionals on
the space Rn .
In this Section we shall use the following elementary inequalities:
d
for every pair of positive integer numbers d - 2 and k;
d
and
d
d
for every pair of integers d ? k. We omit the straightforward proofs of these inequalities.
Convergence to the Plancherel measure.
We first examine the important particular case of convergence to the Plancherel
character 'P .
We define the function - of a finite or summable word v by
Y
where the d j are the positions of the 2's in v. We also recall that for each k - 2 the
function - k was defined as
Y
Note that if is a summable word, then ' v (p u according
to Equation (7.3).
Proposition 8.1. The following properties of a sequence wn 2 YF, are
equivalent:
(i) The normalized characters /wn converge to the Plancherel character, i.e.,
lim
/wn (p u
for each
(ii) lim
for every
(iii) lim
-(wn
The proof is based on the following lemmas.
Lemma 8.2. For every finite word v 2 YF, and for every of essential rank
Proof. Let indicate the positions of 2's in the word u, and let d
the positions of 2's in v. The essential rank of u can be written as
so that
Y
By the product formula,
Y
since none of the factors in the product exceed 1. In fact, j1
Lemma 8.3. For each and for every word v 2 YF,
Proof. It follows from (8.1) that
Y
Y
Y
Y
2, the last product can be estimated as
Y
and the lemma follows.
Lemma 8.4. If d 1 (v) 6= 2, then
Proof. We apply the inequality (8.2). If d 1 (v) - 3, then
Y
Y
by the inequality (8.2). If d 1 \Gamma1, and since d 2 - 3, the
inequality holds in this case as well.
In case of d 1
so that the second inequality of Lemma also follows from (8.2).
Proof of Proposition 8.1. The implication (i) ) (ii) is trivial, since - k
a particular character value for . The statement (iii) follows from (ii) by
Lemma 8.4. In fact, we can split the initial sequence fwng into two subsequences, fw 0
and fw 00
n g, in such a way that d 1 (w 0
2. Then we derive from Lemma
8.4 that for both subsequences -(wn
(ii) follows from (iii) by Lemma 8.3, and (ii) implies (i) by Lemma 8.2.
General regularity conditions.
We now find the conditions for a sequence of linear functionals on the spaces Rn to
converge to a functional on the limiting space
Definition (Regularity of character sequences). Let /n be a linear functional on the
graded component Rn of the ring for each assume that
the sequence converges pointwise to a functional ' on the ring R, in the sense that
for every m 2 N and every polynomial P 2 Rm . We call such a sequence regular.
Our goal in this Section is to characterize the set of regular sequences.
Definition (Convergence of words). Let fwng be a sequence of finite Fibonacci words,
and assume that the ranks jwn j tend to infinity as n !1. We say that fwng converges
to an infinite word w, iff the mth letter wn (m) of wn coincides with the mth letter w(m)
of the limiting word w for almost all n (i.e., for all but finitely many n's), and for all m.
Let us recall that an infinite word w with 2's at positions d_1 < d_2 < ... is summable if, and only if, the series Σ_j 1/d_j converges or, equivalently, if the corresponding infinite product converges.
Consider a sequence w words converging to a summable infinite
word w. We denote by w 0
n the longest initial (rightmost) subword of wn identical with
the corresponding segment of w, and we call it stable part of wn . The remaining part
of wn will be denoted by w 00
n , and referred to as transient part of wn .
Definition (Regularity Conditions). We say that a sequence of Fibonacci words wn 2
YFn satisfies regularity conditions, if either one of the following two conditions holds:
(i) lim
-(wn
(ii) the sequence wn converges to a summable infinite word w, and a strictly positive
limit
exists.
Theorem 8.5. Assume that the regularity conditions hold for a sequence wn 2 YFn .
Then the character sequence /wn is regular. If the regularity condition (i) holds, then
lim
/wn (QX n\Gammam
and if regularity condition (ii) holds, then
/wn (QX n\Gammam
for every polynomial Q Conversely, if the character sequence /wn
is regular, then the regularity conditions hold for the sequence wn 2 YFn .
This theorem will follow from Proposition 8.1 and the following proposition:
Proposition 8.6. Assume that a sequence w words converges to
a summable infinite word w, and that there exists a limit
-(wn
Then
for every generally,
/wn (p u
for every element u of essential rank
Proof. Let mn be the length of the stable part of the word wn , and note that mn !1.
In the following ratio, the factors corresponding to 2's in the stable part of wn cancel
out,
-(wn
Y
and a similar formula holds for the functional - k ,
Y
Consider the ratio
Y
The second product in the right hand side is a tail of the converging infinite product
(since the word w is summable), hence converges to 1, as n ! 1. By 8.3, the first
product can also be estimated by a tail of a converging infinite product,
Y
Y
j:d?mn
hence converges to 1, as well.
The proof of the formula (8.10) is only different in the way that the ratio in the left
hand side of (8.11) should be replaced by /wn (p u )='w (p u ).
Proof of the Theorem 8.5. It follows directly from Propositions 8.1 and 8.6 that the
regularity conditions for a sequence w imply convergence of functionals /wn to
the Plancherel character 'P in case (i), and to the character ' fi;w in case (ii). By the
Corollary 7.5 we know that the functions ' fi;w are pairwise distinct, and also different
from the Plancherel functional 'P .
Let us now assume that the sequence /wn converges to a limiting functional '. We
can choose a subsequence /wnm in such a way that the corresponding sequence wmn
converges digitwise to an infinite word w. If w is not summable, then
with the Plancherel functional, and the part (i) of the regularity conditions holds. Oth-
erwise, we can also assume that the limit (8.6) exists, and hence
8.6. Since the parameter fi and the word w can be restored, by Corollary 7.5, from the
limiting functional ', the sequence wn cannot have subsequences converging to different
limits, nor can the sequence -(wn ) have subsequences converging to different limits. It
follows, that the regularity conditions are necessary. The Theorem is proved.
In the following
statement,\Omega refers to the space defined at the end of Section 3.
Theorem 8.7. The map
is a homeomorphism
of\Omega onto the Martin boundary of YF.
Proof. It follows from Corollary 7.5 that the map is an injection
of\Omega into the Martin
boundary, and from Theorem 8.5 that the map is surjective. Furthermore, the proof
that the map is a homeomorphism is a straightforward variation of the proof of the
regularity statement, Theorem 8.5.
§9. Concluding remarks
The Young-Fibonacci lattice, along with the Young lattice, are the most interesting
examples of differential posets. There is a considerable similarity between the two
graphs, as well as a few severe distinctions.
Both lattices arise as Bratteli diagrams of increasing families of finite dimensional
semisimple matrix algebras, i.e., group algebras of symmetric groups in case of Young
lattice, and Okada algebras in case of Young-Fibonacci graph. For every Bratteli dia-
gram, there is a problem of describing the traces of the corresponding inductive limit
algebra, which is well-known to be intimately related to the Martin boundary construction
for the graph. The relevant fact is that indecomposable positive harmonic functions,
which are in one-to-one correspondence with the indecomposable traces, form a part of
the Martin boundary.
For the Young lattice the Martin boundary has been known for several decades,
and all of the harmonic functions in the boundary are known to be indecomposable
(extreme points). In this paper we have found the Martin boundary for the Young-
Fibonacci lattice. Unfortunately, we still do not know which harmonic functions in the
boundary are decomposable (if any). The method employed to prove indecomposability
of the elements of the Martin boundary of the Young lattice can not be applied to
Young-Fibonacci lattice, since the K 0 -functor ring R of the limiting Okada algebra F
is not commutative, as it is in case of the group algebra of the infinite symmetric group
(in this case it can be identified with the symmetric function ring).
Another natural problem related to Okada algebras is to find all non-negative Markov
traces. We plan to address this problem in another paper.
Appendix
In this appendix, we survey a few properties of differential posets introduced by R.
Stanley in [St1-3]. A more general class of posets had been defined by S. Fomin in
[F1-3]. In his terms differential posets are called self - dual graphs.
A.1 Definitions.
A graded poset Γ = ∪_{n≥0} Γ_n is called a branching diagram (cf. [KV]), if
(B1) The set Γ_n of elements of rank n is finite for all n ≥ 0;
(B2) There is a unique minimal element ? of rank 0;
(B3) There are no maximal elements in Γ.
One can consider a branching diagram as an extended phase space of a non - stationary
being the set of admissible states at the moment n and covering
relations indicating the possible transitions.
We denote the rank of a vertex the number of saturated
chains in an interval [u; v] ae \Gamma by d(u; v).
Following [St1], we define an r-differential poset as a branching diagram \Gamma satisfying
two conditions:
(D1) If u ≠ v in Γ then the number of elements covered by both u and v is the same as the number of elements covering both u and v;
(D2) If v covers exactly k elements, then v is covered by exactly k + r elements of Γ.
Note that the number of elements in a differential poset covering two distinct elements
can be at most 1. In this paper we focus on 1-differential posets.
For any branching diagram Γ one can define two linear operators in the vector space Fun(Γ) of functions on Γ with coefficients in Q: the creation operator
(Uf)(w) = Σ_{v : v % w} f(v),   (A.1.1)
and the annihilation operator
(Df)(u) = Σ_{v : u % v} f(v).   (A.1.2)
Identifying finitely supported functions on Γ with formal linear combinations of points of Γ, and vertices of Γ with the delta functions at the vertices, one can write instead:
U(v) = Σ_{w : v % w} w,   D(v) = Σ_{u : u % v} u.
One can characterize r-differential posets as branching diagrams for which the operators U, D satisfy the Weyl identity DU − UD = rI.
A.2 Some properties of differential posets.
We review below only a few identities we need in the main part of the paper. For a
general algebraic theory of differential posets see [St1-3], [Fo1-3]. Assume here that \Gamma
is a 1-differential poset.
The first formula is well known: for every v ∈ Γ with |v| = n,
Σ_{w : v % w} d(?, w) = (n + 1) d(?, v).   (A.2.1)
Proof. Let
can be written
as . This is trivial for assuming D
Our next result is a generalization of (A.2.1).
Lemma A.2.2. Let \Gamma be a 1-differential poset, and let u - v be any vertices of ranks
w:v%w
Proof. Using the notation dn
jvj=n d(u; v) v, one can easily see that U dn
dn+1 (u) and that (A.2.3) can be rewritten in the form
For the formula is true by the definition of D. By the induction argument,
Note that (A.2.3) specializes to (A.2.1) in case
A.3 Plancherel transition probabilities on a differential poset.
It follows from (A.2.1) that the numbers
q(v, w) = d(?, w) / ((n + 1) d(?, v)),   v % w, |v| = n,   (A.3.1)
can be considered as transition probabilities of a Markov chain on Γ. Generalizing the terminology used in the particular example of the Young lattice (see [KV]), we call (A.3.1) the Plancherel transition probabilities.
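Taking (A.3.1) in the form reconstructed above, the Plancherel growth process on YF can be simulated in a few lines; the sketch below is ours and reuses num_paths and successors from the earlier sketches.

```python
import random

def plancherel_step(v):
    n = sum(int(c) for c in v)
    succ = successors(v)
    weights = [num_paths("", w) / ((n + 1) * num_paths("", v)) for w in succ]
    return random.choices(succ, weights=weights)[0]   # the weights sum to 1 by (A.2.1)

def plancherel_walk(n_steps):
    v = ""
    for _ in range(n_steps):
        v = plancherel_step(v)
    return v
```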
Lemma A.3.2. Let u ≤ v be vertices of ranks m = |u| and n = |v| in a 1-differential poset Γ. Then the Plancherel probability p(u, v) to reach (by any path) the vertex v starting with u is
p(u, v) = (m!/n!) · d(u, v) · d(?, v) / d(?, u).
Proof. We have to check that
d(u; w), we obtain
jvj=n
and the Lemma follows.
--R
Dimensions and C
Generalized Robinson - Schensted - Knuth correspondence
Duality of graded graphs
Schensted algorithms for dual graded graphs
Algebras associated to the Young-Fibonacci lattice
Variations on
Further combinatorial properties of two Fibonacci lattices
Discrete potential theory and boundaries
The Grothendieck Group of the Infinite Symmetric Group and Symmetric Functions with the Elements of the K 0
2nd edition
Asymptotic character theory of the symmetric group
Thoma's theorem and representations of infinite bisymmetric group
Analysis and its Appl.
The Boundary of Young Graph with Jack Edge Multiplicities
Department of Mathematics
--TR
Further combinatorial properties of two Fibonacci lattices
Schensted Algorithms for Dual Graded Graphs
--CTR
Jonathan David Farley , Sungsoon Kim, The Automorphism Group of the Fibonacci Poset: A Not Too Difficult Problem of Stanley from 1988, Journal of Algebraic Combinatorics: An International Journal, v.19 n.2, p.197-204, March 2004 | differential poset;okada algebra;harmonic function;non-commutative symmetric function;martin boundary |
335045 | Structural gate decomposition for depth-optimal technology mapping in LUT-based FPGA designs. | In this paper we study structural gate decomposition in general, simple gate networks for depth-optimal technology mapping using K-input Lookup-Tables (K-LUTs). We show that (1) structural gate decomposition in any K-bounded network results in an optimal mapping depth smaller than or equal to that of the original network, regardless of the decomposition method used; and (2) the problem of structural gate decomposition for depth-optimal technology mapping is NP-hard for K-unbounded networks when K ≥ 3 and remains NP-hard for K-bounded networks when K ≥ 5. Based on these results, we propose two new structural gate decomposition algorithms, named DOGMA and DOGMA-m, which combine the level-driven node-packing technique (used in Chortle-d) and the network flow-based labeling technique (used in FlowMap) for depth-optimal technology mapping. Experimental results show that (1) among five structural gate decomposition algorithms, DOGMA-m results in the best mapping solutions; and (2) compared with speed_up (an algebraic algorithm) and TOS (a Boolean approach), DOGMA-m completes decomposition of all tested benchmarks in a short time while speed_up and TOS fail in several cases. However, speed_up results in the smallest depth and area in the following technology mapping steps. | Introduction
The field programmable gate arrays (FPGAs) have been widely used in circuit design
implementation and system prototyping for the advantages of short design cycles and low non-recurring
engineering cost. An important class of FPGAs use lookup-tables (LUTs) as the basic logic elements. A
K-input LUT (K-LUT) which consists of 2 K SRAM cells can store the truth table of arbitrary Boolean
function of up to K variables. By connecting LUTs into a network, LUT-based FPGAs can be used to
implement circuit designs in a short time.
Logic synthesis for LUT-based FPGAs transforms networks of logic gates into functionally
equivalent LUT networks. The process is usually divided into two tasks: logic optimization and
technology mapping. Logic optimization extracts common subfunctions to reduce the circuit size and/or
resynthesizes critical paths to reduce the circuit delay. Technology mapping consists of two subtasks:
gate decomposition and LUT mapping. In gate decomposition, large gates are decomposed into gates of
at most K inputs (that is, K-bounded). The resulting K-bounded network is then mapped onto (i.e.,
covered by) K-LUTs in the LUT mapping step. The separation of optimization and mapping tasks is
artificial. Some LUT synthesis algorithms (e.g. [LaPP94] and [WuEA95]) decompose collapsed networks
into LUT networks directly. The objectives of these tasks include area or delay minimization, routability
maximization, or a combination of them. A comprehensive survey of gate decomposition, LUT mapping,
and logic synthesis algorithms for LUT-based FPGAs can be found in [CoDi96].
The delay of an LUT network can be measured by the number of levels (or depth) in the network
under the unit delay model. A number of algorithms were proposed in the past for delay-oriented LUT
mapping. We classify them into two classes. The first class of algorithms, such as Chortle-d [FrRV91b],
DAG-Map [ChCD92], and FlowMap [CoDi94a], perform LUT mapping without logic resynthesis.
Among these algorithms, Chortle-d guarantees depth-optimal technology mapping for simple-gate tree
networks and FlowMap guarantees depth-optimal LUT mapping for general K-bounded networks.
Following FlowMap, FlowMap-r [CoDi94b] and CutMap [CoHw95] further reduce the mapping area,
and FlowMap-d [CoDi94c] and Edge-Map [YaWo94] minimize delay under a more accurate net delay
model. Another class of LUT mapping algorithms, such as MIS-pga-delay [MuSB91], TechMap-D
[SaTh93], FlowSyn [CoDi93], and ALTO [HuJS96] collapse critical paths followed by delay-oriented
logic resynthesis. Due to resynthesis, this class of algorithms could obtain mapping depth smaller than
the optimal depth computed by FlowMap, but usually with longer computation time.
Gate decomposition may affect significantly the network depth obtained by the algorithms in the
first LUT mapping class. For example, the network in Figure 1(a) is not a K-bounded network for
When node v is decomposed as shown in Figure 1(b), any mapping algorithm will result in a depth of 3 or
larger. But if node v is decomposed in the way shown in Figure 1(c), a mapping solution with a depth of
2 can be obtained. In addition, when a K-bounded network is further decomposed, the mapping depth
could be reduced. Figure 2(a) shows a 3-bounded network. For K = 3, FlowMap produces a 3-level
mapping solution of five LUTs. (Every shaded square represents an LUT in the figure.) But if node v is
further decomposed, FlowMap produces a 2-level network of four LUTs (Figure 2(b)). The two
examples demonstrate that gate decomposition affects the depth obtained by LUT mapping algorithms.
We classify gate decomposition methods into structural, algebraic, or Boolean approaches.
Structural gate decomposition can only be applied to simple gates (e.g., AND gates, OR gates, XOR
gates). Complex gates need to be transformed into simple gates (e.g., via AND-OR decomposition)
before any structural decomposition. The tech_decomp algorithm in SIS [SeSL92], the dmig algorithm
[Wa89, ChCD92], and the Chortle family of mapping algorithms [FrRC90, FrRV91a, FrRV91b] all
perform structural gate decomposition. In algebraic gate decomposition approaches, networks are
(a) (b) (c)
Figure
1 Impact of gate decomposition on mapping depth for
(a) Initial network. (b) A decomposition resulting in a mapping depth of 3.
(c) A decomposition resulting in a mapping depth of 2.
(a) Before Decomposition (b) After Decomposition
Figure
Gate decomposition in a K-bounded network Initial K-bounded network
with a mapping depth of 3. (b) Decomposed network with a mapping depth of 2.
usually partially collapsed and gates are represented in the sum-of-product (SOP) form. Common logic
subfunctions are then extracted with algebraic divisions [Ru89, De94]. The speed_up algorithm in SIS
[SeSL92] is an algebraic approach which collapses critical paths followed by network resynthesis for
delay minimization. In Boolean gate decomposition approaches, logic gates are decomposed via
functional operations. Shannon expansion, if-then-else (ITE) decomposition, and AND-OR
decomposition are very common Boolean gate decomposition operations. Recently, functional
decomposition techniques [AS59, Cu61, RoKa62] are used in a number of LUT network synthesis
algorithms [LaPP94, WuEA95, LeWE96]. In these algorithms, networks are completely collapsed
whenever possible for the outputs to be represented as functions of the network inputs directly. The
output functions are then decomposed into composed K-input subfunctions for implementation using K-
LUTs. Optional LUT mapping steps may follow to improve the synthesis results. The FGSyn algorithm
[LaPP94] and the BoolMap-D algorithm [LeWE96] take this approach for delay-oriented LUT network
synthesis. Generally speaking, algebraic approaches and Boolean approaches are more effective for both
area and delay minimization in technology mapping while structural approaches are usually faster.
Hybrid approaches such as algebraic decompositions followed by structural decompositions are used in
many logic synthesis approaches.
In this paper, we study structural gate decomposition for delay minimization in general networks
with the following motivations. First, we have shown that how gates are decomposed can affect the
mapping depth computed by FlowMap. A good gate decomposition step allows mapping algorithms to
obtain the smallest mapping depth. Second, structural gate decomposition allows arbitrary grouping of
gate inputs for our optimization objective while algebraic or Boolean approaches do not have this
advantage. Third, structural gate decomposition is computationally efficient. This is an important factor
for mapping large designs and estimating the mapping delay or area. Nowadays, the IC process
technology has advanced to 0.25 -m and below. Million-gate FPGAs have become a reality. Structural
gate decomposition algorithms can be employed in the technology mapping approaches along with this
technology trend.
Several delay-oriented structural gate decomposition algorithms were proposed in the past. The
tech_decomp algorithm [SeSL92] decomposes each simple gate into a balanced fanin tree to minimize
the number of levels locally. The dmig algorithm [Wa89, ChCD92] is based on the Huffman coding
algorithm and guarantees the minimum depth in the decomposed network. However, the mapping depth
might not be the minimum. The network in Figure 1(b) is actually decomposed using dmig and results in
a suboptimal mapping depth. The Chortle-d algorithm [FrRV91b] employs bin packing heuristics to
achieve depth minimization, but is optimal for trees only. In this paper, we go one step further. We shall
develop structural gate decomposition algorithms for depth-optimal technology mapping on general
networks.
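The following sketch illustrates the Huffman-style idea behind a dmig-like decomposition, assuming the depth of every fanin is already known: the two shallowest fanins are merged first, which minimizes the depth of the resulting tree of 2-input gates. This is our own illustrative Python code, not the dmig implementation, and it ignores LUT covering entirely, which is exactly why, as noted above, the mapping depth after covering may still be suboptimal.

    import heapq
    from itertools import count

    def huffman_decompose(fanin_depths):
        """Decompose one simple gate with the given fanin depths into a tree of
        2-input gates, always merging the two shallowest inputs first
        (dmig-style).  Returns the root depth and the list of merges made."""
        ids = count()
        heap = [(d, next(ids), ('leaf', i)) for i, d in enumerate(fanin_depths)]
        heapq.heapify(heap)
        merges = []
        while len(heap) > 1:
            d1, _, a = heapq.heappop(heap)
            d2, _, b = heapq.heappop(heap)
            depth = max(d1, d2) + 1          # a 2-input gate adds one level
            merges.append((a, b, depth))
            heapq.heappush(heap, (depth, next(ids), ('gate', a, b)))
        root_depth = heap[0][0] if heap else 0
        return root_depth, merges

    # huffman_decompose([1, 1, 2, 3]) returns root depth 4 for this example.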
The rest of this paper is organized as follows. Section 2 defines the terminology, presents general
properties and formulates the structural gate decomposition problems. Section 3 addresses the NP-completeness
of the problems. Section 4 presents two new algorithms, named DOGMA and DOGMA-
m, for structural gate decomposition. Experimental results are presented in Section 5 and Section 6
concludes the paper. A preliminary version of this work was published in DAC'96 [CoHw96]; it omitted the proofs of the theorems and considered only single-gate decomposition.
2. Problem Formulation
2.1. Definitions and Preliminaries
A combinational Boolean network N can be represented by a directed acyclic graph N = (V, E), where each node v ∈ V represents a logic gate and each directed edge (u, v) ∈ E represents a connection from the output of node u to the input of node v. A node v is a simple gate if v implements one of the following functions: AND, OR, XOR, or their inversions. Primary inputs (PIs) are nodes of in-degree zero. Other nodes are internal nodes, among which some are designated as primary outputs (POs). A node v is a predecessor of a node u if there is a directed path from v to u in N. The depth of a node v is the number of edges on the longest path from any PI to v. Each PI has a depth of zero. The depth of a network is the largest depth over the nodes in the network. Let input(v) and fanout(v) represent the set of fanins and the set of fanouts of node v, respectively. Given a subgraph H of N, let input(H) denote the set of distinct nodes outside H that supply inputs to nodes in H. A fanin cone C_v rooted at v is a connected subnetwork consisting of v and its predecessors. Node v is the root node of C_v and is denoted as root(C_v). Let K be the LUT input size. A node v is K-bounded if |input(v)| ≤ K. Otherwise, v is K-unbounded. A network N is K-bounded if it contains only K-bounded nodes.
Given a K-bounded network N, a set M = {L_1, L_2, ..., L_m} of subnetworks is a K-LUT mapping solution of N if
(C1) for every L_i ∈ M, L_i is a fanin cone in N and |input(L_i)| ≤ K,
(C2) for every L_i ∈ M, input(L_i) contains only PIs or root nodes of other subnetworks in M,
(C3) for every L_i ∈ M, root(L_i) is either a PO or belongs to input(L_j) for some L_j ∈ M, and
(C4) for every PO v of N, v is the root node of some subnetwork in M.
A mapping solution M is duplication-free if L_i ∩ L_j = ∅ for all L_i ≠ L_j in M. By implementing every subnetwork in M using a K-LUT, we obtain a K-LUT network which is functionally equivalent to N. The mapping area and the mapping depth of M are the LUT count (i.e., |M|) and the depth of the K-LUT network that implements M, respectively.
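A minimal data structure for these definitions is sketched below. It is an illustrative Python class of our own (the algorithms reported later were implemented in C inside RASP); it records the fanins of every node and provides the depth and K-boundedness tests used throughout this section.

    from collections import defaultdict

    class BoolNetwork:
        """Plain DAG representation: node -> list of fanin nodes; PIs have none."""
        def __init__(self):
            self.fanins = defaultdict(list)
            self.pos = set()                 # primary outputs

        def add_node(self, v, fanins=()):
            self.fanins[v] = list(fanins)

        def is_k_bounded(self, k):
            return all(len(f) <= k for f in self.fanins.values())

        def depth(self, v, memo=None):
            """Number of edges on the longest PI-to-v path (PIs have depth 0)."""
            memo = {} if memo is None else memo
            if v not in memo:
                ins = self.fanins[v]
                memo[v] = 0 if not ins else 1 + max(self.depth(u, memo) for u in ins)
            return memo[v]

    # Example: v = AND(a, b, c) with PIs a, b, c.
    n = BoolNetwork()
    for pi in 'abc':
        n.add_node(pi)
    n.add_node('v', 'abc'); n.pos.add('v')
    assert n.is_k_bounded(3) and not n.is_k_bounded(2)
    assert n.depth('v') == 1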
Given a K-bounded network N, let S_K(N) represent the set of K-LUT networks that implement all mapping solutions of N. The minimum mapping depth of N, denoted as MMD(N), is the minimum network depth over all K-LUT networks in S_K(N). Let N_v represent the largest fanin cone rooted at v in N. The minimum mapping depth of a node v ∈ N, denoted as MMD_N(v), is MMD(N_v). The mapping depth of any PI is 0. Given a K-bounded network N, the FlowMap algorithm [CoDi94a] computes MMD_N(v) for every node v ∈ N in polynomial time. A cut in N_v is a partition (X_v, X̄_v) of N_v such that X̄_v is a fanin cone rooted at v and X_v is N_v − X̄_v. The cutset of the cut, denoted as n(X_v, X̄_v), is defined as input(X̄_v). The cut is K-feasible if |n(X_v, X̄_v)| ≤ K. The height of the cut, denoted as height(X_v, X̄_v), is max{MMD_N(u) | u ∈ n(X_v, X̄_v)}. FlowMap computes a min-height K-feasible cut in the fanin cone of each node v to obtain MMD_N(v).
The following two lemmas are on the minimum mapping depth in general networks. Lemma 1
states the monotone property of minimum mapping depth and Lemma 2 gives a way to compute
MMD N (v).
Lemma 1 Let N = (V, E) be a K-bounded network and let node v ∈ V. Then MMD_N(u) ≤ MMD_N(v) for every fanin u ∈ input(v).
Lemma 2 Let N = (V, E) be a K-bounded network, let node v ∈ V, and let p = max{MMD_N(u) | u ∈ input(v)}. Then MMD_N(v) = p if there exists a K-feasible cut of height p − 1 in N_v. Otherwise, MMD_N(v) = p + 1.
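Lemma 2 directly yields a labeling procedure: process the nodes in topological order and set MMD_N(v) to p or p + 1 depending on whether a K-feasible cut of height p − 1 exists. The sketch below is our own skeleton of that idea; the cut-feasibility oracle cut_ok is left abstract and is realized in practice by the max-flow test described in Section 4.1.

    def topological_order(fanins):
        """Children-first order of a DAG given as node -> list of fanins."""
        seen, order = set(), []
        def visit(v):
            if v in seen:
                return
            seen.add(v)
            for u in fanins[v]:
                visit(u)
            order.append(v)
        for v in fanins:
            visit(v)
        return order

    def label_network(fanins, cut_ok):
        """Compute MMD_N(v) for every node, following Lemma 2.  cut_ok(v, h,
        labels) must report whether a K-feasible cut of height h exists in the
        cone N_v; this oracle is abstract here."""
        labels = {}
        for v in topological_order(fanins):
            ins = fanins[v]
            if not ins:                       # primary input
                labels[v] = 0
                continue
            p = max(labels[u] for u in ins)
            labels[v] = p if cut_ok(v, p - 1, labels) else p + 1
        return labels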
2.2. Properties of Structural Gate Decomposition
Simple gates allow arbitrary grouping of their fanins in decomposition. However, the grouping and
the resulting gate size in decomposition can affect significantly the depth and area in the final mapping
solution. In this subsection, we shall show that the best mapping results can only be obtained from
completely decomposed networks.
Let node v be a simple gate in a network N and let |input(v)| ≥ 3. Given a structural gate decomposition algorithm D, a decomposition step D_v on node v (i) chooses two fanins u_1 and u_2 of v, (ii) removes the edges (u_1, v) and (u_2, v), and (iii) introduces a node w and three edges (u_1, w), (u_2, w) and (w, v) to re-connect u_1, u_2 and v. Because v is a simple gate, D_v can always be applied. Node w has the same gate type as node v. For any subnetwork N' = (V', E') of N and a decomposition step D_v, we define D_v(N') = (V' ∪ {w}, E' ∪ {(u_1, w), (u_2, w), (w, v)} − {(u_1, v), (u_2, v)}) if v ∈ V', and D_v(N') = N' if v ∉ N'. A network is completely decomposed when it becomes 2-bounded. In Figure 3(a), N' contains nodes u_1 and v with input(N') = {a, b, u_2, c, d}. Figure 3(b) shows D_v(N') after one decomposition step D_v. The subnetwork is completely decomposed in Figure 3(c). We have the following theorem.
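Operationally a decomposition step is small; the sketch below (our own helper names, with fresh node names drawn from a generator) applies D_v once and then repeats it with an arbitrary fanin pairing until the gate becomes 2-bounded.

    def decompose_step(fanins, v, u1, u2, new_name):
        """One structural decomposition step D_v: replace fanins u1, u2 of the
        simple gate v by a fresh node w = new_name of the same gate type with
        input(w) = {u1, u2}.  fanins maps node -> list of fanin nodes."""
        assert u1 in fanins[v] and u2 in fanins[v] and len(fanins[v]) >= 3
        fanins[new_name] = [u1, u2]
        fanins[v] = [u for u in fanins[v] if u not in (u1, u2)] + [new_name]
        return fanins

    def decompose_completely(fanins, v, fresh):
        """Repeat D_v with an arbitrary fanin pairing until v is 2-bounded."""
        while len(fanins[v]) > 2:
            u1, u2 = fanins[v][0], fanins[v][1]
            decompose_step(fanins, v, u1, u2, next(fresh))
        return fanins

    # Example: completely decompose v = AND(a, b, c, d).
    from itertools import count
    net = {'a': [], 'b': [], 'c': [], 'd': [], 'v': ['a', 'b', 'c', 'd']}
    decompose_completely(net, 'v', (f'w{i}' for i in count()))
    assert all(len(f) <= 2 for f in net.values())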
Theorem 1 Let N = (V, E) be a K-bounded network, let node v ∈ V be a simple gate, and let |input(v)| ≥ 3. Then S_K(N) ⊆ S_K(D_v(N)) for any structural gate decomposition algorithm D.
Figure 3. Decomposition of node v. (a) Before decomposition. (b) D_v(N) after one decomposition step D_v. (c) Complete decomposition of v.
Proof Let w be the node introduced by D_v. Let M = {L_1, L_2, ..., L_m} be an arbitrary mapping solution of N. We claim that M' = {D_v(L_1), D_v(L_2), ..., D_v(L_m)} is a mapping solution of D_v(N). First, N and D_v(N) have the same set of PIs and POs. From Figure 3, it should be clear that L_i and D_v(L_i) have the same set of inputs as well as the same output node. As a result, M' satisfies conditions (C1) to (C4) for being a mapping solution of D_v(N). The K-LUT that implements L_i also implements D_v(L_i). Hence the K-LUT network that implements M also implements M'. Therefore, S_K(N) ⊆ S_K(D_v(N)). However, a mapping solution M' of D_v(N) cannot be a mapping solution of N if w is the root node of some subnetwork in M' (due to w ∉ N). There exists at least one such mapping solution, namely D_v(N) itself. As a result, S_K(N) ⊂ S_K(D_v(N)). #
Corollary 1.1 Let N = (V, E) be a K-bounded network, let node v ∈ V be a simple gate, and let |input(v)| ≥ 3. Then MMD(D_v(N)) ≤ MMD(N) for any structural gate decomposition algorithm D.
Proof Since S_K(N) ⊆ S_K(D_v(N)) for any decomposition algorithm D, by definition, MMD(D_v(N)) ≤ MMD(N). #
Note that Theorem 1 and Corollary 1.1 hold as long as the decomposition step at v (structural, algebraic, or Boolean) can be carried out, regardless of whether v is a simple gate or not. However, algebraic or functional decomposition of a complex gate may not always be possible.
Since the set of all possible functionally equivalent K-LUT networks expands whenever a simple
gate is decomposed (Theorem 1), it is always beneficial to decompose simple-gate networks into 2-
bounded networks for LUT mapping algorithms to exploit the larger mapping solution space. The
experimental results reported in [CoDi94a] confirm this conclusion. In their experiments, the input
networks were first transformed into simple gate networks and then decomposed structurally into 5-bounded, 4-bounded, 3-bounded, or 2-bounded networks before LUT mapping. The resulting
mapping depth decreases monotonically along with the decrease of gate sizes in decomposition. An
interesting contrast comes from the results reported in [LeEW96] where networks were first collapsed
completely and then decomposed functionally into 5-bounded, 4-bounded, or 3-bounded networks for
LUT mapping. The best mapping solutions in terms of area and depth are mostly from the 5-bounded
networks. The two experiments show an important difference between structural and functional
decompositions: logic signals are preserved in structural decompositions while new gates are synthesized
during functional decompositions. In [LeEW96], the 5-bounded, 4-bounded, and 3-bounded networks
contain totally different sets of internal gates which are synthesized independently in three functional
decomposition processes. In fact, according to Corollary 1.1, if the 5-bounded networks in [LeEW96]
were further decomposed for LUT mapping, even smaller mapping depth could be obtained in their
experiments.
The following lemma specifies a condition where the structural gate decomposition will not cause
further mapping depth reduction.
Lemma 3 Let N = (V, E) be a K-bounded network, let node v ∈ V be a simple gate, and let |input(v)| ≥ 3. Assume that nodes u_1, u_2 ∈ input(v) satisfy MMD_N(u_1) = MMD_N(u_2) = p = max{MMD_N(u) | u ∈ input(v)} (see Figure 4(a)). Let D_v be the decomposition step which merges u_1, u_2 into an intermediate node w (see Figure 4(b)). Then MMD(N) = MMD(D_v(N)).
Proof Assume MMD_N(v) = p (see Figure 4(a)). Next, Lemma 1 assures that p ≤ MMD_{D_v(N)}(w) ≤ MMD_{D_v(N)}(v). Then, according to Corollary 1.1, we have MMD_{D_v(N)}(v) ≤ MMD_N(v) = p, so w and v both have mapping depth p in D_v(N) (see Figure 4(b)). Now we show MMD(D_v(N)) = MMD(N). Suppose this is not the case. Then MMD(D_v(N)) < MMD(N) and there exists a mapping solution M = {L_1, L_2, ..., L_m} of D_v(N) such that M has a depth smaller than MMD(N). Let x_i represent the output node of each K-feasible subnetwork L_i in M. First, if w were not the root node of any subnetwork in M, then M would be a mapping solution of N (by collapsing w into v) and MMD(D_v(N)) would not be smaller than MMD(N). Next, there must exist some x_i whose depth in the K-LUT network implementing M is smaller than MMD_N(x_i). We call node x_i a depth-reduced node. There are two cases for any depth-reduced node x_i. (i) w ∉ input(L_i). Then we can find another node x_j ∈ input(L_i) that is also a depth-reduced node; otherwise x_i won't be a depth-reduced node. We continue to trace depth-reduced nodes towards PIs. This tracing, however, won't reach PIs since PIs have a depth of 0. At a certain depth, the second case must happen. (ii) w ∈ input(L_i). Then (N̄_{x_i} − L_i, L_i) is a cut in the fanin cone N̄_{x_i} in D_v(N) (see Figure 4(c)). But we can move node v from L_i to N̄_{x_i} − L_i and obtain another K-feasible cut of height p in N̄_{x_i} (see Figure 4(d)), since w is fanout-free and w and v have the same mapping depth p. This implies MMD_{D_v(N)}(x_i) ≥ MMD_N(x_i). As a result, x_i is not a depth-reduced node. Contradiction. So we proved MMD(D_v(N)) = MMD(N). #
Lemma 4 Let N = (V, E) be a K-bounded network, let node v ∈ V be a simple gate, and let |input(v)| ≥ 3. If MMD_N(u) = p for every u ∈ input(v), then MMD(N) = MMD(D_v(N)) for any structural gate decomposition algorithm D.
Proof Since the intermediate node w has the same depth as node v, this lemma is true according to
Lemma 3. #
Figure 4. (a) Before D_v. (b) After D_v. (c) w ∈ input(L_i). (d) v is moved out of L_i.
2.3. Integrated versus Two-step Technology Mapping
Gate decomposition and LUT mapping can be performed in two different ways. In an integrated
mapping approach, the input network is decomposed and covered by LUTs simultaneously, while in a
two-step mapping approach, the input network is decomposed into a K-bounded network before LUT
mapping is performed. For example, Chortle-d is an integrated mapping approach while FlowMap fits
only into a two-step mapping approach. The separation of gate decomposition and LUT mapping is a
restriction in general since integrated approaches allow more informative gate decomposition and LUT
mapping decisions while two-step approaches do not have this advantage. It may appear that the
minimum mapping depth for all integrated mapping approaches will be smaller than the minimum
mapping depth for all two-step mapping approaches. However, we show that this is not the case for
structural gate decomposition.
Theorem 2 Given a K-bounded network N, if only structural gate decomposition is allowed, the
minimum mapping depth for all integrated mapping approaches equals the minimum mapping depth
for all two-step mapping approaches.
Proof Given an arbitrary K-bounded network N, assume some integrated approach results in the optimal depth MMD(N) in a mapping solution M_N. Then M_N is a mapping solution of some K-bounded network N' decomposed structurally from N. A depth-optimal mapper (e.g., FlowMap) can take N' as input and generate a mapping solution M_{N'}. Since M_{N'} is depth-optimal with respect to N', we have MMD(N') ≤ MMD(N). But M_N is depth-optimal with respect to N. As a result, MMD(N) ≤ MMD(N'). Therefore, MMD(N) = MMD(N'). #
Our mapping algorithms to be presented in Section 4 should be considered a hybrid approach. On one hand, depth minimization is achieved in structural gate decomposition (by DOGMA or DOGMA-m), which returns a network topology of the minimum mapping depth; on the other hand, the LUT mapping solution is computed by depth-optimal LUT mapping with area minimization as a second objective. As a result, the depth and the area are optimized separately in the two steps of technology mapping; in this sense, we consider our algorithm a hybrid approach.
2.4. The SGD/K and K-SGD/K Problems
In this paper, we study structural gate decomposition of K-bounded or K-unbounded simple-gate
networks into 2-bounded networks such that LUT mapping algorithms (e.g., FlowMap) can obtain the
smallest mapping depth. We formulate the following two problems.
Structural Gate Decomposition for K-LUT Mapping (SGD/K) Given a simple-gate K-unbounded network N_∞, decompose N_∞ into a 2-bounded network N_2 such that MMD(N_2) ≤ MMD(N'_2) for any other 2-bounded decomposed network N'_2 of N_∞.
Structural Gate Decomposition in K-bounded Networks for K-LUT Mapping (K-SGD/K) Given a simple-gate K-bounded network N_K, decompose N_K into a 2-bounded network N_2 such that MMD(N_2) ≤ MMD(N'_2) for any other 2-bounded decomposed network N'_2 of N_K.
3. Complexity of SGD/K and K-SGD/K Problems
We shall show the following results: (1) the SGD/K problem is NP-hard for K ≥ 3; and (2) the K-SGD/K problem is NP-hard for K ≥ 5. We shall present the construction for the NP-Complete reduction,
the lemmas and theorems, and the proofs for theorems. Proofs for lemmas can be found in the Appendix.
Our results are based on polynomial-time transformations from the 3SAT problem to the decision
version of the SGD/K and the K-SGD/K problems. The 3SAT problem, which is a well-known NP-Complete
problem [GaJo79], is defined as follows.
Problem: 3-Satisfiability (3SAT)
Instance: A set X = {x_1, x_2, ..., x_n} of Boolean variables and a collection C = {C_1, C_2, ..., C_m} of m clauses such that (i) each clause is the disjunction (OR) of 3 literals of the variables and (ii) each clause contains at most one of x_i and x̄_i for any variable x_i.
Question: Is there a truth assignment for the variables in X such that every clause in C is satisfied?
We shall transform an arbitrary instance of 3SAT to an instance of SGD/K in polynomial time. The idea is to relate the truth assignment of variables in 3SAT to the decision of gate decomposition in SGD/K. Since determining the truth assignment is difficult, the decision of gate decomposition is also difficult. We define the decision version of the SGD/K problem as follows.
Problem: Structural Gate Decomposition for K-LUT Mapping (SGD/K-D)
Instance: A constant K ≥ 3, a depth bound B, and a simple-gate K-unbounded network N_∞.
Question: Is there a way to structurally decompose N_∞ into a 2-bounded network N_2 such that the depth-optimal K-LUT mapping solution of N_2 has a depth no more than B?
Given an instance F of 3SAT with n variables x_1, x_2, ..., x_n and m clauses C_1, C_2, ..., C_m, we construct a K-unbounded network N(F) corresponding to the instance F as follows.
First, for each variable x_i, we construct a subnetwork N(x_i) which consists of the following nodes: (a) two output nodes denoted as x_i and x̄_i; (b) internal nodes, two of which are denoted as w_1^i and w_2^i; (c) PI nodes, two of which are denoted as PI_1^i and PI_2^i; and (d) further internal nodes, including a node s^i. The nodes are connected as shown in Figure 5. Each of the nodes w_1^i and w_2^i has K − 1 PI fanins. Node s^i has 4 fanins: w_1^i, w_2^i, PI_1^i and PI_2^i. Every other internal node has K PI fanins. Note that N(x_i) is well-defined for K ≥ 3 and is K-bounded for K ≥ 4.
Next, for each clause C_j with 3 literals l_1^j, l_2^j, l_3^j, we construct a subnetwork N(C_j) which consists of the following nodes: (a) one output node denoted as C_j; (b) three literal nodes denoted as l_1^j, l_2^j and l_3^j; (c) (2K − 5) internal nodes, each of them being the root of a complete 2-level K-ary tree with PI nodes as leaves; and (d) (K − 2) internal nodes r_1^j, ..., r_{K-2}^j, each of them being the root of a complete 3-level K-ary tree with PI nodes as leaves. The connections are shown in Figure 6(a). The output node C_j has all internal nodes as its fanins in N(C_j). Note that N(C_j) is well-defined for K ≥ 3. However, the output node C_j is not K-bounded.
Figure 5. Construction of network N(x_i).
Figure 6. (a) Construction of network N(C_j) for each clause C_j. (b) Exactly 2K nodes of depth 2 appear when MMD(l_k^j) = 2.
Finally, we connect the subnetworks N(C_j) with the subnetworks N(x_i) as follows to form the network N(F). Let l_k^j be a literal in clause C_j. If l_k^j = x_i, where x_i is a variable, we connect node x_i in N(x_i) as the single fanin of node l_k^j in N(C_j). Similarly, if l_k^j = x̄_i, we connect node x̄_i in N(x_i) as the single fanin of node l_k^j in N(C_j). Note that every literal node has exactly one fanin. This fanin node is called the variable node of the corresponding literal node. Network N(F) has m primary outputs: nodes C_1, ..., C_m. We illustrate the construction of N(F) by an example with three clauses; the resulting network N(F) is shown in Figure 7. The variable nodes of the three literals of clause C_1 serve as fanins to nodes l_1^1, l_2^1 and l_3^1 in N(C_1), respectively; for instance, the single fanin of l_1^1 is its variable node. We have the following lemma.
Lemma 5 The 3SAT instance F is satisfiable if and only if N(F) can be decomposed into a 2-bounded network D(N(F)) such that MMD(D(N(F))) does not exceed the depth bound B chosen in the transformation.
Theorem 3 The SGD/K problem is NP-hard for K ≥ 3.
Proof The transformation from an instance F of 3SAT to the network N(F) takes O(K^3(n + m)) time. If the SGD/K-D problem could be solved in polynomial time, we could choose the depth bound B accordingly and solve 3SAT in polynomial time. So the SGD/K-D problem is NP-hard. For a given decomposed network D(N(F)) of N(F), it takes polynomial time to compute its mapping depth d and verify whether d ≤ B (e.g., by FlowMap). So the SGD/K-D problem is NP-Complete. Since N(x_i) and N(C_j) are well-defined for K ≥ 3, the SGD/K-D problem is NP-Complete for K ≥ 3. Hence the SGD/K problem is NP-hard for K ≥ 3. #
Figure 7. The network N(F) for the example formula F.
We now show the complexity of the K-SGD/K problem. In this construction of the reduction, every node must be K-bounded (note that N(C_j) is not K-bounded in the previous construction). Given an instance F of 3SAT with n variables x_1, x_2, ..., x_n and m clauses C_1, C_2, ..., C_m, we construct a corresponding K-bounded network N_K(F) as follows. For each variable x_i, construct the subnetwork N(x_i) as before (shown in Figure 5). However, for each clause C_j, construct a subnetwork N_K(C_j) consisting of (a) one output node denoted C_j; (b) three literal nodes denoted as l_1^j, l_2^j and l_3^j; and (c) (K − 5) internal nodes, each of them being the root of a complete 2-level K-ary tree with PI nodes as leaves. The subnetwork N_K(C_j) is shown in Figure 8(a). Note that N_K(C_j) is well-defined and K-bounded for K ≥ 5. We connect the subnetworks N(x_i) and N_K(C_j) according to the formula F as before to obtain the network N_K(F). We have the following lemma.
Lemma 6 The 3SAT instance F is satisfiable if and only if N_K(F) can be decomposed into a 2-bounded network D(N_K(F)) such that MMD(D(N_K(F))) does not exceed the depth bound B chosen in the transformation.
Theorem 4 The K-SGD/K problem is NP-hard for K ≥ 5.
Proof The subnetwork N(x_i) is K-bounded for K ≥ 4. The subnetwork N_K(C_j) is K-bounded for K ≥ 5. Based on arguments similar to those in the proof of Theorem 3, it is easy to see that the K-SGD/K problem is NP-hard for K ≥ 5. #
4. Gate Decomposition Algorithms for Depth-Optimal Mapping
In this section, we combine the node packing technique in Chortle-d with the min-height K-feasible
cut technique in FlowMap in structural gate decomposition of simple-gate networks. Our objective is to
minimize the depth in the final mapping solution. We propose two algorithms. The first algorithm
decomposes logic gates independently as in most previous approaches; while the second algorithm
decomposes multiple gates simultaneously to exploit common fanins. The advantage of multi-gate
Figure 8. Construction of the K-bounded subnetwork N_K(C_j) for each clause C_j.
decomposition can be seen in one example. Nodes a, b, ..., f in Figure 9 are primary inputs. If nodes u and v in Figure 9(a) are decomposed independently, we might obtain the network in Figure 9(b). For K = 3, the best mapping solution in this case will be a 3-level network of 6 LUTs. However, if nodes u and v are decomposed together to exploit their common fanins c and d as shown in Figure 9(c), a 2-level network of 4 LUTs can be obtained. Both the depth and the area are reduced in the mapping solution.
Figure 9. Multi-gate decomposition. (a) Initial network. (b) Single gate decomposition result. (c) Multi-gate decomposition result. (Shaded nodes are LUT outputs.)
4.1. Single Gate Decomposition
We present our single gate decomposition algorithm DOGMA (Depth-Optimal Gate decomposition
for MApping) in this subsection. Given a simple-gate network N, DOGMA decomposes nodes in
topological order from PIs to POs. At each node v, DOGMA shall decompose and label v with the number l(v) = MMD_{N(v)}(v), where N(v) denotes the decomposed network. The set of fanins of label q in input(v), denoted S_q, is called a stratum of depth q. A K-feasible cut of height q − 1 exists for every node in S_q. A K-feasible cut of height q − 1 exists for a set B of nodes if such a cut exists for a node s created with input(s) = B. DOGMA groups input(v) into strata according to their labels, and processes each stratum in two steps.
(1) Starting from the stratum S q of the smallest depth, DOGMA partitions S q into a minimum number
of subsets such that there exists a K-feasible cut of height q - 1 for each subset of nodes. The
process is similar to packing objects into bins. Each bin has a size of K. The size of a node (also
called an object) is the size of its min-cut of height q - 1. A set of nodes can be packed into one bin
if their overall size is no larger than K. Such a bin is called a min-height K-feasible bin which
corresponds to a partitioned subset of S q . Note that the overall cut size for nodes in a set could be
smaller than the sum of their individual cut sizes.
(2) After partitioning S_q into subsets (or min-height K-feasible bins), an intermediate node (also called a bin node) w_i is created for each bin B_i with input(w_i) = B_i and is labeled l(w_i) = q. A buffer node b_i is then created for each w_i with input(b_i) = {w_i} and a label l(b_i) = q + 1. The buffer nodes are put into the set S_{q+1}. Note that if some bin B_i contains more than 2 nodes, bin node w_i needs to be further decomposed. However, according to Lemma 4, no matter how w_i is decomposed, the minimum mapping depth of the network does not change. DOGMA arbitrarily decomposes w_i into an unbalanced tree.
DOGMA repeats steps (1) and (2) for stratum S q +1 and so on until all strata have been processed.
The last bin node corresponds to node v. Note that buffer nodes are introduced only for the packing
process and will be removed when the decomposition is complete.
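The sketch below captures one round of steps (1)-(2) for a single stratum. The bin-packing routine is passed in as a parameter, since DOGMA realizes it with the MC-FFD heuristic and the max-flow feasibility test described next; the helper names and the simple next-fit packer used as a placeholder are ours, not the implementation used in the experiments.

    def process_stratum(stratum, q, fanins, labels, pack_bins, k, fresh):
        """One round of DOGMA steps (1)-(2) on the stratum S_q of the gate
        being decomposed.  pack_bins(nodes, q, k) must partition the nodes
        into min-height K-feasible bins (abstract here).  Returns the buffer
        nodes to be inserted into S_{q+1}; buffers are removed at the end."""
        next_stratum = []
        for members in pack_bins(list(stratum), q, k):
            w = next(fresh)                       # bin node, labelled q
            fanins[w], labels[w] = list(members), q
            b = next(fresh)                       # buffer node, labelled q + 1
            fanins[b], labels[b] = [w], q + 1
            next_stratum.append(b)
        return next_stratum

    def next_fit_pack(nodes, q, k, feasible=lambda group, q, k: len(group) <= k):
        """Placeholder packer: greedily fills the current bin (next-fit style)
        with a pluggable feasibility test; MC-FFD below is the real heuristic."""
        bins, current = [], []
        for u in nodes:
            if feasible(current + [u], q, k):
                current.append(u)
            else:
                bins.append(current)
                current = [u]
        if current:
            bins.append(current)
        return bins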
To determine if there exists a K-feasible cut of height q − 1 for a bin B_i ⊆ S_q of nodes, we compute a max-flow in the flow network constructed as follows [CoDi94a]. (i) Create a sink node t with input(t) = B_i. (ii) Create a source node s that fans out to all PIs in N_t. (iii) Assign every edge in N_t an infinite flow capacity. (iv) Replace every node u ∈ N_t, except s and t, by two nodes u_1 and u_2 bridged by an edge (u_1, u_2); the bridging edge is assigned an infinite flow capacity if l(u) = q, and a unit flow capacity otherwise. (v) Finally, compute a max-flow in the constructed flow network. The amount of flow f corresponds to the min-cut size in the flow network. If f ≤ K, there exists a K-feasible cut of height q − 1 for the bin B_i of nodes.
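This construction can be prototyped with a general-purpose max-flow routine; the sketch below uses networkx and our own helper name cut_feasible. A capacity of K + 1 stands in for "infinite", which is sufficient because a cut of value at most K can never use such an edge; a real implementation would restrict itself to the cone N_t (or a partial flow network, as discussed below) rather than rebuild the graph per query.

    import networkx as nx

    def cut_feasible(bin_nodes, fanins, labels, q, k):
        """Test whether a K-feasible cut of height q-1 exists for the bin:
        build the node-split flow network over the cone feeding the bin and
        compare the max-flow value with K."""
        big = k + 1                                   # plays the role of "infinite"
        g = nx.DiGraph()
        cone = set()
        def collect(v):
            if v in cone:
                return
            cone.add(v)
            for u in fanins.get(v, []):
                collect(u)
        for v in bin_nodes:
            collect(v)
        for v in cone:
            # split v into v_in -> v_out; the bridge is uncuttable if l(v) = q
            cap = big if labels.get(v, 0) >= q else 1
            g.add_edge(('in', v), ('out', v), capacity=cap)
            if not fanins.get(v):                     # primary input: fed by the source
                g.add_edge('s', ('in', v), capacity=big)
            for u in fanins.get(v, []):
                g.add_edge(('out', u), ('in', v), capacity=big)
        for v in bin_nodes:
            g.add_edge(('out', v), 't', capacity=big)
        flow, _ = nx.maximum_flow(g, 's', 't')
        return flow <= k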
We illustrate DOGMA for K = 3. The output node v in Figure 10(a) is under decomposition. Among the five fanins of v, nodes b, c and d have label 2 and nodes a and e have label 3. As a result, S_2 = {b, c, d} and S_3 = {a, e}. According to DOGMA, b and c will be packed into one bin since a K-feasible cut of height 1 exists for them, and d into another bin, for a total of two (which is the minimum) min-height K-feasible bins. Then bin nodes f and g with labels l(f) = l(g) = 2 and buffer nodes h and i with labels l(h) = l(i) = 3 are created for the two bins, respectively (see Figure 10(b)). DOGMA proceeds to the stratum of depth 3. Two K-feasible cuts of height 2 are found for {a, h} and {i, e}, respectively. Again, bin nodes j and k with labels l(j) = l(k) = 3 and buffer nodes m and n with labels l(m) = l(n) = 4 are created for the two bins, respectively. Nodes m and n are then packed into a bin which corresponds to v (see Figure 10(c)). Finally, nodes g, h, i, m and n are removed and node v is completely decomposed with a label l(v) = 4.
The following problem needs to be solved in DOGMA.
Figure 10. Decomposition of gate v by the DOGMA algorithm. (a) Before decomposition. (b) {b, c} and {d} are packed into f and g. (c) {a, h} and {i, e} are packed into j and k.
Min-Height K-Feasible Bin Packing Problem Given a stratum S q of depth q, pack nodes in S q
into a minimum number of min-height K-feasible bins.
In our study, we developed three heuristics to solve the problem. The first-fit-decreasing (FFD) and
best-fit-decreasing (BFD) are two heuristics for the bin packing problem [HoSa78]. The FFD heuristic
sorts the objects into a list of decreasing sizes, indexes the bins 1, 2, 3, ..., and then repeatedly removes the first object from the list and puts it into the lowest-indexed bin that can accommodate it. The initial conditions on the
bins and objects in the BFD heuristic are the same as in the FFD heuristic. But BFD puts the object into
the bin that leaves the smallest empty space. For the min-height K-feasible bin packing problem, we
proposed two min-cut based heuristics, MC-FFD and MC-BFD, which are analogous to FFD and BFD
except that every object is a node whose size is defined to be the size of its min-cut of height q - 1. A set
of nodes can be packed into a K-feasible bin as long as their combined cut size is no larger than K. The
third heuristic is called maximal-sharing-decreasing (MC-MSD) which encourages sharing during
packing, i.e., the size of the min-cut for the packed nodes is smaller than the sum of their individual min-cut
sizes. The packing that produces the maximum sharing is considered the best-fit packing when MC-
MSD calls MC-BFD for a packing result.
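MC-FFD itself is a small variation of first-fit decreasing; the sketch below takes the per-node cut size and the group feasibility test as parameters (both are max-flow computations in DOGMA) and is our own illustration, not the code used in the experiments.

    def mc_ffd(nodes, cut_size, group_ok):
        """MC-FFD sketch: sort nodes by the size of their individual min-cut of
        height q-1 (largest first) and place each node into the first existing
        bin the feasibility oracle accepts, opening a new bin otherwise."""
        bins = []
        for v in sorted(nodes, key=cut_size, reverse=True):
            for b in bins:
                if group_ok(b + [v]):
                    b.append(v)
                    break
            else:
                bins.append([v])
        return bins

    # Toy use: sizes stand in for min-cut sizes and a bin is accepted while the
    # combined size (here just the sum, ignoring sharing) stays within K = 4.
    sizes = {'a': 3, 'b': 2, 'c': 2, 'd': 1}
    bins = mc_ffd(sizes, sizes.get, lambda g: sum(sizes[x] for x in g) <= 4)
    # bins == [['a', 'd'], ['b', 'c']]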
Experimental results (Table 1) show very little difference between the three heuristics on the
mapping results (DOGMA followed by CutMap) for MCNC benchmarks. It indicates that the same
number of bins were obtained by the three heuristics in most cases. This could be due to the small bin
size in the experiment. We choose MC-FFD for its efficiency. The FFD heuristic is also used in
Chortle-d for packing nodes into bins. However, MC-FFD packs nodes according to the size of their
min-height K-feasible cut for better performance. With reconvergent fanouts in general networks, one
can not decide locally whether a set of nodes can be packed into one bin or not. For example, it is not
obvious that nodes e and i in Figure 10(b) can be packed into one bin. The MC-FFD heuristic employs
max-flow computation and can decide the packing feasibility correctly.
The time complexity of DOGMA is computed as follows. For every node v in the input network, DOGMA creates |input(v)| − 2 intermediate nodes, so in total at most Σ_v (|input(v)| − 2) ≤ |E| new nodes are created. The min-height K-feasible cut computation has a time complexity of O(K · |E|) [CoDi94a], where K is the LUT input size, and it is carried out O(|input(v)|^2) times in the worst case at each node v in the MC-FFD heuristic. Let d_max be the maximal fanin size for nodes in N. Then the time complexity of DOGMA is O(K · d_max · |E|^2). We can reduce the time complexity of the min-height cut computation to O(K · |E_p|) by constructing partial flow networks only to a certain depth, where E_p is the edge set of the partial flow network. Let E_p_max represent the edge set of the largest partial flow network constructed during decomposition. Then the time complexity of DOGMA is reduced to O(K · d_max · |E| · |E_p_max|).
               MC-FFD         MC-BFD         MC-MSD
Circuits       D      A       D      A       D      A
count          5
rot            7    267       7    267       7    267
too_large      5
C6288         22    724      22    724      22    724
des            5    965       5    965       5    965
total        171   7674     171   7674     171   7677
Table 1. Comparison of packing heuristics MC-FFD, MC-BFD, and MC-MSD in DOGMA (D = mapping depth, A = mapping area).
4.2. Multiple Gate Decomposition
We present our multiple gate decomposition algorithm, named DOGMA-m, and illustrate the
procedure on the network shown in Figure 11(a) for K = 3. DOGMA-m is outlined in Figure 12.
We call the stratum of each node a local stratum. The union of all local strata of depth q is called
the global stratum of depth q. For each depth q, a node v is under decomposition if |input(v)| > 2 (i.e., not yet completely decomposed) and input(v) intersects with the global stratum of depth q. Starting from the depth q = 1, the nodes of the same gate type that are under decomposition are decomposed simultaneously. In Figure 11(a), nodes a, b, ..., h all have a label of 1. Nodes x, y, and z are under decomposition for q = 1. The local stratum of depth 1 is {a, b, c} for node x, {b, c, d, e, f} for node y, and {e, f, g, h} for node z, respectively. The global stratum of depth 1 is {a, b, c, d, e, f, g, h}.
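For illustration, the helper below (our own code and naming, not part of DOGMA-m) collects, for a given depth q, the nodes still under decomposition together with their local strata and the global stratum; on the Figure 11(a) network it reproduces the sets listed above.

    def strata_at_depth(fanins, labels, q):
        """Collect, for depth q, the nodes under decomposition, their local
        strata and the global stratum (illustrative helper)."""
        under, local, global_stratum = [], {}, set()
        for v, ins in fanins.items():
            if len(ins) <= 2:
                continue                              # already 2-bounded
            s = [u for u in ins if labels.get(u) == q]
            if s:
                under.append(v)
                local[v] = s
                global_stratum.update(s)
        return under, local, global_stratum

    # Figure 11(a): x, y, z are 2-unbounded; every PI buffer a..h has label 1.
    fan = {'x': ['a', 'b', 'c'], 'y': ['b', 'c', 'd', 'e', 'f'],
           'z': ['e', 'f', 'g', 'h'], 'v': ['x', 'y', 'z']}
    lab = {c: 1 for c in 'abcdefgh'}
    under, local, glob = strata_at_depth(fan, lab, 1)
    # under == ['x', 'y', 'z'] and glob == {'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'}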
Figure 11. Multiple gate decomposition. (a) Initial network. (b) After one packing iteration. (c) After two packing iterations. (d) Completely decomposed network.
In initialization, buffers are created for PIs to supply inputs to the rest of the network. PIs are labeled 0 and buffers are labeled 1. In Figure 11(a), nodes a, b, ..., h are PI buffers. Gray regions represent the global strata of depth 1 and 2 in Figure 11(a)-(c) and (d), respectively. The gate decomposition proceeds as follows.
(1) For each depth q and for each gate type f, the nodes under decomposition are collected into a set G_f^q. Then the global stratum of depth q, denoted as S_q, is computed as the union of the local strata of depth q for all nodes in G_f^q. In Figure 11(a), G_f^1 = {x, y, z} and S_1 = {a, b, c, d, e, f, g, h}. Based on G_f^q and S_q, we formulate the Global Stratum Bin Packing (GSBP) problem (to be formally defined later). By solving the GSBP problem, we achieve (i) for each node in G_f^q, its local stratum of depth q is packed into min-height K-feasible bins, and (ii) there is a minimum number of min-height K-feasible bins in total. The second objective is achieved by packing common fanins for the nodes in G_f^q. Intermediate nodes (also called bin nodes) are created for bins. In Figure 11(b), nodes b and c, e and f, g and h are packed into bin nodes i, j and k, respectively.
(2) It is possible that some nodes in G_f^q have been decomposed completely (e.g., nodes x and z in Figure 11(b)) while the local strata of other nodes can be further packed (e.g., node y in Figure 11(b)). Both G_f^q and S_q are updated and a new instance of the GSBP problem for the same q value is formulated and solved. The process iterates until the global stratum of depth q has been minimally packed into bins (as a result, the network does not change). In Figure 11(b), we have l(x) = l(z) = 2, G_f^1 = {v, y}, and S_1 = {i, d, j, x}. By solving the GSBP problem for the updated G_f^1 and S_1, nodes d and i are packed into a bin node m. Node y is now completely decomposed with a label l(y) = 2. The process iterates with updated G_f^1 = {v} and S_1 = {x}. But no further packing is possible for depth 1 (Figure 11(c)).
(3) Buffer nodes are created and labeled q + 1 for every fanin in the global stratum S_q. The decomposition process iterates steps (1) and (2) until the network is 2-bounded. In Figure 11(d), a buffer node n is created for node x, nodes y and z are then packed into a bin, and the decomposition of node v is completed.
Two points are worth mentioning. First, in DOGMA, each node is decomposed only after all its
fanins have been decomposed and labeled. In DOGMA-m, however, nodes could undergo decomposition
even though some of their fanins have not been labeled. For example, node v in Figure 11(b) is under
decomposition (v ∈ G_f^1) while its fanin y is not labeled yet. Second, for each depth q and gate type f,
multiple instances of the GSBP problem might be solved in order to pack local strata into a minimal
number of bins. For example, two instances of the GSBP problem are solved for q = 1 before the local stratum of node y is minimally packed (from Figure 11(a) to (c)). In our experiments, we found that solving three instances of the GSBP problem is sufficient for each q value.
The Global Stratum Bin Packing (GSBP) Problem is formally defined as follows.
Global Stratum Bin Packing (GSBP) Problem Given a set G_f^q of nodes of gate type f under decomposition and a global stratum S_q of depth q that contains fanins of nodes in G_f^q, pack the fanins in S_q into a set of bins such that (i) for each node in G_f^q, its local stratum of depth q is packed into min-height K-feasible bins, and (ii) there is a minimum number of min-height K-feasible bins in total.
To solve the GSBP problem, we build a matrix M whose rows correspond to nodes in G_f^q and whose columns correspond to fanins in S_q; the entry for node v and fanin u is 1 if u ∈ input(v) and 0 if not. A rectangle is a subset of rows and columns, denoted by a pair (R, C) indicating the row and column subsets, where all entries are 1. C corresponds to a bin of fanins and R corresponds to a set of nodes that share the fanins in C. A solution of the GSBP problem is a rectangle cover for M subject to the constraint that a K-feasible cut of height q − 1 exists for the fanins in each column set C. This matrix representation is similar to the cube-literal matrix used for solving the cube extraction problem
procedure DOGMA-m ( N, K )
/* N is the input network and K is the LUT input size. */
  Initialization: create a buffer for each PI; label PIs 0 and buffers 1; q := 1
  repeat until N is 2-bounded
    inc_q := false
    while not inc_q do
      for each gate function type f do
        G_f^q := { u | u is under decomposition and has gate type f }
        S_q := union of the local strata of depth q of the nodes in G_f^q
        solve the GSBP( G_f^q, S_q, K ) problem
        for each min-height K-feasible bin B_i created (if any) in GSBP do
          create bin node w_i with input(w_i) = B_i and label(w_i) = q
          add w_i to N and update the fanins of the nodes in G_f^q
      if no new bin node was created then
        for each node u_i in S_q do
          create buffer node b_i with input(b_i) = { u_i } and label(b_i) = q + 1
        inc_q := true; q := q + 1
  return N
Figure 12. Multiple gate decomposition algorithm (DOGMA-m).
[Ru89, De94]. However, the algorithms for cube extraction can not be applied directly because the C in
every rectangle (R,C) must satisfy the K-feasible cut constraint.
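The matrix view can be prototyped in a few lines; the helpers below are our own illustration. Note that is_rectangle only checks the all-ones condition; the additional K-feasible-cut constraint on the column set C still has to be verified separately, e.g. with the max-flow test of Section 4.1.

    def build_matrix(G, S, fanins):
        """Binary matrix M of the GSBP formulation: entry is 1 iff fanin u in S
        feeds node v in G (illustrative helper, names are ours)."""
        return {v: {u: int(u in fanins[v]) for u in S} for v in G}

    def is_rectangle(M, rows, cols):
        """A rectangle (R, C): every chosen entry is 1."""
        return all(M[r][c] for r in rows for c in cols)

    # Figure 11(a): rows x, y, z over fanins a..h.
    fan = {'x': ['a', 'b', 'c'], 'y': ['b', 'c', 'd', 'e', 'f'],
           'z': ['e', 'f', 'g', 'h']}
    M = build_matrix(['x', 'y', 'z'], 'abcdefgh', fan)
    assert is_rectangle(M, ['x', 'y'], ['b', 'c'])      # rectangle ({x,y}, {b,c})
    assert not is_rectangle(M, ['x', 'y', 'z'], ['b'])  # z does not read b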
We use the MC-FFD packing heuristic to compute a rectangle cover for the GSBP problem as
follows.
Figure 13. FFD bin packing heuristic for the GSBP problem. (a) The initial matrix M. (b) The matrix M after the first run of bin packing.
First, compute the fanout factor f_j and the cut size s_j of the min-cut of height q − 1 for every fanin u_j ∈ S_q. The weight of each fanin is f_j · s_j. Then we sort the fanins according to their weights
and follow the MC-FFD bin packing heuristic to pack fanins into bins (starting from the fanin with the
largest weight). Our strategy is to group fanins of large cut sizes for obtaining a minimum number of bins
and to group fanins of large fanout sizes for exploiting common fanins. A set of fanins can be packed into one bin C if (i) a K-feasible cut of height q − 1 exists for the fanins in C, and (ii) the largest rectangle (R, C) satisfies |R| ≥ r_min (i.e., at least r_min nodes in G_f^q share these fanins), where r_min is a user-specified parameter. By performing the MC-FFD packing heuristic, we obtain a set of rectangles. Each rectangle (R, C) that satisfies |C| ≥ c_min (another user-specified parameter) will be saved and covered with 0's in M. The MC-FFD packing procedure is repeated until M contains only 0's. A rectangle cover for M is then obtained, and the set C in each rectangle corresponds to a bin. In our implementation, we set both r_min and c_min to values larger than 1 in the first pass of the MC-FFD packing procedure, and decrease both values to 1 in
subsequent iterations. The decrease of values guarantees the termination of our procedure.
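One weighted packing pass can be sketched as follows. This is our own code: cut_size and group_ok stand for the max-flow computations, the default thresholds are ours, and the r_min/c_min handling follows the description above. A pass returns the rectangles found and zeroes them out of M; calling it repeatedly, eventually with r_min = c_min = 1, completes the rectangle cover.

    def gsbp_cover_pass(M, cut_size, group_ok, r_min=2, c_min=2):
        """One pass of weighted MC-FFD packing over the GSBP matrix M
        (rows: nodes, columns: fanins).  Fanins are sorted by fanout * cut size
        and greedily grouped while (i) the group passes the K-feasible-cut
        oracle and (ii) at least r_min rows share the whole group.  Rectangles
        whose column set reaches c_min are returned and covered with 0's."""
        fanout = {u: sum(M[v][u] for v in M) for u in next(iter(M.values()))}
        order = sorted((u for u in fanout if fanout[u]),
                       key=lambda u: fanout[u] * cut_size(u), reverse=True)
        bins = []
        for u in order:
            for cols in bins:
                rows = [v for v in M if all(M[v][c] for c in cols + [u])]
                if len(rows) >= r_min and group_ok(cols + [u]):
                    cols.append(u)
                    break
            else:
                bins.append([u])
        rectangles = []
        for cols in bins:
            if len(cols) < c_min:
                continue
            rows = [v for v in M if all(M[v][c] for c in cols)]
            rectangles.append((rows, cols))
            for v in rows:
                for c in cols:
                    M[v][c] = 0                      # cover the rectangle with 0's
        return rectangles

    # e.g. gsbp_cover_pass(M, lambda u: 1, lambda g: len(g) <= 3) on the matrix
    # built above returns the rectangles ({x,y},{b,c}) and ({y,z},{e,f}).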
We demonstrate the MC-FFD packing heuristic for solving the GSBP problem on the network in Figure 11(a). The initial matrix M is shown in Figure 13(a). The rows correspond to nodes in G_f^1 = {x, y, z} and the columns correspond to fanins in S_1 = {a, b, c, d, e, f, g, h}. The weight of each fanin is its fanout size (i.e., the number of 1's in its column), since every fanin is a PI buffer whose cut size is 1. The fanins are sorted into the order b, c, e, f, a, d, g, h according to their weights. Nodes b and c are packed into the first bin, which corresponds to the rectangle (R_1, C_1) = ({x, y}, {b, c}). Although there is a 3-feasible cut of height 0 for nodes b, c, e, they can't be packed into one bin because the rectangle for them has |R| = |{y}| < r_min = 2. As a result, node e is put into a separate bin and packed with node f, which corresponds to the rectangle (R_2, C_2) = ({y, z}, {e, f}). Then the two rectangles are covered with 0's (Figure 13(b)). We reset r_min = c_min = 1 and perform another run of the MC-FFD packing heuristic. Three bins are obtained but only one bin contains two fanins. In total, three bin nodes are created. The network in Figure 11(a) is now decomposed into the network in Figure 11(b).
                original circuits                      rugged circuits
Circuits     gate size  fanin=3  fanin>3   time(s)   gate size  fanin=3  fanin>3
z4ml
count           111       14%       0%       1.4         79       22%      20%
9symml          153       34%       8%      20.4         96       28%      35%
cordic           73       11%       8%       1.3         36       22%      28%
i3               70        0%       6%       2.2         78        0%      26%
alu2            210       17%      53%      29.9        172       19%      16%
alu4            416       13%      47%      22.0        374       16%       9%
rot             494       21%      39%      17.1        392       21%      18%
dalu           1939       10%       4%       3.0        595       42%       7%
too_large      1038        0%     100%       7.0        137       21%      35%
des
total         18824       16%      26%     317.4      13007       16%      12%
Table 2. Circuit sizes and fanin distributions of the two benchmark sets before and after optimization using the rugged script.
5. Experimental Results
We implemented DOGMA and DOGMA-m in C language and incorporated them into the RASP
logic synthesis system for FPGAs [CoPD96]. We prepared two sets of benchmarks in our experiments.
The first set C original consists of 24 original multi-level MCNC benchmarks which all contain a large
percentage of 2-unbounded gates (i.e., 3 or more inputs). We performed the rugged script in SIS
[SeSL92] for technology independent optimization and obtained the second set C rugged of benchmarks.
Both sets of benchmarks were transformed into simple-gate networks using AND-OR decomposition.
Table
2 shows the circuit sizes and fanin distributions of the two sets of simple-gate networks. The
benchmark set C original contains 18,824 simple gates with 42% of them being 2-unbounded, while the
benchmark set C rugged contains 13,007 simple gates with 28% of them being 2-unbounded. Clearly, both
circuit size and fanin size were reduced by performing the rugged script. The total runtime is less than 6
minutes.
We compared DOGMA and DOGMA-m with three structural gate decomposition algorithms, as
well as DOGMA-m with algebraic and Boolean decomposition approaches in our experiments. The three
structural gate decomposition algorithms used for comparison were the tech_decomp algorithm
[SeSL92], the dmig algorithm [Wa89, ChCD92], and our implementation of the Chortle-d algorithm
[FrRV91b]. After gate decomposition by each of these algorithms, CutMap [CoHw95] was employed to
obtain depth-optimal mapping solutions. For a comparison across structural, algebraic, and Boolean gate
decompositions, we employed DOGMA-m, speed_up in SIS [SeSL92] and the TOS package
[EcLL96] to perform decompositions, respectively. Again, CutMap was employed to perform LUT
mapping except for TOS since it produced LUT networks directly. The objective of gate decomposition
and LUT mapping in our experiments was to minimize mapping depth. CutMap also minimizes
mapping area as the second objective. All experiments were performed on a Sun ULTRA2 workstation
with 256M of memory.
We first demonstrate the impact of further gate decomposition on depth and area in technology
mapping. According to Theorem 1, the mapping solution space expands regardless of the gate
decomposition algorithm used. We use tech_decomp to decompose benchmarks in C rugged into 5-
bounded networks, and subsequently into 2-bounded networks, followed by LUT mapping to obtain
mapping solutions. The sizes of the 5-bounded networks increase substantially compared to the 5-unbounded networks in C_rugged. However, the percentages of 2-unbounded gates are about the same. We
employed CutMap [CoHw95] and DFMap [CoDi94b] to produce depth-optimal and duplication free
              5-bounded circuits             CutMap                      DFMap
Circuits    gate size  fanin=3  fanin>3   5-bnd D  A   2-bnd D  A    5-bnd D  A   2-bnd D  A
count           79       22       20        5    31      5    31
9symml         131       24       28        7    90      6   105
alu4           434       21        9
rot
too_large      219
des
total        16007
ratio                                      1.00  1.00  0.84  1.01    1.00  1.00  1.02  0.84
Table 3. Comparison of mapping results for 5-bounded and 2-bounded networks.
area-optimal mapping solutions, respectively. In Table 3, we see that both the optimal mapping depth (by
CutMap) and the optimal duplication-free mapping area (by DFMap) are reduced by 16% when the 5-
bounded networks are further decomposed into 2-bounded networks. These results confirm the results
stated in Theorem 1.
            Structural gate decomposition algorithms
            tech_decomp         dmig               chortle-d           DOGMA                DOGMA-m
Circuits    D    A      T(s)    D    A      T(s)   D    A      T(s)    D    A      T(s)     D    A       T(s)
z4ml        4
count       5
9symml      5
dalu        9    507     3.7    9    513     4.3   9    507    12.0    9    506    175.8    9    497      19.9
too_large   7   4867    26.3    7   4700   297.4   6   3913   137.6    6   3867    456.9    6   2124    1680.7
C6288      22    728     4.5   22    728     5.1  22    728    85.9   22    728    854.9   22    728      42.6
des         5
total
ratio      1.11  1.50   0.03   1.05  1.48   0.12  1.06  1.41   0.12   1.01  1.39    0.69   1.00  1.00     1.00
Table 4. Comparison of results using tech_decomp, dmig, chortle-d, DOGMA and DOGMA-m for gate decomposition followed by CutMap for circuits in C_original.
Next, we compared five structural gate decomposition algorithms (tech_decomp, dmig, Chortle-
d, DOGMA, and DOGMA-m) on benchmarks in C original and C rugged using CutMap as the mapping
engine. The depth and area of mapping solutions as well as the runtimes of the compared algorithms (not
including CutMap time) for the two sets of benchmarks are presented in Tables 4 and 5, respectively.
Compared to DOGMA-m, we see that the other four algorithms result in up to 11% larger mapping
depth and up to 50% larger mapping area on the benchmark set C original , and up to 16% larger mapping
depth and up to 10% larger mapping area on the benchmark set C rugged . The differences in mapping
depth obtained by DOGMA-m and dmig or DOGMA are marginal, while the differences in mapping
area are more significant. Regarding the runtime, DOGMA-m runtime is comparable to DOGMA
runtime, but is 8 to 33 times slower than the runtimes of other three algorithms. However, DOGMA-m
runtime is in the same order of magnitude as the time spent in performing the rugged script or CutMap.
            Structural gate decomposition algorithms
            tech_decomp        dmig              chortle-d           DOGMA                DOGMA-m
Circuits    D    A     T(s)   D    A     T(s)   D    A     T(s)     D    A       T(s)    D    A      T(s)
count       5
cordic      5
rot         9    270    1.0   7    259    1.1   8    265    2.5     7    267      6.2    7    261     5.4
too_large   6    162    0.4   5    161    0.6   5    185    1.1     5
C6288      22    727    4.3  22    690    4.9  22    690   19.4    22    724    769.2   22    723   192.0
des         6   1087    4.5   5   1058    5.3   6   1127   14.4     5    965     49.0    5    969   208.5
total     196   7857   32.0 176   7773   42.8 182   7836  103.4   171   7689   1130.9  169   7144   919.0
ratio     1.16  1.10   0.03 1.04  1.09
Table 5. Comparison of results using tech_decomp, dmig, chortle-d, DOGMA and DOGMA-m for gate decomposition followed by CutMap for circuits in C_rugged.
Comparing Tables 4 and 5, we see that the mapping area for C_rugged is 30% to 50% smaller than that for C_original, while the mapping depth for C_rugged is 1% to 7% larger than that for C_original. It shows that the rugged script, which performs logic optimization based on algebraic divisions, is very effective for area minimization but not as effective for depth minimization. A benefit resulting from the area reduction is the significant decrease of runtime for all decomposition algorithms. For benchmarks in C_rugged, DOGMA-m results in 10% smaller area compared to the other four algorithms under comparison. It shows that DOGMA-m can exploit common fanins for area minimization in addition to the rugged script.
Finally, we employed DOGMA-m, speed_up and TOS for a comparison across structural,
algebraic, and Boolean gate decomposition approaches. We configured TOS for delay-oriented synthesis
in the medium-effort mode performing both single-output (TOS-s) and multi-output (TOS-m) functional
decompositions. The input circuits to TOS were prepared as follows. First, we tried to collapse each
benchmark in C rugged into a flat logic network within 30 minutes of CPU time. If this could not be done,
we used the reduce_depth -depth d command provided in TOS to collapse benchmarks into networks
of the smallest depth d where d ≥ 2. We allocated a fixed amount of CPU time for each depth d, starting from d = 2. Among all benchmarks after collapsing, rot and C880 have a depth of 2, C432,
C2670, C5315, and C7552 have a depth of 3, C3540 and i10 have a depth of 4, and C6288 has a
depth of 6. The remaining benchmarks are completely collapsed.
Table 6 collects the mapping results obtained by DOGMA-m + CutMap, speed_up + CutMap, TOS-s, and TOS-m. Subtotal1, subtotal2, and subtotal3 are totals of the mapping results for the benchmarks on which speed_up, TOS-s, and TOS-m succeed, respectively, and the ratios measure the relative performances of these approaches with respect to DOGMA-m + CutMap. The time T(s) reports the computation time in seconds. In Table 6, we see that DOGMA-m + CutMap is able to map all benchmarks in 23 minutes, while speed_up fails to map some benchmarks after 2 hours. Compared to DOGMA-m + CutMap, the speed_up approach takes more than 5 hours (98% consumed
            Technology mapping (gate decomposition and LUT mapping) algorithms
            DOGMA-m + CutMap     speed_up + CutMap    TOS-s (single-output)   TOS-m (multiple-output)
Circuits    D     A      T(s)    D     A      T(s)    D     A       T(s)      D     A       T(s)
count       5     31      1.0    3     52     12.2    2     42       8.4      3     38       24.7
rot         7    261     13.2    6    251     71.1    7    404     117.5      8    291      612.0
too_large   5    149      7.5    5    112     24.3    9    324     465.0      8    168     1395.4
C6288      22    723    213.0
des         5    969    263.9    -                    4    704    1586.8      -
subtotal2 139   6266   1103.3                       139  11294   15792.6
ratio     1.00  1.00     1.00   0.87  0.94    17.60  1.00  1.80     14.31    1.01  1.11      29.83
Table 6. Mapping results obtained with structural (DOGMA-m), algebraic (speed_up), and Boolean (TOS) gate decomposition approaches on C_rugged.
by speed_up) to map 23 benchmarks (not including des) but obtains significantly better results: 13%
smaller mapping depth and 6% smaller mapping area. The results on C432 show the largest contrast
between the performance of speed_up and the efficiency of DOGMA-m: speed_up results in a
mapping depth of 8 in more than 2 hours while DOGMA-m results in a mapping depth of 11 in 6.6
seconds. TOS-s and TOS-m do not return mapping solutions within the allocated CPU times for 3 and 8 benchmarks, respectively. Compared to the other two approaches, TOS-s obtains smaller mapping depth on count, 9symml, alu2, alu4 and t481, and TOS-m obtains smaller mapping area
on 9symml, cordic, x1, alu2 and t481. It is worth noting that TOS is extremely successful
for 9symml and t481. The results indicate that functional decomposition based mapping approaches
require longer computation time to obtain good results, especially on circuits of medium to large sizes.
Overall, from these experiments, we conclude that DOGMA-m can obtain the best mapping results among the five structural gate decomposition algorithms under comparison, and is much more efficient in terms of runtime (over 17 and 14 times faster, respectively) than the algebraic decomposition algorithm speed_up and the functional decomposition approach TOS. However, speed_up obtains the best results among the compared approaches.
6. Conclusion
In this paper, we present an in-depth study of structural gate decomposition for depth-optimal
technology mapping in LUT-based FPGA designs. We show that any structural gate decomposition in K-bounded networks can only result in a smaller or equal depth in K-LUT mapping solutions, regardless of the decomposition algorithm used. Therefore, it is always beneficial to decompose circuits into 2-bounded networks for depth minimization when structural decompositions are applied. We prove that the
structural gate decomposition problem in depth-optimal technology mapping is NP-hard for K-unbounded
networks when the LUT input size K - 3 and remains NP-hard for K-bounded networks when K - 5. We
propose two new algorithms, named DOGMA and DOGMA-m, which combine the level-driven node
packing technique in Chortle-d and the network flow based labeling technique in FlowMap, for
structural gate decomposition. DOGMA-m decomposes multiple gates simultaneously to exploit
common fanins. The following experimental results have been observed. First, the optimal mapping
depth and the optimal duplication-free mapping area can be reduced by 16% if 5-bounded networks are
decomposed structurally into 2-bounded networks. Second, applying the rugged script for technology-independent logic optimization before technology mapping can result in 40% to 50% area reduction with only a marginal increase in depth, while significantly reducing the runtime of structural decomposition algorithms. Third, DOGMA-m results in the smallest mapping depth and mapping area among the five
structural gate decomposition algorithms under comparison. Finally, comparing the three approaches DOGMA-m, speed_up, and TOS, which take structural, algebraic, and Boolean (functional) gate decomposition approaches respectively, DOGMA-m can decompose all tested benchmarks in a short time, while speed_up and TOS fail to obtain results on some benchmarks. However, speed_up results in 13% smaller depth and 6% smaller area in the final mapping solutions compared to DOGMA-m.
Acknowledgement
The authors are very grateful to Mr. Legl in Professor Antreich's group in the Institute of Electronic
Design Automation, Technical University of Munich, Germany, for providing us with the TOS logic
synthesis package. The authors would like to acknowledge the support from the NSF Young Investigator
(NYI) Award MIP-9357582, grants from Xilinx, Quickturn, and Lucent Technologies under the
California MICRO programs, and software donation from Synopsys.
--R
"The Decomposition of Switching Functions,"
"DAG-Map: Graph-based FPGA Technology Mapping for Delay Optimization,"
"Beyond the Combinatorial Limit in Depth Minimization for LUT-Based FPGA Designs,"
"FlowMap: An Optimal Technology Mapping Algorithm for Delay Optimization in Lookup-Table Based FPGA Designs,"
"On Area/Depth Trade-off in LUT-Based FPGA Technology Mapping,"
"On Nominal Delay Minimization in LUT-Based FPGA Technology Mapping,"
"Combinational Logic Synthesis for LUT Based Field Programmable Gate Arrays,"
"Simultaneous Depth and Area Minimization in LUT-Based FPGA Mapping,"
"Structural Gate Decomposition for Depth-Optimal Technology Mapping in LUT-based FPGA Designs,"
"RASP: A General Logic Synthesis System for SRAM-based FPGAs,"
"A Generalized Tree Circuit,"
"Synthesis and Optimization of Digital Circuits,"
"TOS-2.2 Technology Oriented Synthesis User Manual,"
"Chortle: A Technology Mapping Program for Lookup Table -Based Field Programmable Gate Arrays,"
"Chortle-crf: Fast Technology Mapping for Lookup Table -Based FPGAs,"
"Technology Mapping of Lookup Table-Based FPGAs for Performance,"
Computer and Intractability: A Guide to the Theory of NP- Completeness
Fundamentals of Computer Algorithms
"An Iterative Area/Performance Trade-Off Algorithm for LUT-based FPGA Technology Mapping,"
"FPGA Synthesis using Function Decomposition,"
"Performance-Directed Technology Mapping for LUT-Based FPGAs - What Role Do Decomposition and Covering Play?,"
"A Boolean Approach to Performance-Directed Technology Mapping for LUT-Based FPGA Designs,"
"Performance Directed Synthesis for Table Look Up Programmable Gate Arrays,"
"Minimization Over Boolean Graphs,"
"Logic Synthesis for VLSI Design,"
"Performance Directed Technology Mapping for Look-Up Table Based FPGAs,"
"SIS: A System for Sequential Circuit Synthesis,"
"Algorithms for Multi-level Logic Optimization,"
"Functional Multiple-Output Decomposition: Theory and an Implicit Algorithm,"
"Edge-Map: Optimal Performance Driven Technology Mapping for Iterative LUT Based FPGA Designs,"
--TR
Chortle-crf: Fast technology mapping for lookup table-based FPGAs
Algorithms for multilevel logic optimization
Performance directed technology mapping for look-up table based FPGAs
Edge-map
Simultaneous depth and area minimization in LUT-based FPGA mapping
On nominal delay minimization in LUT-based FPGA technology mapping
Functional multiple-output decomposition
Combinational logic synthesis for LUT based field programmable gate arrays
A Boolean approach to performance-directed technology mapping for LUT-based FPGA designs
An iterative area/performance trade-off algorithm for LUT-based FPGA technology mapping
Beyond the combinatorial limit in depth minimization for LUT-based FPGA designs
A Generalized Tree Circuit
Synthesis and Optimization of Digital Circuits
Computers and Intractability
DAG-Map
FPGA Synthesis Using Function Decomposition
Performance-Directed Technology-Mapping for LUT-Based FPGAs - What Role Do Decomposition and Covering Play? | simplification;technology mapping;delay minimization;system design;decomposition;logic optimization;programmable logic;computer-aided design of VSLI;synthesis;FPGA |
335179 | Affine Structure and Motion from Points, Lines and Conics. | In this paper several new methods for estimating scene structure and camera motion from an image sequence taken by affine cameras are presented. All methods can incorporate both point, line and conic features in a unified manner. The correspondence between features in different images is assumed to be known.Three new tensor representations are introduced describing the viewing geometry for two and three cameras. The centred affine epipoles can be used to constrain the location of corresponding points and conics in two images. The third order, or alternatively, the reduced third order centred affine tensors can be used to constrain the locations of corresponding points, lines and conics in three images. The reduced third order tensors contain only 12 components compared to the components obtained when reducing the trifocal tensor to affine cameras.A new factorization method is presented. The novelty lies in the ability to handle not only point features, but also line and conic features concurrently. Another complementary method based on the so-called closure constraints is also presented. The advantage of this method is the ability to handle missing data in a simple and uniform manner. Finally, experiments performed on both simulated and real data are given, including a comparison with other methods. | Introduction
Reconstruction of a three-dimensional object from
a number of its two-dimensional images is one of
the core problems in computer vision. Both the
structure of the object and the motion of the camera
are assumed to be unknown. Many approaches
have been proposed to this problem and apart
from the reconstructed object also the camera motion
is obtained, cf. (Tomasi and Kanade 1992,
Koenderink and van Doorn 1991, McLauchlan
and Murray 1995, Sturm and Triggs 1996, Sparr
1996, Shashua and Navab 1996, Weng, Huang and
Ahuja 1992, Ma 1993).
(Supported by the ESPRIT Reactive LTR project 21914, CUMULI. Supported by the Swedish Research Council for Engineering Sciences (TFR), project 95-64-222.)
There are two major difficulties that have to
be dealt with. The first one is to obtain corresponding
points (or lines, conics, etc.) throughout
the sequence. The second one is to choose an
appropriate camera model, e.g., perspective (calibrated
or uncalibrated), weak perspective, affine,
etc. Moreover, these two problems are not completely
separated, but in some sense coupled to
each other, which will be explained in more detail
later.
The first problem of obtaining feature correspondences
between different images is simplified
if the viewing positions are close together. However,
most reconstruction algorithms break down
when the viewpoints are close together, especially
in the perspective case. The correspondence problem
is not addressed here. Instead we assume that
the correspondences are known.
The problem of choosing an appropriate camera
model is somewhat complex. If the intrinsic
parameters of the camera are known, it seems
reasonable to choose the calibrated perspective
(pinhole) camera, see (Maybank 1993). If the intrinsic
parameters are unknown, many researchers
have proposed the uncalibrated perspective (pro-
jective) camera, see (Faugeras 1992). This is the
most appealing choice from a theoretical point of
view, but in practice it has a lot of drawbacks.
Firstly, only the projective structure of the scene is
recovered, which is often not suOEcient. Secondly,
the images have to be captured from widespread
locations, with large perspective eoeects, which is
rarely the case if the imaging situation cannot
be completely controlled. If this condition is not
ful-lled, the reconstruction algorithm may give a
very inaccurate result and might even break down
completely. Thirdly, the projective group is in
some sense too large for practical applications.
Theoretically, the projective group is the correct
choice, but only a small part of the group is actually
relevant for most practical situations, leading
to too many degrees of freedom in the model.
Another proposed camera model is the affine one, see (Mundy and Zisserman 1992), which is an approximation of the perspective camera model. This is the model that will be used in this paper. The advantages of using the affine camera model, compared to the perspective one, are many-fold. Firstly, the affine structure of the scene is obtained instead of the projective in the uncalibrated perspective case. Secondly, the images may be captured from nearby locations without the algorithms breaking down. Again, this facilitates the correspondence problem. Thirdly, the geometry and algebra are simpler, leading to more efficient and robust reconstruction algorithms. Also, there is a lack of satisfactory algorithms for non-point features in the perspective case, especially for conics and curves.
This paper presents an integrated approach to
the structure and motion problem for aOEne cam-
eras. We extend current approaches to aOEne
structure and motion in several directions, cf.
(Tomasi and Kanade 1992, Shapiro, Zisserman
and Brady 1995, Quan and Kanade 1997, Koenderink
and van Doorn 1991). One popular reconstruction
method for aOEne cameras is the Tomasi-
Kanade factorization method for point correspon-
dences, see (Tomasi and Kanade 1992). We will
generalize the factorization idea to be able to incorporate
also corresponding lines and conics. In
(Quan and Kanade 1997) a line-based factorization
method is presented and in (Triggs 1996) a
factorization algorithm for both points and lines
in the projective case is given.
Another approach to reconstruction from images
is to use the so-called matching constraints.
These constraints are polynomial expressions in
the image coordinates and they constrain the locations
of corresponding features in two, three or
four images, see (Triggs 1997, Heyden 1995) for a
thorough treatment in the projective case. The
drawback of using matching constraints is that
only two, three or four images can be used at
the same time. The advantage is that missing
data, e.g. a point that is not visible in all images,
can be handled automatically. In this paper the
corresponding matching constraints for the aOEne
camera in two and three images are derived. Specializing
the projective matching constraints di-
rectly, like in (Torr 1995), will lead to a large over-
parameterization. We will not follow this path,
instead the properties of the aOEne camera will be
taken into account and a more eoeective parameterization
is obtained. It is also shown how to
concatenate these constraints in a uni-ed manner
to be able to cope with sequences of images.
This will be done using the so-called closure con-
straints, constraining the coeOEcients of the matching
constraints and the camera matrices. Similar
constraints have been developed in the projective
case, see (Triggs 1997). Some attempts to deal
with the missing data problem have been made in
(Tomasi and Kanade 1992, Jacobs 1997). We describe
these methods and the relationship to our
approach based on closure constraints, and we also
provide an experimental comparison with Jacobs'
method.
Preliminary results of this work, primarily
based on the matching constraints for image
triplets and the factorization method can be found
in (Kahl and Heyden 1998). Recently, the matching
constraints for two and three aOEne views have
also been derived in a similar manner, but in-
dependently, in two other papers. In (Bretzner
and Lindeberg 1998), the projective trifocal tensor
is -rst specialized to the aOEne case, like in
(Torr 1995), resulting in 16 non-zero coeOEcients
in the trifocal tensor. Then, they introduce the
centred aOEne trifocal tensor by using relative co-
ordinates, reducing the number of coeOEcients to
12. From these representations, they calculate the
three orthographic camera matrices corresponding
to these views in a rather complicated way.
A factorization method for points and lines for
longer sequences is also developed. In (Quan
and Ohta 1998) the two-view and three-view constraints
are derived in a nice and compact way
for centred aOEne cameras. By examining the relationships
between the two- and three-view con-
straints, they are able to reduce the number of co-
eOEcients to only 10 for the three-view case. These
coeOEcients for three aOEne cameras are then
directly related to the parameters of three orthographic
cameras. Our presentation of the matching
constraints is similar to the one in (Quan and
Ohta 1998), but we prefere to use a tensorial no-
tation. While we pursue the path of coping with
longer image sequences, their work is more focused
on obtaining a Euclidean reconstruction limited to
three calibrated cameras.
The paper is organized as follows. In Section 2,
we give a brief review of the aOEne camera, describing
how points, lines and conics project onto
the image plane. In Section 3, the matching constraints
for two and three views are described. For
arbitrary many views, two alternative approaches
are presented. The -rst one, in Section 4, is based
on factorization and the second one, in Section 5,
is based on closure constraints that can handle
missing data. In Section 5, we also describe two
related methods to the missing data problem. A
number of experiments, performed on both simulated
and on real data, is presented in Section 6.
Finally, in Section 7, some conclusions are given.
2. The affine camera model
In this section we give a brief review of the affine camera model and describe how different points, lines and quadrics are projected onto the image plane. For a more thorough treatment, see (Shapiro 1995) for points and (Quan and Kanade 1997) for lines.
The projective/perspective camera is modeled by

  λ (x^T, 1)^T = P (X^T, 1)^T,   (1)

where P denotes the standard 3 × 4 camera matrix and λ a scale factor. Here X is a 3-vector and x is a 2-vector, denoting point coordinates in the 3D scene and in the image respectively.
The affine camera model, first introduced by Mundy and Zisserman in (Mundy and Zisserman 1992), has the same form as (1), but the camera matrix is restricted to

  P = [ p11 p12 p13 p14 ; p21 p22 p23 p24 ; 0 0 0 p34 ],   (2)

and the homogeneous scale factor λ is the same for all points. It is an approximation of the projective camera and it generalizes the orthographic, the weak perspective and the para-perspective camera models. These models provide a good approximation of the projective camera when the distances between different points of the object are small compared to the viewing distance. The affine camera has eight degrees of freedom, since (2) is only defined up to a scale factor, and it can be seen as a projective camera with its optical centre on the plane at infinity.
Rewriting the camera equation (1) with the affine restriction (2), the equation can be written

  x = A X + b,   (3)

where

  A = (1/p34) [ p11 p12 p13 ; p21 p22 p23 ]   and   b = (1/p34) [ p14 ; p24 ].
A simplification can be obtained by using relative coordinates with respect to some reference point X_0 in the object and the corresponding point x_0 in the image. Introducing the relative coordinates X − X_0 and x − x_0 (denoted again by X and x), (3) simplifies to

  x = A X.   (4)

In the following, the reference point will be chosen as the centroid of the point configuration, since the centroid of the three-dimensional point configuration projects onto the centroid of the two-dimensional point configuration. Notice that the visible point configuration may differ from view to view and thus the centroid changes from view to view. This must be considered and we will comment upon it later.
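As a concrete illustration of the centring step in (4), the following numpy sketch projects synthetic 3D points with an arbitrary affine camera and subtracts the image centroid; the camera, points and variable names are invented for the example and are not taken from the paper.

```python
import numpy as np

def project_affine(A, b, X):
    """Project 3D points X (3 x n) with an affine camera: x = A X + b."""
    return A @ X + b[:, None]

def centre_coordinates(x):
    """Subtract the centroid of the 2D points (2 x n)."""
    centroid = x.mean(axis=1, keepdims=True)
    return x - centroid, centroid

# Synthetic camera and points, used only to illustrate the relation.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
b = rng.standard_normal(2)
X = rng.standard_normal((3, 10))

x = project_affine(A, b, X)
x_rel, x0 = centre_coordinates(x)

# The 3D centroid projects onto the 2D centroid, so the centred image
# points depend only on A and the centred 3D points, as in (4).
X_rel = X - X.mean(axis=1, keepdims=True)
print(np.allclose(x_rel, A @ X_rel))   # True
```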
A line in the scene through a point X with direction D can be written

  Y = X + μ D,  μ ∈ R.   (5)

With the affine camera, this line is projected to the image line, l, through the point x = A X + b according to (3). Thus, it follows that the direction, d, of the image line is obtained as

  λ d = A D.   (6)

This observation was first made in (Quan and Kanade 1997). Notice that the only difference between the projection of points in (4) and the projection of directions of lines in (6) is the scale factor λ present in (6), but not in (4). Thus, with known scale factor λ, a direction can be treated as an ordinary point. This fact will be used later on in the factorization algorithm.
For conics, the situation is a little more complicated than for points and lines. A general conic curve in the plane can be represented by its dual form, the conic envelope,

  u^T l u = 0,   (7)

where l denotes a 3 × 3 symmetric matrix and u extended dual (line) coordinates in the image plane. In the same way, a general quadric surface in the scene can be represented by its dual form, the quadric envelope,

  U^T L U = 0,   (8)

where L denotes a 4 × 4 symmetric matrix and U extended dual (plane) coordinates in the 3D space. A conic or a quadric, (7) or (8), is said to be proper if its matrix is non-singular, otherwise it is said to be degenerate. For most practical situations, it is sufficient to know that a quadric envelope degenerates into a disc quadric, i.e., a conic lying in a plane in space. For more details, see (Semple and Kneebone 1952).
The image, under a perspective projection, of a quadric, L, is a conic, l. This relation is expressed by

  λ l = P L P^T,   (9)

where P is the camera matrix and λ a scale factor. Introducing

  l = [ l1 l2 l4 ; l2 l3 l5 ; l4 l5 l6 ]
and specializing (9) to the affine camera (3) gives two sets of equations. The first set, (10), relates the upper-left 2 × 2 block

  [ l1 l2 ; l2 l3 ]

of l to A, b and the entries of L, and contains three non-linear equations in A and b. Normalizing l such that l6 = 1 and L such that L_{44} = 1, the second set becomes

  [ l4 ; l5 ] = A [ L_{14} ; L_{24} ; L_{34} ] + b,   (11)

which is linear in A and b. Observe that this equation is of the same form as (3), which implies that conics can be treated in the same way as points, when the non-linear equations in (10) are omitted.
The geometrical interpretation of (11) is that the centre of the quadric projects onto the centre of the conic in the image, since indeed (l4, l5) corresponds to the centre of the conic. This can be seen by parameterizing the conic by its centre point and then expressing it in the form of (7).
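The relation between a conic, its envelope and its centre can be checked numerically. The sketch below assumes a proper conic given in point form (x~^T C x~ = 0), takes its inverse as the envelope (valid up to scale) and reads off the centre as the entries (l4, l5) after normalizing l6 = 1, as noted above; the ellipse used is an arbitrary test case.

```python
import numpy as np

def conic_envelope(C):
    """Dual form (envelope) of a proper point conic x~^T C x~ = 0."""
    return np.linalg.inv(C)            # defined up to scale

def conic_centre(l):
    """Centre of the conic from its envelope matrix l: (l4, l5)/l6."""
    return l[:2, 2] / l[2, 2]

# Test ellipse (x-3)^2/4 + (y+2)^2/9 = 1 written as x~^T C x~ = 0.
cx, cy, a2, b2 = 3.0, -2.0, 4.0, 9.0
C = np.array([[1/a2,     0.0, -cx/a2],
              [0.0,     1/b2, -cy/b2],
              [-cx/a2, -cy/b2, cx*cx/a2 + cy*cy/b2 - 1]])

l = conic_envelope(C)
print(conic_centre(l))                 # -> [ 3. -2.]
```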
3. Affine matching constraints
The matching constraints in the projective case are well-known and they can directly be specialized to the affine case, cf. (Torr 1995). However, we will not follow this path. Instead, we start from the affine camera equation in (4), leading to fewer parameters and thereby a more effective way of parameterizing the matching constraints.
We will from now on assume that relative coordinates have been chosen and use the notation x_I = (x_I^1, x_I^2)^T for relative coordinates. The subindex indicates that the image point belongs to image I.
3.1. Two-view constraints
Denote the two camera matrices corresponding to views number I and J by A_I and A_J and an arbitrary 3D-point by X (in relative coordinates). Then (4) gives for these two images x_I = A_I X and x_J = A_J X, or equivalently,

  M [ X ; -1 ] = [ A_I  x_I ; A_J  x_J ] [ X ; -1 ] = 0.

Thus, it follows that det M = 0 since M has a non-trivial nullspace. Expanding the determinant by the last column gives one linear equation in the image coordinates x_I^1, x_I^2, x_J^1, x_J^2. The coefficients of this linear equation depend only on the camera matrices A_I and A_J. Therefore, let

  E_IJ = [ A_I ; A_J ].   (12)
Definition 1. The minors built up by three different rows from E_IJ in (12) will be called the centred affine epipoles and its 4 components will be denoted by E_IJ = (IJ_e^i, JI_e^j), i, j = 1, 2, where

  IJ_e^i = det [ A_I^i ; A_J^1 ; A_J^2 ]   and   JI_e^j = det [ A_I^1 ; A_I^2 ; A_J^j ],

where A_I^i denotes the ith row of A_I and similarly for A_J.
Remark. The vector IJ_e = (IJ_e^1, IJ_e^2) is the well-known epipole or epipolar direction, i.e., the projection in camera I of the focal point corresponding to camera J. Here the focal point is a point on the plane at infinity, corresponding to the direction of projection.
Observe that E IJ is built up by two dioeerent
tensors, IJ e i and JI e j , which are contravariant
tensors. This terminology alludes to the transformation
properties of the tensor components. In
fact, consider a change of image coordinates from
x to -
x according to
equivalently x
where S denotes a non-singular 2 \Theta 2 matrix and
denotes i.e., the element with row-index
i and column-index i 0 of S. Then the tensor components
change according to
Observe that Einstein's summation convention
has been used, i.e., when an index appears twice
in a formula it is assumed that a summation is
made over that index.
Using this notation the two-view constraint can be written in tensor form as

  ε_ij IJ_e^i x_I^j + ε_kl JI_e^k x_J^l = 0,   (14)

where ε_ij denotes the permutation symbol, i.e., ε_12 = −ε_21 = 1, ε_11 = ε_22 = 0. Using instead vector notation the constraint can be written as

  IJ_e × x_I + JI_e × x_J = 0,

where × denotes the 2-component cross product, i.e., a × b = a^1 b^2 − a^2 b^1.
Remark. The tensors could equivalently have
been de-ned as
I
A J
giving a covariant tensor instead. The relations
between these tensors are IJ e
and IJ e i IJ e The two-view constraints
can now simply be written, using the co-variant
epipolar tensors, as
I
The choice of covariant or contravariant indices for
these 2D tensors is merely a matter of taste. The
choice made here to use the contravariant tensors
is done because they have physical interpretations
as epipoles.
The four components of the centred affine epipoles can be estimated linearly from at least four point or conic correspondences in the two images. In fact, each corresponding feature gives one linear constraint on the components and the use of relative coordinates makes one constraint linearly dependent on the other ones. Corresponding lines in only two views do not constrain the camera motion. From (14) it follows that the components can only be determined up to scale. This means that if E_IJ are centred affine epipoles, then λE_IJ, λ ≠ 0, are also centred affine epipoles corresponding to the same viewing geometry. This undetermined scale factor corresponds to the possibility to rescale both the reconstruction and the camera matrices, keeping (4) valid.
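A minimal sketch of this linear estimation step is given below: each centred correspondence contributes one row, and the four constraint coefficients are recovered (up to scale) as the nullspace of the stacked system. How these coefficients map onto (IJ_e, JI_e), including signs, follows Definition 1 and is not spelled out here; the synthetic cameras and points serve only as a check.

```python
import numpy as np

def estimate_two_view_coefficients(xI, xJ):
    """Estimate, up to scale, the 4 coefficients of the linear two-view
    constraint c . (xI_1, xI_2, xJ_1, xJ_2) = 0 from centred point (or
    conic-centre) correspondences xI, xJ of shape (2 x n)."""
    W = np.hstack([xI.T, xJ.T])        # one constraint row per match
    _, _, Vt = np.linalg.svd(W)
    return Vt[-1]                      # smallest right singular vector

# Synthetic check with two affine cameras and centred coordinates.
rng = np.random.default_rng(1)
AI, AJ = rng.standard_normal((2, 3)), rng.standard_normal((2, 3))
X = rng.standard_normal((3, 8))
X -= X.mean(axis=1, keepdims=True)
xI, xJ = AI @ X, AJ @ X

c = estimate_two_view_coefficients(xI, xJ)
print(np.abs(np.hstack([xI.T, xJ.T]) @ c).max())   # ~ 0
```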
The tensor components parameterize the epipolar
geometry in two views. However, the camera
matrices are only determined up to an unknown
aOEne transformation. One possible choice of camera
matrices is given by the following proposition.
Proposition 1. Given centred aOEne epipoles,
normalized such that JI e
a set of corresponding camera matrices is given by
Proof: The result follows from straightforward
calculations of the minors in (12).
3.2. Three-view constraints
Denote the three camera matrices corresponding to views number I, J and K by A_I, A_J and A_K and an arbitrary 3D-point by X. Then, the projections of X (in relative coordinates) in these images are given by x_I = A_I X, x_J = A_J X and x_K = A_K X according to (4), or equivalently

  M [ X ; -1 ] = [ A_I  x_I ; A_J  x_J ; A_K  x_K ] [ X ; -1 ] = 0.   (15)

Thus, it follows that rank M < 4 since M has a non-trivial nullspace. This means that all 4 × 4 minors of M vanish. There are in total (6 choose 4) = 15 such minors and expanding these minors by the last column gives linear equations in the image coordinates x_I, x_J and x_K. The coefficients of these linear equations are minors formed by three rows from the camera matrices A_I, A_J and A_K. Let

  T_IJK = [ A_I ; A_J ; A_K ].   (16)

The minors from (16) are the Grassmann coordinates of the linear subspace of R^6 spanned by the columns of T_IJK. We will use a slightly different terminology and notation, according to the following definition.
Definition 2. The (6 choose 3) = 20 determinants of the matrices built up by three rows from T_IJK in (16) will be denoted by T_IJK = { IJ_e^i, JI_e^i, IK_e^i, KI_e^i, JK_e^i, KJ_e^i, t^ijk }, where the e's denote the previously defined centred affine epipoles and t^ijk will be called the centred affine tensor defined by

  t^ijk = det [ A_I^i ; A_J^j ; A_K^k ],   (17)

where A_I^i again denotes the ith row of A_I and all indices i, j and k range from 1 to 2.
Observe that T IJK is built up by 7 dioeerent ten-
sors, the 6 centred aOEne epipoles, IJ e i , etc., and
a third order tensor t ijk , which is contravariant
in all indices 1 . This third order tensor transforms
according to
when coordinates in the images are changed according
to (13) in image I and similarly for image
J and K using matrices U and V instead of S.
Given point coordinates in all three images, the
minors obtained from M in (15) yield linear constraints
on the 20 numbers in the centred aOEne
tensors. One example of such a linear equation,
obtained by picking the -rst, second, third and
-fth row of M is
The general form of such a constraint is
or
where the last equation is the previously de-ned
two-view constraint. In (18), j and k can be chosen
in 4 dioeerent ways and the dioeerent images
can be permuted in 3 ways, so there are 12 linear
constraints from this equation. Adding the 3
additional two-view constraints from (19) gives in
total 15 linear constraints on the 20 tensor com-
ponents. All constraints can be written

  R t = 0,   (20)

where R is a 15 × 20 matrix containing relative image coordinates of the image point and t is a vector containing the 20 components of the centred affine tensor. From (20), it follows that the overall scale of the tensor components cannot be determined. Observe that since relative coordinates are used, one point alone gives no constraints on the tensor components, since its relative coordinates are all zero. The number of linearly independent constraints for different numbers of point correspondences is given by the following proposition.
Proposition 2. Two corresponding points in 3 images give in general 10 linearly independent constraints on the components of T_IJK. Three points give in general 16 constraints and four or more points give in general 19 constraints. Thus the centred affine tensor and the centred affine epipoles can in general be linearly recovered from at least four point correspondences in 3 images.
Proof: See Appendix A.
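The rank condition behind these counts can be illustrated numerically. The snippet below builds M from (15) for synthetic centred cameras and a synthetic point (an assumption for illustration, not the paper's data) and confirms that its rank drops below 4.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three synthetic centred affine cameras and one centred 3D point.
cams = [rng.standard_normal((2, 3)) for _ in range(3)]
X = rng.standard_normal(3)

# Stack M as in (15): each view contributes [A_i | x_i], and the
# vector (X, -1) lies in the nullspace, so rank M < 4.
M = np.vstack([np.hstack([A, (A @ X)[:, None]]) for A in cams])  # 6 x 4

print(np.linalg.matrix_rank(M))                  # -> 3
print(np.abs(M @ np.append(X, -1.0)).max())      # ~ 0
```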
The next question is how to calculate the camera
matrices A I , A J and AK from the 20 tensor
components of T IJK . Observe -rst that the camera
matrices can never be recovered uniquely, since
a multiplication by an arbitrary non-singular 3 \Theta 3
matrix to the right of T ijk in (16) only changes the
common scale of the tensor components. The following
proposition maps T IJK to one set of compatible
camera matrices.
Proposition 3. Given T IJK normalized such
that t the camera matrices can be calculated
as
and
Proof: Since the camera matrices are only determined
up to an aOEne transformation, the -rst
rows of A I , A J and AK can be set to the 3\Theta3 iden-
tity. The remaining components are determined
by straightforward calculations of the minors in
(16).
We now turn to the use of line correspondences to constrain the components of the affine tensors. According to (6) the direction of a line projects similarly to the projection of a point except for the extra scale factor. Consider (6) for three different images of a line with direction D in 3D space,

  λ_I d_I = A_I D,   λ_J d_J = A_J D,   λ_K d_K = A_K D.   (22)

Since these equations are linear in the scale factors and in D, they can be written

  N [ D ; -λ_I ; -λ_J ; -λ_K ] = [ A_I  d_I  0  0 ; A_J  0  d_J  0 ; A_K  0  0  d_K ] [ D ; -λ_I ; -λ_J ; -λ_K ] = 0.

Thus the nullspace of N is non-empty, hence det N = 0. Expanding this determinant gives a trilinear expression in d_I, d_J and d_K with coefficients that are the components of the centred affine tensor included in T_IJK. Finally, we conclude that the direction of each line gives one constraint on the viewing geometry and that both points and lines can be used to constrain the tensor components^2.
3.3. Reduced three-view constraints
It may seem superAEuous to use 20 numbers to describe
the viewing geometry of three aOEne cam-
eras, since specializing the trifocal tensor (which
has 27 components) for the projective camera, to
the aOEne case, the number of components reduces
to only 16 without using relative coordinates, cf.
(Torr 1995). Since our 20 numbers describe all
trilinear functions between three aOEne views, the
comparison is not fair, even if the specialization
of the trifocal tensor also encodes the information
about the base points. It should be compared with
the 3 \Theta components of
all trifocal tensors between three aOEne views and
three projective views, respectively. Although, it
is possible to use a tensorial representation with
only 12 components to describe the viewing geometry
In order to obtain a smaller number of parame-
ters, start again from (15) and rank M - 3. This
time we will only consider the 4 \Theta 4 minors of M
that contain both of the rows one and two, one
of the rows three and four, and one of the rows
-ve and six. There are in total 4 such minors and
they are linear in the coordinates of x I , x J and
xK . Again, these trilinear expressions have coef-
-cients that are minors of T IJK in (16), but this
time the only minors occurring are the ones containing
either both rows from A I and one from A J
or AK , or one row from each one of A I , A J and
AK .
De-nition 3. The minors built up by rows
and k from T IJK in (16), where either i 2
will be called the reduced centred
aOEne tensors and the 12 components will be denoted
by T r
the previously de-ned centred aOEne epipoles
and t denotes the previously de-ned centred aOEne
tensor in (17).
Observe that T^r_IJK is built up by three different tensors, the two centred affine epipoles, JI_e^j and KI_e^k, which are contravariant tensors, and the third order tensor t^ijk, which is contravariant in all indices.
Given the image coordinates in all three images,
the chosen minors obtained from M give linear
constraints on the 12 components of T r
IJK . There
are in total 4 such linear constraints and they can
be written
which can be written as
R r is a 4 \Theta 12 matrix containing relative
image coordinates of the image point and t r is
a vector containing the 12 components of the reduced
centred aOEne tensors. Observe again that
the overall scale of the tensor components can not
be determined. The number of linearly independent
constraints for dioeerent number of point correspondences
are given in the following proposition
Proposition 4. Two corresponding points in 3
images give 4 linearly independent constraints on
the reduced centred aOEne tensors. Three points
give 8 linearly independent constraints and four
or more points give 11 linearly independent con-
straints. Thus the tensor components can be linearly
recovered from at least four point correspondences
in 3 images.
Proof: See Appendix A.
Again the camera matrices can be calculated
from the 12 tensor components.
Proposition 5. Given T r
IJK normalized such
that t the camera matrices can be calculated
as
a 21
a 22
a 23
and
A 3
a 31 a 32 a 33
where
a
a
a 23
a
a
a
Proof: The form of the elements a 22
and a 33
follows by direct calculations of the determinants
corresponding to t 212 and t 221 , respectively. The
others follow from taking suitable minors and solving
the linear equations.
Using these combinations of tensors, a number
of minimal cases appear for recovering the viewing
geometry. In order to solve these minimal cases
one has to take also the non-linear constraints on
the tensor components into account. However, in
the present work, we concentrate on developing a
method to use points, lines and conics in a uni-ed
manner, when there is a suOEcient number of corresponding
features available to avoid the minimal
cases.
4. Factorization
Reconstruction using matching constraints is limited
to a few views only. In this section, a factorization
based technique is given that handle
arbitrarily many views for corresponding points,
lines and conics. The idea of factorization is sim-
ple, but still a robust and eoeective way of recovering
structure and motion. Previously with the
matching constraints only the centre of the conic
was used, but there are obviously more constraints
that could be used. After having described the
general factorization method, we show one possible
way of incorporating this extra information.
Now consider m points or conics, and n lines in p images. (4) and (6) can be written as one single matrix equation (with relative coordinates),

  S = [ x_11 ... x_1m  λ_11 d_11 ... λ_1n d_1n ; ... ; x_p1 ... x_pm  λ_p1 d_p1 ... λ_pn d_pn ] = [ A_1 ; ... ; A_p ] [ X_1 ... X_m  D_1 ... D_n ].   (26)

The right-hand side of (26) is the product of a 2p × 3 matrix and a 3 × (m + n) matrix, which gives the following theorem.
Theorem 1. The matrix S in (26) obeys rank S ≤ 3.
Observe that the matrix S contains entries obtained from measurements in the images, as well as the unknown scale factors λ_ij, which have to be estimated. The matrix is known as the measurement matrix. Assuming that these are known, the camera matrices, the 3D points and the 3D directions can be obtained by factorizing S. This can be done from the singular value decomposition

  S = U Σ V^T,

where U and V are orthogonal matrices and Σ is a diagonal matrix containing the singular values, σ_i, of S. Let Σ~ = diag(σ_1, σ_2, σ_3) and let U~ and V~ denote the first three columns of U and V, respectively. Then

  [ A_1 ; ... ; A_p ] = U~ Σ~   and   [ X_1 ... X_m  D_1 ... D_n ] = V~^T   (27)

fulfil (26). Observe that the whole singular value decomposition of S is not needed. It is sufficient to calculate the three largest eigenvalues and the corresponding eigenvectors of S S^T. The only missing components are the scale factors λ_ij for the lines. These can be obtained in the following way.
Assume that T_IJK or T^r_IJK has been calculated. Then the camera matrices can be calculated from Proposition 3 or Proposition 5. It follows from (22) that once the camera matrices for three images are known, the scale factors for each direction can be calculated up to an unknown scale factor. It remains to estimate the scale factors for all images with a consistent scale. We have chosen the following method. Consider the first three views with camera matrices A_1, A_2 and A_3. Rewriting (22) as

  M [ D ; -1 ] = [ A_1  λ_1 d_1 ; A_2  λ_2 d_2 ; A_3  λ_3 d_3 ] [ D ; -1 ] = 0   (28)

shows that M in (28) has rank less than 4, which implies that all 4 × 4 minors are equal to zero. These minors give linear constraints on the scale factors. However, only 3 of them are independent. So a system with the following appearance is obtained,

  [ * * * ; * * * ; * * * ] [ λ_1 ; λ_2 ; λ_3 ] = 0,   (29)

where * indicates a matrix entry that can be calculated from A_i and d_i. It is evident from (29) that the scale factors λ_i can only be calculated up to an unknown common scale factor. By considering another triplet, with two images in common with the first triplet, say the last two, we can obtain consistent scale factors for both triplets by solving a system with the following appearance,

  [ * * * 0 ; * * * 0 ; * * * 0 ; 0 * * * ; 0 * * * ; 0 * * * ] [ λ_1 ; λ_2 ; λ_3 ; λ_4 ] = 0.

In practice, all minors of M in (28) should be used. This procedure is easy to systematize such that all scale factors from the direction of one line can be computed as the nullspace of a single matrix. The drawback is of course that we first need to compute all camera matrices of the sequence. An alternative would be to reconstruct the 3D direction D from one triplet of images according to (22) and then use this direction to solve for the scale factors in the other images.
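The alternative just mentioned is easy to sketch: for one triplet with known cameras, the 3D direction and the per-view scale factors span the nullspace of the stacked matrix built from (22). The implementation below is a sketch under that assumption, with synthetic cameras and a synthetic direction used only as a check.

```python
import numpy as np

def direction_and_scales(cams, dirs):
    """Given affine cameras A_i (2x3) and measured unit image directions
    d_i of one line, recover the 3D direction D and the per-view scale
    factors lambda_i (all up to a common scale) from the nullspace of
    the stacked matrix built from (22)."""
    k = len(cams)
    N = np.zeros((2 * k, 3 + k))
    for i, (A, d) in enumerate(zip(cams, dirs)):
        N[2*i:2*i+2, :3] = A
        N[2*i:2*i+2, 3 + i] = -d        # column multiplying lambda_i
    _, _, Vt = np.linalg.svd(N)
    v = Vt[-1]                          # nullspace vector (up to scale)
    return v[:3], v[3:]

# Synthetic check with three cameras and one 3D direction.
rng = np.random.default_rng(3)
cams = [rng.standard_normal((2, 3)) for _ in range(3)]
D = rng.standard_normal(3)
dirs, lams = [], []
for A in cams:
    p = A @ D
    lams.append(np.linalg.norm(p))
    dirs.append(p / lams[-1])

D_hat, lam_hat = direction_and_scales(cams, dirs)
print(np.allclose(np.cross(D_hat, D), 0, atol=1e-9))          # parallel
print(np.allclose(lam_hat / lam_hat[0], np.array(lams) / lams[0]))
```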
In summary, the following algorithm is proposed:
1. Calculate the scale factors λ_ij using T_IJK or T^r_IJK.
2. Calculate S in (26) from λ_ij and the image measurements.
3. Calculate the singular value decomposition of S.
4. Estimate the camera matrices and the reconstruction of points and line directions according to (27).
5. Reconstruct 3D lines and 3D quadrics.
The last step needs a further comment. From the factorization, the 3D directions of the lines and the centres of the quadrics are obtained. The remaining unknowns can be recovered linearly from (5) for lines and (10) for quadrics.
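Steps 2-4 amount to a rank-3 factorization by SVD, as in (27). The sketch below shows that step on a synthetic, noise-free measurement matrix of points only; the remaining 3 × 3 ambiguity between the recovered factors and the true cameras and structure is not resolved here.

```python
import numpy as np

def factorize_rank3(S):
    """Factor the measurement matrix S (2p x (m+n)) into motion and
    structure, S ~ M_hat @ P_hat, keeping the three largest singular
    values as in (27)."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    M_hat = U[:, :3] * s[:3]          # stacked 2x3 camera matrices
    P_hat = Vt[:3]                    # points / scaled directions
    return M_hat, P_hat

# Synthetic check: 4 views, 12 centred points (lines omitted for brevity).
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 3))       # four stacked 2x3 cameras
X = rng.standard_normal((3, 12))
X -= X.mean(axis=1, keepdims=True)
S = A @ X

M_hat, P_hat = factorize_rank3(S)
print(np.allclose(M_hat @ P_hat, S))  # True: S is exactly reproduced
```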
Now to the question of how to incorporate all available constraints for the conics. Given that the quadrics in space are disc quadrics, the following modification of the above algorithm can be done. Consider a triplet of images, with known matching constraints. Choose a point on a conic curve in the first image, and then use the epipolar lines in the other two images to get the point-point correspondences on the other curves. In general, there is a two-fold ambiguity since an epipolar line intersects a conic at two points. The ambiguity is resolved by examining the epipolar lines between the second and third image in the triplet. Repeating this procedure, point correspondences on the conic curves can be obtained throughout the sequence, and used in the factorization method as ordinary points.
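The transfer step relies on intersecting an epipolar line with a conic, which reduces to a quadratic along the line. The helper below is a generic sketch of that intersection for a point conic; resolving the remaining two-fold ambiguity with the epipolar lines of the third view, as described above, is not shown.

```python
import numpy as np

def line_conic_intersections(C, l):
    """Intersect the line l (3-vector, l . x~ = 0) with the point conic
    x~^T C x~ = 0.  Returns 0, 1 or 2 points in inhomogeneous coords."""
    a, b, c = l
    # One finite point on the line and its direction (point at infinity).
    p = np.array([-c / a, 0.0, 1.0]) if abs(a) > abs(b) \
        else np.array([0.0, -c / b, 1.0])
    q = np.array([b, -a, 0.0])
    # Quadratic in t from (p + t q)^T C (p + t q) = 0.
    A2, A1, A0 = q @ C @ q, 2.0 * (p @ C @ q), p @ C @ p
    pts = []
    for t in np.roots([A2, A1, A0]):
        if abs(t.imag) < 1e-9:                    # keep real roots only
            x = p + t.real * q
            if abs(x[2]) > 1e-12:
                pts.append(x[:2] / x[2])
    return pts

# Example: unit circle x^2 + y^2 = 1 cut by the line y = 0.
C = np.diag([1.0, 1.0, -1.0])
print(line_conic_intersections(C, np.array([0.0, 1.0, 0.0])))  # (+-1, 0)
```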
5. Closure constraints
The drawback of all factorization methods is the difficulty in handling missing data, i.e., when all features are not visible in all images. In this section, an alternative method based on closure constraints is presented that can handle missing data in a unified manner. Two related methods are also discussed.
Given the centred affine tensor and the centred affine epipoles, it is possible to calculate a representative for the three camera matrices. Since the reconstruction and the camera matrices are determined up to an unknown affine transformation, only a representative can be calculated that differs from the true camera matrices by an affine transformation. When an image sequence with more than three images is treated, it is possible to first calculate a representative for the camera matrices A_1, A_2 and A_3, and a representative for A_2, A_3 and A_4, and then merge these together. This is not a good solution since errors may propagate uncontrollably from one triplet to another. It would be better to use all available combinations of affine tensors and calculate all camera matrices at the same time. The solution to this problem is to use the closure constraints.
There are two different types of closure constraints in the affine case, springing from the two-view and three-view constraints. To obtain the second order constraint, start by stacking the camera matrices A_I and A_J like in (12), which results in a 4 × 3 matrix. Duplicate one of the columns to obtain a 4 × 4 matrix

  B_IJ = [ A_I  A_I^n ; A_J  A_J^n ],

where A_I^n denotes the n:th column of A_I. Since B_IJ is a singular matrix (a repeated column), we have det B_IJ = 0. Expanding by the last column, for n = 1, 2, 3, gives

  ε_ij IJ_e^i a_I^{jn} + ε_ij JI_e^i a_J^{jn} = 0,   (30)

where a_I^{jn} denotes entry (j, n) of A_I and IJ_e^1 etc. denote the centred affine epipoles. Thus (30) gives one linear constraint on the camera matrices A_I and A_J.
To obtain the third order type of closure constraints, consider the matrix T_IJK defined in (16) for the camera matrices A_I, A_J and A_K and duplicate one of the columns to obtain a 6 × 4 matrix

  C_IJK = [ A_I  A_I^n ; A_J  A_J^n ; A_K  A_K^n ],

where again A_I^n denotes the n:th column of A_I. Since C_IJK has a repeated column it is rank deficient, i.e., rank C_IJK < 4. Expanding the 4 × 4 minors of C_IJK gives three expressions involving only two cameras, of the same type as (30), and 12 expressions, (31), involving all three cameras, each linear in the entries of the n:th columns of A_I, A_J and A_K with coefficients taken from T_IJK. Thus we get in total 15 linear constraints on the camera matrices A_I, A_J and A_K. However, there are only 3 linearly independent constraints among these 15, which can easily be checked by using a computer algebra package, such as MAPLE. Some of these constraints involve only components of the reduced affine tensors, e.g., the one in (31), making it possible to use the closure constraints in the reduced case also.
To sum up, every second order combination of centred affine epipoles gives one linear constraint on the camera matrices and every third order combination of affine tensors gives 12 additional linear constraints on the camera matrices. Using all available combinations, all the linear constraints on the camera matrices can be stacked together in a matrix M, yielding

  M A = 0,   (32)

where A collects the unknown camera matrix entries. Given a sufficient number of constraints on the camera matrices, they can be calculated linearly from (32). Observe that the nullspace of M has dimension 2, which implies that only the linear space spanned by the columns of A can be determined. This means that the camera matrices can only be determined up to an unknown affine transformation.
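Numerically, recovering the camera entries from the stacked constraints amounts to extracting the nullspace of M. The sketch below shows only that generic step, on synthetic constraints built to annihilate a known subspace; forming M from the epipoles and tensors, and mapping the nullspace back to individual camera matrices, follows the construction above.

```python
import numpy as np

def nullspace_basis(M, tol=1e-9):
    """Orthonormal basis of the numerical nullspace of the stacked
    constraint matrix M (columns span the recoverable camera space)."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol * s[0]))
    return Vt[rank:].T                 # columns span the nullspace

# Tiny synthetic example: rows made orthogonal to a known 2D subspace.
rng = np.random.default_rng(5)
B = np.linalg.qr(rng.standard_normal((8, 2)))[0]   # target subspace
M = rng.standard_normal((20, 8))
M -= (M @ B) @ B.T                     # now M annihilates span(B)

N = nullspace_basis(M)
print(N.shape, np.linalg.norm(M @ N))  # (8, 2)  ~0
```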
When only the second order combinations are used, it is not sufficient to use only the combinations between every successive pair of images. However, it is sufficient to use the combinations between views i, i+1 and views i, i+2 for every i. This can easily be seen from the fact that one new image gives two new independent variables in the linear system of equations in (32) and the two new linear constraints balance this. When the third order combinations are used, it is sufficient to use the tensor combinations between views i, i+1, i+2 for every i, which again can be seen by counting the number of unknowns and the number of linearly independent constraints. This is also the case for the reduced third order combinations.
The closure constraints bring the camera ma-
trices, A i , into the same aOEne coordinate
system. However, the last column in the
camera matrices, denoted by b i , cf. (3), needs also
to be calculated. These columns depend on the
chosen centroid for the relative coordinates. But if
the visible feature con-guration changes, as there
may be missing data, the centroid changes as well.
This has to be considered. For example, let x 01 ,
and X 0 denote the centroid of the visible
points in the images and in space for the -rst
three views, respectively, and let x 02
0 denote the centroid in the images and in
space for views two, three and four, respectively.
The centroids are projected as
This is a linear system in the unknowns b 1
0 . It is straightforward to generalize
the above equations for m consecutive images and
the system can be solved by a single SVD.
5.1. Related work
We examine two closely related algorithms for
dealing with missing data.
Tomasi and Kanade propose one method in
(Tomasi and Kanade 1992) to deal with the missing
data problem for point features. In their
method, one -rst locates a rectangular subset of
the measurement matrix S (26) with no missing
elements. Factorization is applied to this matrix.
Then, the initial sub-block is extended row-wise
(or column-wise) by propagating the partial structure
and motion solution. In this way, the missing
elements are -lled in iteratively. The result is -
nally re-ned using steepest descent minimization.
As pointed out by Jacobs (Jacobs 1997), their solution seems like a reasonable heuristic, but the method has several potential disadvantages. First, the problem of finding the largest full submatrix of a matrix is NP-hard, so heuristics must be used. Second, the data is not used in a unified manner. As only a small subset is used in the first factorization, the initial structure and motion may contain significant inaccuracies. In turn, these errors may propagate uncontrollably as additional rows (or columns) are computed. Finally, the refinement with steepest descent is not guaranteed to converge to the globally optimal solution.
The method proposed in (Jacobs 1997) also
starts with the measurement matrix S using only
points. Since S should be of rank three, the m
columns of S span a 3-dimensional linear sub-
space, denoted L. Consequently, the span of any
three columns of S should intersect the subspace
L. If there are missing elements in any of the three
columns, the span of the triplet will be of higher
dimension. In that case, the constraint that the
subspace L should lie in the span of the triplet will
be a weaker one. In practise, Jacobs calculates the
nullspace of randomly chosen triplets, and -nally,
the solution is found by computing the nullspace
of the span of the previously calculated nullspaces,
using SVD.
Jacobs' method is closely related to the closure
constraints. It can be seen as the 'dual' of the closure
constraints, since it generates constraints by
picking columns in the measurement matrix, while
we generate constraints by using rows. Therefore,
a comparison based on numerical experiments has
been performed, which is presented in the experimental
section.
There are also significant differences. First, by using matching tensors, lines can also contribute to constraining the viewing geometry. Second, for m points, there are (m choose 3) point triplets. In practice, this is hard to deal with, so Jacobs heuristically chooses a random subset of the triplets, without knowing if it is sufficient. With our method we know that, e.g., it is sufficient to use every consecutive third order closure constraint. Finally, Jacobs uses the visible point configuration in adjacent images to calculate the centroid. Since there is missing data, this approximation often leads to significant errors (see experimental comparison).
However, one may modify Jacobs' method so that it correctly compensates for the centroid. In order to make a fair experimental comparison, we have included a modified version which properly handles this problem. It works in the same manner as the original one, but it does not use relative coordinates. In turn, it has to compute a 4-dimensional linear subspace of the measurement matrix S. This modified version generates constraints by picking quadruples of columns in S. Since there are (m choose 4) quadruples, the complexity is much worse than the original one.
6. Experiments
The presented methods have been tested and evaluated
on both synthetic and real data.
6.1. Simulated data
All synthetic data was produced in the following way. First, points, line segments and conics were randomly distributed in space with coordinates between −500 and +500 units. The camera positions were chosen at a nominal distance around 1000 units from the origin, and then all 3D features were projected to these views; the obtained images were around 500 × 500 pixels. In order to test the stability of the proposed methods, different levels of noise were added to the data. Points were perturbed with uniform, independent Gaussian noise. In order to incorporate the higher accuracy of the line segments, a number of evenly sampled points on the line segments were perturbed with independent Gaussian noise in the normal direction of the line. Then, the line parameters were estimated with least-squares. The conics were handled similarly. The residual error for points was chosen as the distance between the true point position and the re-projected reconstructed 3D point. For lines, the residual errors were chosen as the smallest distances between the endpoints of the true line segment and the re-projected 3D line. For conics, the errors were measured with respect to the centroid. These settings are close to real life situations. All experiments were repeated 100 times and the results reflect the average values. Before the actual computations, all input data was rescaled to improve numerical conditioning.
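The paper only states that the input data was rescaled; one common choice, shown here as an assumption rather than the authors' exact procedure, is an isotropic normalization that moves the centroid to the origin and sets the average distance to sqrt(2).

```python
import numpy as np

def normalize_points(x):
    """Similarity-rescale 2D points (2 x n): zero centroid and average
    distance sqrt(2).  Returns the rescaled points and the applied 3x3
    transform in homogeneous coordinates."""
    c = x.mean(axis=1, keepdims=True)
    d = np.linalg.norm(x - c, axis=0).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0, 0]],
                  [0.0, s, -s * c[1, 0]],
                  [0.0, 0.0, 1.0]])
    return s * (x - c), T

rng = np.random.default_rng(6)
x = 250.0 + 100.0 * rng.standard_normal((2, 50))   # pixel-scale data
xn, T = normalize_points(x)
print(xn.mean(axis=1), np.linalg.norm(xn, axis=0).mean())  # ~0, ~1.414
```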
In Table 1, it can be seen that the 20-parameter formulation (the centred affine tensor and the centred affine epipoles) of three views is in general superior to the 12-parameter formulation (the reduced affine tensors). For three views, factorization gives slightly better results. All three methods handle moderate noise perturbations well. In Table 2 the number of points and lines is varied. In general, the more points and lines, the better the results, and the non-reduced representation is still superior to the reduced version.
Finally, in Table 3 the number of views is varied. In this experiment, two variants of the factorization method are tried and compared to the method of closure constraints. The first one (I) uses the centroid of the conic as a point feature, and the second one uses, in addition, one point on each conic curve, obtained by the epipolar transfer (see Section 4). The first method appears more robust than the second one, even though the second method incorporates all the constraints of the conic. Somewhat surprisingly, the method based on closure constraints has similar performance as the best factorization method^3. The closure constraints are of third order and only the tensors between views i, i+1, i+2 are used. However, the differences are minor between the two methods and they both manage to keep the residuals low.
Table 1. Result of simulations of 10 points and 10 lines in 3 images for different levels of noise using the third order combination of affine tensors, the reduced third order combination of affine tensors and the factorization approach. The root mean square (RMS) errors are shown for the reduced affine tensors T^r_IJK, the non-reduced T_IJK, and factorization.
STD of noise
Red. affine tensors
RMS of points 0.0 3.3 8.4 7.7
RMS of lines 0.0 3.5 7.1 8.6
Affine tensors
RMS of points 0.0 1.6 2.2 6.2
RMS of lines 0.0 1.7 2.3 8.3
Factorization
RMS of points 0.0 1.0 1.8 4.5
RMS of lines 0.0 1.1 2.1 6.8
Table 2. Results of simulation of 3 views with a different number of points and lines and with a standard deviation of noise equal to 1. The table shows the resulting error (RMS) after using the reduced affine tensors T^r_IJK, the non-reduced T_IJK, and factorization.
#points, #lines 3,3 5,5 10,10 20,20
Red. affine tensors
RMS of points 1.0 1.5 1.6 2.0
RMS of lines 3.9 1.5 1.2 1.7
Affine tensors
RMS of points 1.0 1.6 1.0 1.2
RMS of lines 3.9 2.2 0.8 1.1
Factorization
RMS of points 1.0 1.1 0.9 0.9
RMS of lines 3.9 1.1 0.7 0.7
6.2. Real data
Two sets of images have been used in order evaluate
the dioeerent methods. The -rst set is used to
verify the performance on real images, and the second
set is used for a comparison with the method
of Jacobs.
6.2.1. Statue sequence A sequence of 12 images
was taken of an outdoor statue containing both
points, lines and conics. More precisely, the
statue consists of two ellipses lying on two dioeerent
planes in space and the two ellipses are connected
by straight lines, almost like a hyperboloid, see
Figure
1. There are in total 80 lines between the
ellipses. In total, four dioeerent experiments were
performed on these images.
In the -rst three experiments only 5 images were
used. In these images, 17 points, 17 lines and the
ellipses were picked out by hand in all images.
For the ellipses and the lines, the appropriate representations
were calculated by least-squares.
In the -rst experiment, only the second order
closure constraints between images i and i +1 and
between images i and used. The reconstructed
points, lines and conics were obtained by
intersection using the computed camera matrices.
The detected and re-projected features are shown
in
Figure
1.
In the second experiment, only the third order
closure constraints between images
used. The tensors were estimated from
both point, line and conic correspondences. The
camera matrices were calculated from the closure
constraints and the 3D features were obtained by
intersection. The detected and re-projected features
are shown in Figure 2 together with the re-constructed
3D model.
The third experiment was performed on the
same data as the -rst two, but the factorization
method was applied. In Figure 2, a comparison is
given for the three methods. The third order closure
constraints yield better results than the second
order constraints as expected. However, the
factorization method is outperformed by the third
order closure constraints which was unexpected.
Table 3. Table showing simulated results for points, lines and 3 conics in a different number of views, with an added error of standard deviation 1, for the factorization approaches and using third order closure constraints. Factorization I uses only conic centres, while Factorization II uses an additional point on each conic curve.
Factorization I
RMS of points 0.84 0.73 0.69 0.65
RMS of lines 0.62 0.62 0.70 0.73
RMS of conics 1.00 0.76 0.78 0.76
Factorization II
RMS of points 0.87 1.00 1.25 1.59
RMS of lines 0.67 0.98 1.45 1.90
RMS of conics 1.02 1.07 1.43 1.71
Closure Constr.
RMS of points 0.86 0.75 0.68 0.65
RMS of lines 0.64 0.64 0.70 0.75
RMS of conics 1.20 0.84 0.86 0.85
Fig. 1. The second and fourth image of the sequence, with detected points, lines and conics together with re-projected points, lines and conics using the second order closure constraints.
Fig. 2. The second image of the sequence, with detected and re-projected points, lines and conics together with the
reconstructed 3D model using the third order closure constraints.
Fig. 3. Root mean square (RMS) error of second and third order closure constraints, and factorization, for five images in the statue sequence.
The final experiment was performed on all 12 images of the statue. In these images, there is a lot of missing data, i.e., all features are not visible in all images. The reconstruction then has to be based on the closure constraints. In the resulting 3D model, the two ellipses and the 80 lines were reconstructed together with 80 points, see Figure 4. The resulting structure and motion was also refined using bundle adjustment techniques, cf. (Atkinson 1996), to get, in a sense, an optimal reconstruction, and compared to that of the original one. To get an idea of the errors caused by the affine camera model, the result was also used as initialization for a bundle adjustment algorithm based on the projective camera model. The comparison is given in Figure 5, image per image. The quality of the output from the method based on the closure constraints is not optimal, but fairly accurate. If further accuracy is required, it can serve as a good initialization to a bundle adjustment algorithm for the affine or the full projective/perspective model.
6.2.2. Box sequence As a final test, we have compared our method to that of Jacobs, described in (Jacobs 1997). Naturally, we can only use point features, since Jacobs' method is only valid for those. As described in Section 5.1, it works by finding a rank three approximation of the measurement matrix. Since this original version incorrectly compensates for the translational component, we have included a modified version which does this properly by finding a rank four approximation. We have used Jacobs' own implementation in Matlab, for both versions.
As a test sequence, we have chosen the box sequence, which was also used by Jacobs in his paper. The sequence, which originates from the Computer Vision Laboratory at the University of Massachusetts, contains forty points tracked across eight images. One frame is shown in Figure 6. We generated artificial occlusions by assuming that each point is occluded for some fraction of the sequence. The fraction is randomly chosen for each point from a uniform distribution. These settings are the same as in (Jacobs 1997).
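A simple way to generate such occlusion patterns is sketched below. The paper does not say whether the occluded frames are contiguous, so the contiguous-block choice (and the max_fraction parameter name) is an assumption made for illustration.

```python
import numpy as np

def occlusion_mask(n_points, n_frames, max_fraction, rng):
    """Visibility mask (n_points x n_frames): each point is occluded in
    a contiguous block whose length is drawn uniformly from
    [0, max_fraction * n_frames]."""
    mask = np.ones((n_points, n_frames), dtype=bool)
    for i in range(n_points):
        length = rng.integers(0, int(max_fraction * n_frames) + 1)
        if length:
            start = rng.integers(0, n_frames - length + 1)
            mask[i, start:start + length] = False
    return mask

rng = np.random.default_rng(7)
mask = occlusion_mask(40, 8, 0.5, rng)
print(mask.shape, 1.0 - mask.mean())   # (40, 8) and the occluded fraction
```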
Fig. 4. The full reconstruction of the statue based on the third order closure constraints.
Fig. 5. Root mean square (RMS) error of third order closure constraints, affine bundle adjustment and projective bundle adjustment for each image in the statue sequence.
For Jacobs' algorithm, the maximum number of triplets (quadruples) has been set to the actual number of available triplets (quadruples). However, this is only an upper limit. Jacobs chooses triplets until the nullspace matrix of all triplets occupies ten times as many columns as the original measurement matrix. We have set this threshold to 100 times. In turn, all possible third order closure constraints for the sequence are calculated.
Fig. 6. One image of the box sequence.
Fig. 7. Averaged RMS error over 100 trials. The error is plotted against the average fraction of frames in which a point is occluded. The tested methods are Jacobs' rank three and four methods and the closure based method.
In Figure 7, the result is graphed for Jacobs' rank three approximation, rank four approximation and the method of closure constraints. The result for the rank three version is clearly biased. The performance of the rank four and closure based methods are similar up to about percent missing data. With more missing data, the closure method is superior. Based on this experiment, the closure constraints are preferable both in terms of stability and complexity.
7. Conclusions
In this paper, we have presented an integrated approach to the structure and motion problem for the affine camera model. Correspondences of points, lines and conics have been handled in a unified manner to reconstruct the scene and the camera positions. The proposed scheme is illustrated on both simulated and real data.
Appendix A
Proof: (of Proposition 2) The number of linearly
independent equations (in the components
of T IJK ) can be calculated as follows. The 15 linear
constraints obtained from the minors of M in
(15) are not linearly independent, i.e., there exists
non-trivial combinations of these constraints that
vanishes. Consider the matrix4
A I x I x I
A J x J x J
obtained from M by duplicating its last column.
This matrix is obviously of rank ! 5, implying
that all 5 \Theta 5 minors vanish. There are 6 such
minors and they can be written (using Laplacian
expansions) as linear equations in the previously
obtained linear constraints (minors from the -rst
four columns) with image coordinates (elements
from the last column) as coeOEcients. This gives 6
linear dependencies on the 15 original constraints,
called second order constraints. On the other
hand it is obvious that all linear constraints on
the originally obtained 15 constraints can be written
as the vanishing of minors from a determinant
of the form4
A I x I k 1
A J x J k 2
Hence the vector [
is a linear combination
of the other columns of the matrix and since
it has to be independent of A I , A J and AK , we
deduce that we have obtained all possible second
order linear constraints.
The process does not stop here, since these second
order constraints are not linearly indepen-
dent. This can be seen by considering the matrix4
A I x I x I x I
A J x J x J x J
Again Laplacian expansions give one third order
constraint. To sum up we have
linearly independent constraints for two corresponding
points. The similar reasoning as before
gives that all possible second order constraints has
been obtained.
Using three corresponding points we obtain 10
linearly independent constraints from the second
linearly independent constraints from
the third point. However, there are linear dependencies
among these 20 constraints. To see this
consider the matrix4
A I x I - x I
A J x J -
x J
x denotes the third point. Using Laplacian
expansions of the 5 \Theta 5 minors we obtain 6 bilinear
expressions in x and -
x with the components of the
third order combination of aOEne tensors as coeOE-
cients. Each such minor give a linear dependency
between the constraints, i.e., 6 second order con-
straints. Again there are third order constraints
obtained from4
A I x I x I -
x I
A J x J x J - x J
and 2A I x I - x I -
x I
A J x J -
giving in total 2 third order constraints. To sum
up we have independent
constraints. We note again that all possible linear
constraints have been obtained according to the
same reasoning as above.
The same analysis can be made for the case
of four point matches. First we have 10 linearly
independent constraints from each point (apart
from the -rst one) and each pair of corresponding
points give 4 second order linear constraints,
giving constraints. Then one
third order constraint can be obtained from the
determinant of4
A I x I - x I -
x I
A J x J -
where - x denote the fourth point, giving
linearly independent constraints for
four points. Again all possible constraints have
been obtained, which concludes the proof.
Remark. The rank condition rank M ! 4 is
equivalent to the vanishing of all 4 \Theta 4 minors of
M . These minors are algebraic equations in the
24 elements of M . These (non-linear) equations
de-ne a variety in 24 dimensional space. The dimension
of this variety is a well-de-ned number, in
this case 21, which means that the co-dimension
is 3. This means that, in general (at all points
on the variety except for a subset of measure zero
in the Zariski topology), the variety can locally
be described as the vanishing of three polynomial
equations. This can be seen by making row and
column operations on M until it has the following
structure 2
where p, q and r are polynomial expressions in the
entries of M . The matrix above has rank ! 4 if
and only if
equations de-ne the variety locally. The points
on the variety where the rank condition can not
locally be described by three algebraic equations
are the ones where all of the 3 \Theta 3 minors of M
vanishes, which is a closed (and hence of measure
zero) subset in the Zariski topology.
Remark. Since we are interested in linear con-
straints, we obtain 10 linearly independent equations
instead of the 3 so-called algebraically independent
equations described in the previous re-
mark. However, one can not select 10 such constraints
in advance that will be linearly independent
for every point match. Therefore, in numerical
computations, it is better to use all of them.
Proof: (of Proposition 4) It is easy to see that
there are no second (or higher) order linear constraints
involving only the 4 constraints in (23).
Neither are there any higher order constraints for
the two sets of (23) involving two dioeerent points,
x and - x. Finally, for four dioeerent points, there
can be no more than 11 linearly independent con-
straints, since according to (24) the matrix containing
all constraints has a non-trivial null-space.
Notes
1. Again the choice of defining a contravariant tensor is arbitrarily made. In fact, the tensor could have been defined covariantly, which is the choice used in (Quan and Kanade 1997). Transformations between these representations (and other intermediate ones, such as covariant in one index and contravariant in the other ones) can easily be made.
2. The tensor t^ijk can also be used to transfer directions seen in two of the three images to a direction in the third one, using the mixed form t^i_jk.
3. This has been confirmed under various imaging conditions, like e.g., closely spaced images.
--R
Close Range Photogrammetry and Machine Vision
Use your hand as a 3-d mouse
What can be seen in three dimensions with an uncalibrated stereo rig?
Geometry and Algebra of Multiple Projective Transformations
Structure and motion from points
Theory of Reconstruction from Image Motion
A unifying framework for structure and motion recovery from image sequences
Geometric invariance in Computer Vision
Affine structure from line correspondences with uncalibrated affine cameras
A new linear method for Euclidean motion/structure from three calibrated affine views
Algebraic Projective Geometry
Affine Analysis of Image Sequences
3d motion recovery via affine epipolar geometry
Relative affine structure: Canonical model for 3d from 2d geometry and applications
Simultaneous reconstruction of scene structure and camera locations from uncalibrated image sequences
A factorization based algorithm for multi-image projective structure and motion
Shape and motion from image streams under orthography: a factorization method
Motion Segmentation and Outlier Detection
Factorization methods for projective structure and motion
Linear projective reconstruction from matching tensors
Motion and structure from line correspondences: Closed-form solution
--TR
Motion and Structure from Line Correspondences; Closed-Form Solution, Uniqueness, and Optimization
Shape and motion from image streams under orthography
Geometric invariance in computer vision
Conics-based stereo, motion estimation, and pose determination
Affine analysis of image sequences
3D motion recovery via affine epipolar geometry
Relative Affine Structure
Affine Structure from Line Correspondences With Uncalibrated Affine Cameras
What can be seen in three dimensions with an uncalibrated stereo rig
A Factorization Based Algorithm for Multi-Image Projective Structure and Motion
Use Your Hand as a 3-D Mouse, or, Relative Orientation from Extended Sequences of Sparse Point and Line Correspondences Using the Affine Trifocal Tensor
Structure and Motion from Points, Lines and Conics with Affine Cameras
Linear Fitting with Missing Data
Factorization Methods for Projective Structure and Motion
A New Linear Method for Euclidean Motion/Structure from Three Calibrated Affine Views
A unifying framework for structure and motion recovery from image sequences
Simultaneous Reconstruction of Scene Structure and Camera Locations from Uncalibrated Image Sequences
--CTR
Yi Ma , Kun Huang , Ren Vidal , Jana Koeck , Shankar Sastry, Rank Conditions on the Multiple-View Matrix, International Journal of Computer Vision, v.59 n.2, p.115-137, September 2004
Leo Reyes , Eduardo Bayro-Corrochano, Simultaneous and Sequential Reconstruction of Visual Primitives with Bundle Adjustment, Journal of Mathematical Imaging and Vision, v.25 n.1, p.63-78, July 2006
Fredrik Kahl , Anders Heyden , Long Quan, Minimal Projective Reconstruction Including Missing Data, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.4, p.418-424, April 2001
Jacob Goldberger, Reconstructing camera projection matrices from multiple pairwise overlapping views, Computer Vision and Image Understanding, v.97 n.3, p.283-296, March 2005
Nicolas Guilbert , Adrien Bartoli , Anders Heyden, Affine Approximation for Direct Batch Recovery of Euclidian Structure and Motion from Sparse Data, International Journal of Computer Vision, v.69 n.3, p.317-333, September 2006
Pei Chen , David Suter, An Analysis of Linear Subspace Approaches for Computer Vision and Pattern Recognition, International Journal of Computer Vision, v.68 n.1, p.83-106, June 2006
Pei Chen , David Suter, A Bilinear Approach to the Parameter Estimation of a General Heteroscedastic Linear System, with Application to Conic Fitting, Journal of Mathematical Imaging and Vision, v.28 n.3, p.191-208, July 2007
Hayman , Torfi Thrhallsson , David Murray, Tracking While Zooming Using Affine Transfer and Multifocal Tensors, International Journal of Computer Vision, v.51 n.1, p.37-62, January | closure constraints;matching constraints;reconstruction;affine cameras;multiple view tensors;factorization methods |
335180 | Large Occlusion Stereo. | A method for solving the stereo matching problem in the presence of large occlusion is presented. A data structure, the disparity space image, is defined to facilitate the description of the effects of occlusion on the stereo matching process and, in particular, on dynamic programming (DP) solutions that find matches and occlusions simultaneously. We significantly improve upon existing DP stereo matching methods by showing that while some cost must be assigned to unmatched pixels, sensitivity to occlusion cost and algorithmic complexity can be significantly reduced when highly reliable matches, or ground control points, are incorporated into the matching process. The use of ground control points eliminates both the need for biasing the process towards a smooth solution and the task of selecting critical prior probabilities describing image formation. Finally, we describe how the detection of intensity edges can be used to bias the recovered solution such that occlusion boundaries will tend to be proposed along such edges, reflecting the observation that occlusion boundaries usually cause intensity discontinuities. | Introduction
Our world is full of occlusion. In any scene, we are
likely to find several, if not several hundred, occlusion
edges. In binocular imagery, we encounter occlusion
times two. Stereo images contain occlusion edges that
are found in monocular views and occluded regions
that are unique to a stereo pair[ 7]. Occluded regions
are spatially coherent groups of pixels that can be seen
Aaron Bobick was at the MIT Media Laboratory when the work
was performed.
in one image of a stereo pair but not in the other. These
regions mark discontinuities in depth and are important
for any process which must preserve object boundaries,
such as segmentation, motion analysis, and object iden-
tification. There is psychophysical evidence that the
human visual system uses geometrical occlusion relationships
during binocular stereopsis[ 27, 24, 1]
to reason about the spatial relationships between objects
in the world. In this paper we present a stereo
algorithm that does so as well.
Although absolute occlusion sizes in pixels depend
upon the configuration of the imaging system, images
Fig. 1. Noisy stereo pair of a man and kids. The largest occlusion region in this image is 93 pixels wide, or 13 percent of the image.
of everyday scenes often contain occlusion regions
much larger than those found in popular stereo test im-
agery. In our lab, common images like Figure 1 contain
disparity shifts and occlusion regions over eighty pixels
wide. 1 Popular stereo test images, however, like the
JISCT test set[ 9], the "pentagon" image, the "white
house" image, and the "Renault part" image have maximum
occlusion disparity shifts on the order of 20 pixels
wide. Regardless of camera configuration, images
of the everyday world will have substantially larger
occlusion regions than aerial or terrain data. Even processing
images with small disparity jumps, researchers
have found that occlusion regions are a major source
of error[ 3].
Recent work on stereo occlusion, however, has
shown that occlusion processing can be incorporated
directly into stereo matching[ 7, 17, 14, 20]. Stereo
imagery contains both occlusion edges and occlusion
regions[ 17]. Occlusion regions are spatially coherent
groups of pixels that appear in one image and not in
the other. These occlusion regions are caused by occluding
surfaces and can be used directly in stereo and
occlusion reasoning. 2
This paper divides into two parts. The first several
sections concentrate on the recovery of stereo matches
in the presence of significant occlusion. We begin by
describing previous research in stereo processing in
which the possibility of unmatched pixels is included
in the matching paradigm. Our approach is to explicitly
model occlusion edges and occlusion regions and
to use them to drive the matching process. We develop
a data structure which we will call the disparity-space
image (DSI), and we use this data structure to describe
the dynamic-programming approach to stereo that finds matches and occlusions simul-
taneously. We show that while some cost must be incurred
by a solution that proposes unmatched pixels, an
algorithm's occlusion-cost sensitivity and algorithmic
complexity can be significantly reduced when highly-reliable
matches, or ground control points (GCPs), are
incorporated into the matching process. Experimental
results demonstrate robust behavior with respect to
occlusion pixel cost if the GCP technique is employed.
The second logical part of the paper is motivated
by the observation that monocular images also contain
information about occlusion. Different objects in the
world have varying texture, color, and illumination.
Therefore occlusion edges - jump edges between
these objects or between significantly disparate parts
of the same object - nearly always generate intensity
edges in a monocular image. The final sections of this
paper consider the impact of intensity edges on the disparity
space images and extends our stereo technique
to exploit information about intensity discontinuities.
We note that recent psychophysical evidence strongly
supports the importance of edges in the perception of
occlusion.
2. Previous Occlusion and Stereo Work
Most stereo researchers have generally either ignored
occlusion analysis entirely or treated it as a secondary
process that is postponed until matching is completed
and smoothing is underway[ 4, 15]. A few authors
have proposed techniques that indirectly address
the occlusion problem by minimizing spurious mis-matches
resulting from occluded regions and discontinuities.
Belhumeur has considered occlusion in several pa-
pers. In [ 7], Belhumeur and Mumford point out that
occluded regions, not just occlusion boundaries, must
be identified and incorporated into matching. Using
this observation and Bayesian reasoning, an energy
functional is derived using pixel intensity as the matching
feature, and dynamic programming is employed to
find the minimal-energy solution. In [ 5] and [ 6]
the Bayesian estimator is refined to deal with sloping
and creased surfaces. Penalty terms are imposed for
proposing a break in vertical and horizontal smoothness
or a crease in surface slope. Belhumeur's method
requires the estimation of several critical prior terms
which are used to suspend smoothing operations.
Geiger, Ladendorf, and Yuille[ 17, 18] also directly
address occlusion and occlusion regions by defining an
a priori probability for the disparity field based upon a
smoothness function and an occlusion constraint. For
matching, two shifted windows are used in the spirit of
[ 25] to avoid errors over discontinuity jumps. Assuming
the monotonicity constraint, the matching problem
is solved using dynamic programming. Unlike in Bel-
humeur's work, the stereo occlusion problem is formulated
as a path-finding problem in a left-scanline
to right-scanline matching space. Geiger et al. make
explicit the observation that "a vertical break (jump)
in one eye corresponds to a horizontal break (jump) in
the other eye."
Finally, Cox et al.[ 14] have proposed a dynamic programming
solution to stereo matching that does not require
the smoothing term incorporated into Geiger and
Belhumeur's work. They point out that several equally
good paths can be found through matching space using
only the occlusion and ordering constraints. To provide
enough constraint for their system to select a single so-
lution, they optimize a Bayesian maximum-likelihood
cost function minimizing inter- and intra-scanline disparity
discontinuities. The work of Cox et al. is the
closest to the work we present here in that we also do
not exploit any explicit smoothness assumptions in our
DP solution.
3. The DSI Representation
In this section we describe a data structure we call the
disparity-space image, or DSI. We have used the data
structure to explore the occlusion and stereo problem
and it facilitated our development of a dynamic programming
algorithm that uses occlusion constraints.
The DSI is an explicit representation of matching
space; it is related to figures that have appeared in
previous work [ 25, 28, 13, 17, 18].
3.1. DSI Creation for Ideal Imagery
We generate the DSI representation for i th scanline in
the following way: Select the i th scanline of the left
and right images, s L
and s R
respectively, and slide
them across one another one pixel at a time. At each
step, the scanlines are subtracted and the result is entered
as the next line in the DSI. The DSI representation
stores the result of subtracting every pixel in s L
i with
every pixel s R
and maintains the spatial relationship
between the matched points. As such, it may be considered
an (x, disparity) matching space, with x along
the horizontal, and disparity d along the vertical. Given
two images I^L and I^R, the value of the DSI is given by

DSI^L_i(x, d) = I^L(x, i) - I^R(x - d, i),  \quad 0 \le x \le N - 1, \; 0 \le x - d \le N - 1,

with N being the horizontal size of the image. The superscript
of L on DSI^L indicates the left DSI; DSI^R_i is simply a skewed version of DSI^L_i.
The above definition generates a "full" DSI where
there is no limit on disparity. By considering camera
geometry, we can crop the representation. In the case
of parallel optic axes, objects are shifted to the right
in the left image. No matches will be found searching
in the other direction. Further, if a maximum possible
disparity dmax is known, then no matches will be
found by shifting right more than dmax pixels. These
limitations permit us to crop the top and bottom of the full DSI, keeping only the rows corresponding to disparities between 0 and dmax. DSI generation is illustrated
in Figure 2.
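As an illustration of the construction just described, the following sketch builds the cropped subtraction DSI for one scanline. It assumes NumPy arrays and a known maximum disparity dmax; the function name and the use of infinity as an "invalid" marker for non-overlapping positions are our own choices, not part of the original method.

```python
import numpy as np

def subtraction_dsi(left, right, i, d_max):
    """Cropped subtraction DSI for scanline i: rows are disparities 0..d_max.

    Entry (d, x) holds |I_L(x, i) - I_R(x - d, i)|; positions where the
    shifted right scanline does not overlap the left one are marked invalid.
    (Illustrative sketch, not the authors' implementation.)
    """
    sL = left[i].astype(float)
    sR = right[i].astype(float)
    n = sL.shape[0]
    dsi = np.full((d_max + 1, n), np.inf)   # inf marks "no overlap"
    for d in range(d_max + 1):
        # left pixels x = d..n-1 align with right pixels x - d = 0..n-1-d
        dsi[d, d:] = np.abs(sL[d:] - sR[:n - d])
    return dsi
```

The dark, near-zero path visible in Figure 4d corresponds to the low-valued entries of such an array.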
3.2. DSI Creation for Imagery with Noise
To make the DSI more robust to effects of noise, we
can change the comparison function from subtraction
to correlation. We define g^L_i and g^R_i as the groups of scanlines centered around s^L_i and s^R_i; g^L_i and g^R_i are shifted across each other to generate the DSI
representation for scanline i. Instead of subtracting at
a single pixel, however, we compare mean-normalized
windows in g^L and g^R:
Fig. 2. This figure describes how a DSI L
i is generated. The corresponding epipolar scanlines from the left and right images are used. The
scanline from the left image is held still as the scanline from the right image is shifted across. After each pixel shift, the scanlines are absolute
differenced. The result from the overlapping pixels is placed in the resulting DSI L
. The DSI L
i is then cropped, since we are only interested in
disparity shifts that are zero or greater since we assume we have parallel optical axes in our imaging system.
DSI^L_i(x, d) = \sum_{s=-c_y}^{w_y - c_y} \sum_{t=-c_x}^{w_x - c_x} \big[ (I^L(x+t, i+s) - M^L) - (I^R(x-d+t, i+s) - M^R) \big]^2   (2)

where w_x \times w_y is the size of the window, (c_x, c_y) is the location of the reference point (typically the center) of the window, and M^L (M^R) is the mean of the window in the left (right) image:

M^L = \frac{1}{w_x w_y} \sum_{s=-c_y}^{w_y - c_y} \sum_{t=-c_x}^{w_x - c_x} I^L(x+t, i+s)
Normalization by the mean eliminates the effect of
any additive bias between left and right images. If
there is a multiplicative bias as well, we could perform
normalized correlation instead [ 19].
Using correlation for matching reduces the effects of
noise. However, windows create problems at vertical
and horizontal depth discontinuities where occluded
regions lead to spurious matching. We solve this problem
using a simplified version of adaptive windows[
22]. At every pixel location we use 9 different windows
to perform the matching. The windows are shown in
Figure
3. Some windows are designed so that they will
match to the left, some are designed to match to the
right, some are designed to match towards the top, and
so on. At an occlusion boundary, some of the filters
will match across the boundary and some will not. At
each pixel, only the best result from matching using
all 9 windows is stored. Bad matches resulting from
occlusion tend to be discarded. If we define C x ; C y to
be the possible window reference points c x ; c y , respec-
tively, then DSI^L_i is generated by

DSI^L_i(x, d) = \min_{c_x \in C_x,\; c_y \in C_y} C_{c_x, c_y}(x, d),

where C_{c_x, c_y}(x, d) denotes the window cost of Eq. (2) evaluated with reference point (c_x, c_y).
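A rough sketch of this nine-window matching cost follows. Border and overlap checks are omitted, and the window geometry (three reference-point positions per axis) is our reading of Figure 3, so treat the function names and defaults as illustrative rather than as the exact filter set used in the experiments.

```python
import numpy as np

def window_cost(IL, IR, x, y, d, wx, wy, cx, cy):
    """Mean-normalized SSD between a wx-by-wy window of the left image,
    anchored at reference point (cx, cy), and the window shifted by
    disparity d in the right image (a sketch of Eq. (2)).
    Border checks are omitted for brevity."""
    L = IL[y - cy:y - cy + wy, x - cx:x - cx + wx].astype(float)
    R = IR[y - cy:y - cy + wy, x - d - cx:x - d - cx + wx].astype(float)
    return np.sum(((L - L.mean()) - (R - R.mean())) ** 2)

def nine_window_dsi_value(IL, IR, x, y, d, wx=7, wy=7):
    """DSI value as the minimum cost over nine shifted reference points,
    so that at least one window avoids straddling an occlusion boundary."""
    refs = [(cx, cy) for cx in (0, wx // 2, wx - 1)
                     for cy in (0, wy // 2, wy - 1)]
    return min(window_cost(IL, IR, x, y, d, wx, wy, cx, cy)
               for cx, cy in refs)
```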
To test the correlation DSI and other components
of our stereo method, we have produced a more interesting
version of the three-layer stereo wedding cake
image frequently used by stereo researchers to assess
algorithm performance. Our cake has three square lay-
ers, a square base, and two sloping sides. The cake is
"iced" with textures cropped from several images. A
side view of a physical model of the sloping wedding
cake stereo pair is shown in Figure 4a, a graph of the
Fig. 3. To reduce the effects of noise in DSI generation, we have
used 9 window matching, where window centers (marked in black)
are shifted to avoid spurious matches at occlusion regions and discontinuity
jumps.
depth profile of a scanline through the center of the
cake is shown in Figure 4b, and a noiseless simulation
of the wedding cake stereo pair is shown in Figure 4c.
The sloping wedding cake is a challenging test example
since it has textured and homogeneous regions,
huge occlusion jumps, a disparity shift of 84 pixels for
the top level, and flat and sloping regions. The en-
hanced, cropped DSI for the noiseless cake is shown
in Figure 4d. Note that this is a real, enhanced image.
The black-line following the depth profile has not been
added, but results from enhancing near-zero values.
A noisy image cake was generated with Gaussian
white noise. The DSI generated for the
noisy cake is displayed in Figure 4e. Even with large
amounts of noise, the "near-zero" dark path through
the DSI disparity space is clearly visible and sharp
discontinuities have been preserved.
3.3. Structure of the DSI
Figure 4d shows the cropped, correlation DSI for a
scanline through the middle of the test image pair
shown in Figure 4c. Notice the characteristic streaking
pattern that results from holding one scanline still
and sliding the other scanline across. When a textured
region on the left scanline slides across the corresponding
region in the right scanline, a line of matches can
be seen in the DSI L
. When two texture-less matching
regions slide across each other, a diamond-shaped region
of near-zero matches can be observed. The more
homogeneous the region is, the more distinct the resulting
diamond shape will be. The correct path through
DSI space can be easily seen as a dark line connecting
block-like segments.
4. Occlusion Analysis and DSI Path Constraint
In a discrete formulation of the stereo matching prob-
lem, any region with non-constant disparity must have
associated unmatched pixels. Any slope or disparity
jump creates blocks of occluded pixels. Because
of these occlusion regions, the matching zero path
through the image cannot be continuous. The regions
labeled "D" in Figure 4d mark diagonal gaps in the
enhanced zero line in DSI L
. The regions labeled "V"
mark vertical jumps from disparity to disparity. These
jumps correspond to left and right occlusion regions.
We use this "occlusion constraint"[ 17] to restrict the
Fig. 4. This figure shows (a) a model of the stereo sloping wedding cake that we will use as a test example, (b) a depth profile through the center
of the sloping wedding cake, (c) a simulated, noise-free image pair of the cake, (d) the enhanced, cropped, correlation DSI L
representation
for the image pair in (c), and (e) the enhanced, cropped, correlation DSI for a noisy sloping wedding cake. In (d), the regions
labeled "D" mark diagonal gaps in the matching path caused by regions occluded in the left image. The regions labeled "V" mark vertical
jumps in the path caused by regions occluded in the right image.
type of matching path that can be recovered from each
DSI i . Each time an occluded region is proposed, the
recovered path is forced to have the appropriate vertical
or diagonal jump.
The fact that the disparity path moves linearly
through the disparity gaps does not imply that we
presume a linear interpolation of disparities or a
smooth interpolation of depth in the occluded regions.
Rather, the line simply reflects the occlusion constraint
that a set of occluded pixels must be accounted for by
a disparity jump of an equal number of pixels.
Nearly all stereo scenes obey the ordering constraint
(or monotonicity constraint): if object a is to the left of object b in the left image, then a will be to the left of b in the right image. Thin objects with large disparities
violate this rule, but they are rare in many scenes
of interest. Exceptions to the monotonicity constraint
and a proposed technique to handle such cases is given
in [ 16]. By assuming the ordering rule we can impose
a second constraint on the disparity path through
the DSI that significantly reduces the complexity of
the path-finding problem. In the DSI L
, moving from
left to right, diagonal jumps can only jump forward
(down and across) and vertical jumps can only jump
backwards (up).
It is interesting to consider what happens when the
ordering constraint does not hold. Consider an example
of a skinny pole or tree significantly in front of a
building. Some region of the building will be seen in
the left eye as being to the left of the pole, but in the
right eye as to the right of the pole. If a stereo system
is enforcing the ordering constraint it can generate two
possible solutions. In one case it can ignore the pole
completely, considering the pole pixels in the left and
right image as simply noise. More likely, the system
will generate a surface that extends sharply forward
to the pole and then back again to the background.
The pixels on these two surfaces would actually be
the same, but the system would consider them as un-
matched, each surface being occluded from one eye by
the pole. Later, where we describe the effect of ground
control points, we will see how our system chooses
between these solutions.
5. Finding the Best Path
Using the occlusion constraint and ordering constraint,
the correct disparity path is highly constrained. From
any location in the DSI L
i , there are only three directions
a path can take - a horizontal match, a diagonal
occlusion, and a vertical occlusion. This observation
allows us to develop a stereo algorithm that integrates
matching and occlusion analysis into a single process.
However, the number of allowable paths obeying
these two constraints is still huge. 3 As noted by previous
researchers [ 17, 14, 18] one can formulate
the task of finding the best path through the DSI as a
dynamic programming (DP) path-finding problem in
disparity) space. For each scanline i, we wish
to find the minimum cost traversal through the DSI i
image which satisfies the occlusion constraints.
5.1. Dynamic Programming Constraints
DP algorithms require that the decision making process
be ordered and that the decision at any state depend
only upon the current state. The occlusion constraint
and ordering constraint severely limit the direction the
path can take from the path's current endpoint. If we
base the decision of which path to choose at any pixel
only upon the cost of each possible next step in the
path and not on any previous moves we have made, we
satisfy the DP requirements and can use DP to find the
optimal path.
As we traverse through the DSI image constructing
the optimal path, we can consider the system as being in
any one of three states: match (M), vertical occlusion
(V), or diagonal occlusion (D). Figure 5 symbolically
shows the legal transitions between each type of state.
We assume, without loss of generality, that the traversal
starts at one of the top corners of the DSI.
The application of dynamic programming to the
stereo problem reveals the power of these techniques. Formulated as a DP problem, finding the best path through a DSI of width N and disparity range D requires considering only N \times D dynamic
programming nodes (each node being a potential place
along the path). For the 256 pixel wide version of
the sloping wedding cake example, the computation
considers 11,520 nodes.
To apply DP a cost must be assigned to each (DSI)
pixel in the path depending upon its state. As indi-
cated, a pixel along a path is either in one of the two
"occlusion" states - vertical or diagonal - or is a
"matched" pixel. The cost we assign to the matched
pixels is simply the absolute value of the DSI L
i pixel at
the match point. 4 The better the match, the lower the
cost assessed. Therefore the algorithm will attempt to
maximize the number of "good" matches in the final
path.
However, the algorithm is also going to propose unmatched
points - occlusion regions - and we need
to assign a cost for unmatched pixels in the vertical
and diagonal jumps. Otherwise the "best path" would
be one that matches almost no pixels, and traverses the
DSI alternating between vertical and diagonal occlusion
regions.
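The recurrence below sketches one way to implement this three-state traversal (match, vertical occlusion, diagonal occlusion) over a cropped DSI, with a constant per-pixel occlusion cost c_o. The direction conventions (diagonal moves increase disparity with x, vertical moves decrease disparity at fixed x) follow our reading of Figure 5 and Section 4, and the initialization and backpointer bookkeeping are simplified; it is a sketch, not the authors' implementation.

```python
import numpy as np

def dp_scanline_cost(dsi, c_o):
    """Minimum-cost traversal of one cropped DSI (rows: disparities 0..d_max,
    columns: pixels) under the occlusion and ordering constraints of Fig. 5.

    Moves (our reading of the paper's conventions):
      match    : (x-1, d)   -> (x, d), cost dsi[d, x]
      diagonal : (x-1, d-1) -> (x, d), cost c_o   (occluded pixel)
      vertical : (x,   d+1) -> (x, d), cost c_o   (occluded pixel)
    Returns the three cost tables; backpointers (omitted here) would recover
    the per-pixel disparity/occlusion labelling.
    """
    dmax1, n = dsi.shape
    INF = np.inf
    M = np.full((dmax1, n), INF)   # match state
    V = np.full((dmax1, n), INF)   # vertical occlusion state
    D = np.full((dmax1, n), INF)   # diagonal occlusion state

    M[:, 0] = dsi[:, 0]            # allow the path to start anywhere in column 0
    for d in range(dmax1 - 2, -1, -1):            # vertical jumps within column 0
        V[d, 0] = c_o + min(M[d + 1, 0], V[d + 1, 0])

    for x in range(1, n):
        # match and diagonal states depend only on the previous column
        prev_any = np.minimum(np.minimum(M[:, x - 1], V[:, x - 1]), D[:, x - 1])
        M[:, x] = dsi[:, x] + prev_any
        D[1:, x] = c_o + np.minimum(M[:-1, x - 1], D[:-1, x - 1])
        # vertical state depends on the row above in the same column
        for d in range(dmax1 - 2, -1, -1):
            V[d, x] = c_o + min(M[d + 1, x], V[d + 1, x])
    return M, V, D
```

The total path cost is the minimum over the three tables in the last column; storing backpointers alongside the minima would recover the recovered disparity path and its occlusion segments.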
5.2. Assigning occlusion cost
Unfortunately, slight variations in the occlusion pixel
cost can change the globally minimum path through
the DSI L
space, particularly with noisy data[ 14]. Because
this cost is incurred for each proposed occluded
pixel, the cost of proposed occlusion region is linearly
proportional to the width of the region. Consider the
example illustrated in Figure 6. The "correct" solution
is the one which starts at region A, jumps forward diagonally
6 pixels to region B where disparity remains
constant for 4 pixels, and then jumps back vertically 6
pixels to region C. The occlusion cost for this path is
is the pixel occlusion cost. If the c
is too great, a string of bad matches will be selected as
the lower-cost path, as shown. The fact that previous
DP solutions to stereo matching (e.g. [ 18]) present
results where they vary the occlusion cost from one
example to the next indicates the sensitivity of these
approaches to this parameter.
In the next section we derive an additional constraint
which greatly mitigates the effect of the choice of the
occlusion cost c o . In fact, all the results of the experiments
section use the same occlusion cost across
widely varying imaging conditions.
5.3. Ground control points
In order to overcome this occlusion cost sensitivity,
we need to impose another constraint in addition to
the occlusion and ordering constraints. However, unlike
previous approaches we do not want to bias the
solution towards any generic property such as smooth-
Fig. 5. State diagram of legal moves the DP algorithm can make when processing the DSI^L_i. From the match state, the path can move vertically
up to the vertical discontinuity state, horizontally to the match state, or diagonally to the diagonal state. From the vertical state, the path can
move vertically up to the vertical state or horizontally to the match state. From the diagonal state, the path can move horizontally to the match
state or diagonally to the diagonal state.
ness across occlusions[ 17], inter-scanline consistency[
25, 14], or intra-scanline "goodness"[ 14].
Instead, we use high confidence matching guesses:
Ground control points (GCPs). These points are used
to force the solution path to make large disparity jumps
that might otherwise have been avoided because of
large occlusion costs. The basic idea is that if a few
matches on different surfaces can identified before the
DP matching process begins, these points can be used
to drive the solution.
Figure
7 illustrates this idea showing two GCPs and
a number of possible paths between them. We note that
regardless of the disparity path chosen, the discrete lattice
ensures that path-a, path-b, and path-c all require
6 occlusion pixels. Therefore, all three paths incur
the same occlusion cost. Our algorithm will select the
path that minimizes the cost of the proposed matches
independent of where occlusion breaks are proposed
and (almost) independent of the occlusion cost value.
If there is a single occlusion region between the GCPs
in the original image, the path with the best matches is
similar to path-a or path-b. On the other hand, if the
region between the two GCPs is sloping gently, then
a path like path-c, with tiny, interspersed occlusion
jumps will be preferred, since it will have the better
matches. 5 The path through (x, disparity) space, there-
fore, will be constrained solely by the occlusion and
ordering constraints and the goodness of the matches
between the GCPs.
Of course, we are limited to how small the occlusion
cost can be. If it is smaller than the typical value
of correct matches (non-zero due to noise) 6 then the
algorithm proposes additional occlusion regions such
as in path-d of Figure 7. For real stereo images (such
as the JISCT test set [ 9]) the typical DSI value for
incorrectly matched pixels is significantly greater than
that of correctly matched ones and performance of the
algorithm is not particularly sensitive to the occlusion
cost.
Also, we note that while we have attempted to remove
smoothing influences entirely, there are situations
in which the occlusion cost induces smooth solu-
tions. If no GCP is proposed on a given surface, and if
the stereo solution is required to make a disparity jump
across an occlusion region to reach the correct disparity
level for that surface, then if the occlusion cost is high,
the preferred solution will be a flat, "smooth" surface.
As we will show in some of our results, even scenes
with thin surfaces separated by large occlusion regions
tend to give rise to an adequate number of GCPs (the
next section describes our method for selecting such
points). This experience is in agreement with results
indicating a substantial percentage of points in a stereo
pair can be matched unambiguously, such as Hannah's [ 19].
5.4. Why ground control points provide additional
constraint
Before proceeding it is important to consider why
ground control points provide any additional constraint
to the dynamic programming solution. Given that they
Fig. 6. The total occlusion cost for an object shifted D pixels is c_o \times D \times 2. If the cost becomes high, a string of bad matches
may be a less expensive path. To eliminate this undesirable effect, we must impose another constraint.
(In Figure 7, paths A, B, and C each contain 6 occluded pixels; path D contains 14.)
Fig. 7. Once a GCP has forced the disparity path through some disparity-shifted region, the occlusion will be proposed regardless of the cost
of the occlusion jump. The path between two GCPs will depend only upon the good matches in the path, since the occlusion cost is the same for
each of paths A, B, and C. Path D is an exception, since an additional occlusion jump has been proposed. While that path is possible, it is unlikely
the globally optimum path through the space will have any more occlusion jumps than necessary unless the data supporting a second occlusion
jump is strong.
represent excellent matches and therefore have very
low match costs it is plausible to expect that the lowest
cost paths through disparity space would naturally
include these points. While this is typically the case
when the number of occlusion pixels is small compared
to the number of matched pixels, it is not true
in general, and is particularly problematic in situations
with large disparity regions.
Consider again Figure 6. Let us assume that region B represents perfect matches and therefore has a match
cost of zero. These are the types of points which will
normally be selected as GCPs (as described in the next
section). Whether the minimal cost path from A to C
will go through region B is dependent upon the relative
magnitude between the occlusion cost incurred via
the diagonal and vertical jumps required to get to region
B and the incorrect match costs of the horizontal
path from A to C. It is precisely this sensitivity to the
occlusion cost that has forced previous approaches to
dynamic programming solutions to enforce a smoothness
constraint.
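In symbols (our paraphrase, using the notation of Figure 6): without a GCP in region B, the DP solution detours through B only when the accumulated cost of the incorrect horizontal matches exceeds the occlusion cost of the detour,

```latex
\sum_{x \in A \rightarrow C} \mathrm{DSI}(x, d_{AC})
\;>\;
2\,D\,c_o \;+\; \sum_{x \in B} \mathrm{DSI}(x, d_B),
```

where D is the disparity offset of region B and d_{AC}, d_B are the two disparity levels. Forcing a GCP inside B removes this comparison from the optimization altogether.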
5.5. Selecting and enforcing GCPs
If we force the disparity path through GCPs, their selection
must be highly reliable. We use several heuristic
filters to identify GCPs before we begin the DP pro-
Fig. 8. The use of multiple GCPs per column. Each path through the two outside GCPs has exactly the same occlusion cost, 6 c_o. As long as
the path passes through one of the 3 multi-GCPs in the middle column it avoids the (infinite) penalty of the prohibited pixels.
cessing; several of these are similar to those used by
Hannah [ 19] to find highly reliable matches. The first
heuristic requires that a control point be both the best
left-to-right and best right-to-left match. In the DSI approach
these points are easy to detect since such points
are those which are the best match in both their diagonal
and vertical columns. Second, to avoid spurious
"good" matches in occlusion regions, we also require
that a control point have match value that is smaller
than the occlusion cost. Third, we require sufficient
texture in the GCP region to eliminate homogeneous
patches that match a disparity range. Finally, to further
reduce the likelihood of a spurious match, we exclude
any proposed GCPs that have no immediate neighbors
that are also marked as GCPs.
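The filters above can be sketched as follows. The texture test is abstracted into an assumed precomputed per-pixel flag (texture_mask), and the neighbour test is reduced to the 8-neighbourhood in (x, disparity), so this is an approximation of the selection procedure rather than a faithful reimplementation.

```python
import numpy as np

def select_gcps(dsi, c_o, texture_mask):
    """Heuristic ground-control-point selection on a cropped correlation DSI
    (rows: disparities, columns: left-image pixels). A sketch of the four
    filters described in the text; texture_mask[x] is an assumed precomputed
    flag saying the neighbourhood of left pixel x has enough texture."""
    dmax1, n = dsi.shape
    cand = np.zeros_like(dsi, dtype=bool)
    for d in range(dmax1):
        for x in range(d, n):
            v = dsi[d, x]
            best_left = v <= dsi[:, x].min()              # best match for left pixel x
            diag = [dsi[dd, x - d + dd] for dd in range(dmax1)
                    if 0 <= x - d + dd < n]
            best_right = v <= min(diag)                   # best match for right pixel x - d
            if best_left and best_right and v < c_o and texture_mask[x]:
                cand[d, x] = True
    # keep only candidates with at least one immediate neighbouring candidate
    keep = np.zeros_like(cand)
    for d in range(dmax1):
        for x in range(n):
            if cand[d, x] and cand[max(d - 1, 0):d + 2, max(x - 1, 0):x + 2].sum() > 1:
                keep[d, x] = True
    return keep
```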
Once we have a set of control points, we force our
DP algorithm to choose a path through the points by
assigning zero cost for matching with a control point
and a very large cost to every other path through the
control point's column. In the DSI L
i , the path must
pass through each column at some pixel in some state.
By assigning a large cost to all paths and states in a
column other than a match at the control point, we have
guaranteed that the path will pass through the point.
An important feature of this method of incorporating
GCPs is that it allows us to have more than one GCP
per column. Instead of forcing the path through one
GCP, we force the path through one of a few GCPs in
a column as illustrated in Figure 8. Even if using multiple
windows and left-to-right, right-to-left matching,
it is still possible that we will label a GCP in error
if only one GCP per column is permitted. It is un-
likely, however, that none of several proposed GCPs in
a column will be the correct GCP. By allowing multiple
GCPs per column, we have eliminated the risk of
forcing the path through a point erroneously marked as
high-confidence due image noise without increasing
complexity or weakening the GCP constraint. This
technique also allows us to handle the "wallpaper"
problem of matching in the presence of a repeated pattern
in the scene: multiple GCPs allow the elements of
the pattern to be repeatedly matched (locally) with high
confidence while ensuring a global minimum.
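Enforcing the chosen points amounts to rewriting the cost columns that contain GCPs, for example as in the sketch below. The constant BIG and the restriction of the penalty to the match costs are simplifications; as described above, the full scheme also charges the prohibitive cost to occlusion states passing through the column.

```python
import numpy as np

BIG = 1e9   # effectively prohibits a path through these cells

def apply_gcps(dsi, gcps):
    """Return a copy of the match-cost DSI with GCP columns rewritten:
    zero cost at the (possibly multiple) GCP disparities of a column and a
    prohibitive cost everywhere else in that column, so the optimal path
    must match at one of the GCPs (illustrative sketch)."""
    out = dsi.copy()
    gcp_cols = np.where(gcps.any(axis=0))[0]
    for x in gcp_cols:
        out[:, x] = BIG
        out[gcps[:, x], x] = 0.0
    return out
```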
5.6. Reducing complexity
Without GCPs, the DP algorithm must consider one
node for every point in the DSI, except for the boundary
conditions near the edges. Specification of a GCP,
however, essentially introduces an intervening boundary
point and prevents the solution path from traversing
certain regions of the DSI. Because of the occlusion
and monotonicity constraints, each GCP carves out
two complementary triangles in the DSI that are now
not valid. Figure 9 illustrates such pairs of triangles.
The total area of the two triangles, A, depends upon at
what disparity d the GCP is located, but is known to
lie within the range D^2/4 \le A \le D^2/2, where D is the allowed disparity range. For the 256 pixel wedding cake image, 506 \le A \le 1012. Since the total number
of DP nodes for that image is 11,520, each GCP whose
constraint triangles do not overlap with another pair of
constraint triangles reduces the DP complexity
by about 10%. With several GCPs the complexity is
less than 25% of the original problem.
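The quoted bounds follow from summing the two excluded triangles. With the GCP at disparity d inside a range of size D (our notation),

```latex
A(d) = \frac{d^2}{2} + \frac{(D-d)^2}{2},
\qquad
\frac{D^2}{4} = A\!\left(\frac{D}{2}\right) \;\le\; A(d) \;\le\; A(0) = A(D) = \frac{D^2}{2}.
```

With D = 45, this gives roughly 506 \le A \le 1012, matching the wedding cake figures above.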
6. Results using GCPs
Input to our algorithm consists of a stereo pair. Epipolar
lines are assumed to be known and corrected to
correspond to horizontal scanlines. We assume that
Fig. 9. GCP constraint regions. Each GCP removes a pair of similar triangles from the possible solution path. If the GCP is at one extreme of
the disparity range (GCP 1), then the area excluded is maximized at D^2/2. If the GCP is exactly in the middle of the disparity range (GCP 2), the area is minimized at D^2/4.
additive and multiplicative photometric bias between
the left and right images is minimized allowing the use
of a subtraction DSI for matching. As mentioned, such
biases can be handled by using the appropriate correlation
operator. The birch tree example shows that the
subtraction DSI performs well even with significant
additive differences.
The dynamic programming portion of our algorithm
is quite fast; almost all time was spent in creating the
correlation DSI used for finding GCPs. Generation
time for each scanline depends upon the efficiency
of the correlation code, the number and size of the
masks, and the size of the original imagery. Running
on an HP 730 workstation with a 512x512 image using
nine 7x7 filters and a maximum disparity shift of 100
pixels, our current implementation takes a few seconds
per scanline. However, since the most time consuming
operations are simple window-based cross-correlation,
the entire procedure could be made to run near real
time with simple dedicated hardware. Furthermore,
this step was used solely to provide GCPs; a faster
high confidence match detector would eliminate most
of this overhead.
The results generated by our algorithm for the noise-free
wedding cake are shown in Figure 10a. Computation
was performed on the DSI L
i but the results have
been shifted to the cyclopean view. The top layer of
the cake has a disparity with respect to the bottom of
84 pixels. Our algorithm found the occlusion breaks
at the edge of each layer, indicated by black regions.
Sloping regions have been recovered as matched regions
interspersed with tiny occlusion jumps. Because
of homogeneous regions many paths have exactly the
same total cost so the exact assignment of occlusion
pixels in sloping regions is not identical from one scan-line
to the next, and is sensitive to the position of the
GCPs in that particular scanline. Figure 10b shows
the results for the sloping wedding cake with a high
amount of artificially generated Gaussian noise. The algorithm still performs well at locating
occlusion regions.
For the "kids" and "birch" results displayed in this
paper, we used a subtraction DSI for our matching
data. The 9-window correlation DSI was used only to
find the GCPs. Since our algorithm will work properly
using the subtraction DSI, any method that finds
highly-reliable matches could be used to find GCPs,
obviating the need for the computationally expensive
cross correlation. All our results, including the "kids"
and "birch" examples were generated using the same
occlusion cost, chosen by experimentation.
Figure
11a shows the "birch" image from the JISCT
stereo test set[ 9]. The occlusion regions in this image
are difficult to recover properly because of the
skinny trees, some texture-less regions, and a 15 percent
brightness difference between images. The skinny
trees make occlusion recovery particularly sensitive to
occlusion cost when GCPs are not used, since there are
relatively few good matches on each skinny tree compared
with the size of the occlusion jumps to and from
each tree. Figure 11b shows the results of our algorithm
without using GCPs. The occlusion cost prevented the
path on most scanlines from jumping out to some of
the trees. Figure 11c shows the algorithm run with the
same occlusion cost using GCPs. 7
Most of the occlusion regions around the trees are
recovered reasonably well since GCPs on the tree sur-
Fig. 10. Results of our algorithm for the (a) noise-free and (b) noisy sloping wedding cake.
faces eliminated the dependence on the occlusion cost.
There are some errors in the image, however. Several
shadow regions of the birch figure are completely
washed-out with intensity values of zero. Conse-
quently, some of these regions have led to spurious
GCPs which caused incorrect disparity jumps in our final
result. This problem might be minimized by changing
the GCP selection algorithm to check for texture
wherever GCPs are proposed. On some scanlines, no
GCPs were recovered on some trees which led to the
scanline gaps in some of the trees.
Note the large occlusion regions generated by the
third tree from the left. This example of a small foreground
object generating a large occlusion region is
a violation of the ordering constraint. As described
previously, if the DP solution includes the trees it cannot
also include the common region of the building.
If there are GCPs on both the building and the trees,
only one set of GCPs can be accommodated. Because
of the details of how we incorporated GCPs into the
DP algorithm, the surface with the greater number will
dominate. In the tree example, the grass regions were
highly shadowed and typically did not generate many
GCPs. 8
Figure
12a is an enlarged version of the left image
of
Figure
1. Figure 12b shows the results obtained
by the algorithm developed by Cox et al.[ 14]. The
Cox algorithm is a similar DP procedure which uses
inter-scanline consistency instead of GCPs to reduce
sensitivity to occlusion cost.
Figure
12c shows our results on the same image.
These images have not been converted to the cyclopean
view, so black regions indicate regions occluded
in the left image. The Cox algorithm does a reasonably
good job at finding the major occlusion regions,
although many rather large, spurious occlusion regions
are proposed.
When the algorithm generates errors, the errors are
more likely to propagate over adjacent lines, since
inter- and intra-scanline consistency are used[ 14]. To
be able to find the numerous occlusions, the Cox algorithm
requires a relatively low occlusion cost, resulting
in false occlusions. Our higher occlusion cost and use
of GCPs finds the major occlusion regions cleanly. For
example, the man's head is clearly recovered by our ap-
proach. The algorithm did not recover the occlusion
created by the man's leg as well as hoped since it found
no good control points on the bland wall between the
legs. The wall behind the man was picked up well by
our algorithm, and the structure of the people in the
scene is quite good. Most importantly, we did not use
any smoothness or inter- and intra-scanline consistencies
to generate these results.
We should note that our algorithm does not perform
as well on images that only have short match regions
interspersed with many disparity jumps. In such imagery
our conservative method for selecting GCPs fails
to provide enough constraint to recover the proper sur-
face. However, the results on the birch imagery illustrate
that in real imagery with many occlusion jumps,
there are likely to be enough stable regions to drive the
computation.
7. Edges in the DSI
Figure
13 displays the DSI L
i for a scanline from the
man and kids stereo pair in Figure 12; this particular
scanline runs through the man's chest. Both vertical
and diagonal striations are visible in the DSI data struc-
Fig. 11. (a) The "birch" stereo image pair, which is a part of the JISCT stereo test set[ 9], (b) Results of our stereo algorithm without using
GCPs, and (c) Results of our algorithm with GCPs.
Fig. 12. Results of two stereo algorithms on Figure 1. (a) Original left image. (b) Cox et al. algorithm[ 14], and (c) the algorithm described
in this paper.
Fig. 13. A subtraction DSI L
for the imagery of Figure 12, where i is a scanline through the man's chest. Notice the diagonal and vertical
striations that form in the DSI L
due to the intensity changes in the image pair. These edge-lines appear at the edges of occlusion regions.
ture. These line-like striations are formed wherever a
large change in intensity (i.e. an "edge") occurs in the
left or right scan line. In the DSI L
i , the vertical stri-
Fig. 14. (a) A cropped, subtraction DSI L
. (b) The lines corresponding to the line-like striations in (a). (c) The recovered path. (d) The path
and the image from (b) overlayed. The paths along occlusions correspond to the paths along lines.
ations correspond to large changes in intensity in I L
and the diagonal striations correspond to changes in
I R . Since the interior regions of objects tend to have
less intensity variation than the edges, the subtraction
of an interior region of one line from an intensity edge
of the other tends to leave the edge structure intact.
The persistence of the edge traces a linear structure in
the DSI. We refer to the lines in the DSI as "edge-lines."
As mentioned in the introduction, occlusion boundaries
tend to induce discontinuities in image intensity,
resulting in intensity edges. Recall that an occlusion
is represented in the DSI by the stereo solution path
containing either a diagonal or vertical jump. When
an occlusion edge coincides with an intensity edge,
then the occlusion gap in the DSI stereo solution will
coincide with the DSI edge-line defined by the corresponding
intensity edge. Figures 14a and 14b show a
DSI and the "edge-lines" image corresponding to the
line-like striations. Figure 14c displays the solution
recovered for that scanline, and Figure 14d shows the
recovered solution overlayed on the lines image. The
vertical and diagonal occlusions in the DSI travel along
lines appearing in the DSI edge-line image.
In the next section we develop a technique for incorporating
these lines into the dynamic programming
solution developed in the previous section. The goal is
to bias the solution so that nearly all major occlusions
proposed will have a corresponding intensity edge.
Before our stereo algorithm can exploit edge infor-
mation, we must first detect the DSI edge-lines. Line
detection in the DSI is a relatively simple task since,
in principle, an algorithm can search for diagonal and
vertical lines only. For our initial experiments, we
implemented such an edge finder. However, the computational
inefficiencies of finding edges in the DSI for
every scan line led us to seek a one pass edge detection
algorithm that would approximate the explicit search
for lines in every DSI.
Our heuristic is to use a standard edge-finding procedure
on each image of the original image pair and use
the recovered edges to generate an edge-lines image
for each DSI. We have used a simplified Canny edge
detector to find possible edges in the left and right
image[ 10] and combined the vertical components of
those edges to recover the edge-lines.
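A sketch of this mapping for one scanline follows. It assumes boolean rows of vertical-edge responses for the left and right images (from any edge detector) and returns masks of vertical and diagonal edge-lines aligned with the cropped DSI; the function and argument names are illustrative.

```python
import numpy as np

def dsi_edge_lines(left_edges_row, right_edges_row, d_max):
    """Boolean masks of vertical and diagonal edge-lines for one scanline's
    cropped DSI, built from precomputed vertical-edge maps of the left and
    right images (a sketch).

    A left-image edge at column x produces a vertical line (all disparities
    at column x); a right-image edge at column r produces a diagonal line
    along x - d = r."""
    n = left_edges_row.shape[0]
    vert = np.zeros((d_max + 1, n), dtype=bool)
    diag = np.zeros((d_max + 1, n), dtype=bool)
    vert[:, left_edges_row] = True
    for r in np.where(right_edges_row)[0]:
        for d in range(d_max + 1):
            if r + d < n:
                diag[d, r + d] = True
    return vert, diag
```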
The use of a standard edge operator introduces a
constraint into the stereo solution that we purposefully
excluded until now: inter-scanline consistency.
Because any spatial operator will tend to find coherent
edges, the result of processing one scanline will
no longer be independent of its neighboring scanlines.
However, since the inter-scanline consistency is only
encouraged with respect to edges and occlusion, we
are willing to include this bias in return for the computationally
efficiency of single pass edge detection.
8. Using Edges with the DSI Approach
Our goal is to incorporate the DSI edge information
into the dynamic programming solution in such a way
as to 1) correctly bias the solution to propose occlusions
at intensity edges; 2) not violate the occlusion
ordering constraints developed previously; and not
significantly increase the computational cost of the
path-finding algorithm.
As shown, occlusion segments of the solution path
through the DSI usually occur along edge-lines
of the DSI. Therefore, a simple and effective strategy
for improving our occlusion finding algorithm that satisfies
our three criteria above is to reduce the cost of
an occlusion along paths in the DSI corresponding to
the edge-lines.
Figure
15 illustrates this cost reduction. Assume
that a GCP or a region of good matches is found on either
side of an occlusion jump. Edge-lines in the DSI,
corresponding to intensity edges in the scanlines, are
shown in the diagram as dotted lines. The light solid
lines show some possible paths consistent with the border
boundary constraints. If the cost of an occlusion is
significantly reduced along edge-lines, however, then
the path indicated by the dark solid line is least expen-
sive, and that path will place the occlusion region in
the correct location.
By reducing the cost along the lines, we improve occlusion
recovery without adding any additional computational
cost to our algorithm other than a pre-processing
computation of edges in the original image
pair. Matching is still driven by pixel data but is influ-
enced, where most appropriate, by edge information.
And, ground control points prevent non-occlusion intensity
edges from generating spurious occlusions in
the least cost solution. The only remaining issue is
how to reduce the occlusion cost along the edge-lines.
The fact that the GCPs prevent the system from generating
wildly implausible solution gives us additional
freedom in adjusting the cost.
8.1. Zero cost for occlusion at edges: degenerate
case
A simple method for lowering the occlusion cost along
edge-lines would be simply to reduce the occlusion
pixel cost if the pixel sits on either a vertical or diagonal
edge-line. Clearly, reducing the cost by any
amount will encourage proposing occlusions that coincide
with intensity edges. However, unless the cost
of occlusion along some line is free, there is a chance
that somewhere along the occlusion path a stray false,
but good, match will break the occlusion region. In
Figure
15, the proposed path will more closely hug the
dotted diagonal line, but still might wiggle between the occlusion and match states depending upon the data.
More importantly, simply reducing the occlusion cost
in this manner re-introduces a sensitivity to the value
of that cost; the goal of the GCPs was the elimination
of that sensitivity.
If the dotted path in Figure 15 were free, however,
spurious good matches would not affect the recovered
occlusion region. An algorithm can be defined
in which any vertical or diagonal occlusion jump corresponding
to an edge-line has zero cost. This method
would certainly encourage occlusions to be proposed
along the lines.
Unfortunately, this method is a degenerate case. The
DP algorithm will find a solution that maximizes the
number of occlusion jumps through the DSI and minimizes
the number of matches, regardless of how good
the matches may be. Figure 16a illustrates how a zero
cost for both vertical and diagonal occlusion jumps
leads to nearly no matches being proposed. Figure 16b
shows that this degenerate case does correspond to a
potentially real camera and object configuration. The
algorithm has proposed a feasible solution. The prob-
lem, however, is that the algorithm is ignoring huge
amounts of well-matched data by proposing occlusion
everywhere.
8.2. Focusing on occlusion regions
In the previous section we demonstrated that one cannot
allow the traversal of both diagonal and vertical
lines in the DSI to be cost free. Also, a compromise
of simply lowering the occlusion cost along both types
of edges re-introduces dependencies on that cost. Because
one of the goals of our approach is the recovery of
the occlusion regions, we choose to make the diagonal
Fig. 15. This figure illustrates how reducing the cost along lines that appear in the lines DSI (represented here by dotted lines) can improve
occlusion recovery. Given the data between the two GCPs is noisy, the thin solid lines represent possible paths the algorithm might choose. If
the cost to propose an occlusion has been reduced, however, the emphasized path will most likely be chosen. That path will locate the occlusion
region cleanly with start and end points in the correct locations.
Fig. 16. (a) When the occlusion cost along both vertical and diagonal edge-lines is set to zero, the recovered path will maximize the number
of proposed occlusions and minimize the number of matches. Although real solutions of this nature do exist, an example of which is shown in
(b), making both vertical and diagonal occlusion costs free generates these solutions even when enough matching data exists to support a more
likely result.
occlusion segments free, while the vertical segments
maintain the normal occlusion pixel cost. The expected
result is that the occlusion regions corresponding to the
diagonal gaps in the DSI should be nicely delineated
while the occlusion edges (the vertical jumps) are not
changed. Furthermore, we expect no increased sensitivity
to the occlusion cost. 9
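In an implementation, this choice reduces to making the per-pixel occlusion cost depend on the move type and on the diagonal edge-line mask, as in the sketch below; it could replace the constant c_o in a DP traversal like the one sketched earlier (names are ours).

```python
def occlusion_cost(d, x, move, diag_lines, c_o):
    """Per-pixel occlusion cost used by the DP traversal (sketch).

    Diagonal occlusion pixels lying on a diagonal edge-line are free, which
    biases occlusion regions to start and end at intensity edges; vertical
    occlusion pixels keep the normal cost, avoiding the degenerate
    all-occlusion solution discussed above."""
    if move == "diagonal" and diag_lines[d, x]:
        return 0.0
    return c_o
```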
Figure
17a shows a synthetic stereo pair from the
JISCT test set[ 9] of some trees and rocks in a field.
Figure
17b shows the occlusion regions recovered by
our algorithm when line information and GCP information
is not used, comparable to previous approaches
(e.g. [ 14]). The black occlusion regions around the
trees and rocks are usually found, but the boundaries
of the regions are not well defined and some major
errors exist. Figure 17c displays the results of using
only GCPs, with no edge information included. The
dramatic improvement again illustrates the power of
the GCP constraint. Figure 17d shows the result when
both GCPs and edges have been used. Though the improvement
over GCPs alone is not nearly as dramatic,
the solution is better. For example, the streaking at the
left edge of the crown of the rightmost tree has been
reduced. In general, the occlusion regions have been
recovered almost perfectly, with little or no streaking
or false matches within them. Although the overall
effect of using the edges is small, it is important in that
it biases the occlusion discontinuities to be proposed
in exactly the right place.
9. Conclusion
9.1.
Summary
We have presented a stereo algorithm that incorporates
the detection of occlusion regions directly into
the matching process, yet does not use smoothness or
intra- or inter-scanline consistency criteria. Employing
a dynamic programming solution that obeys the
occlusion and ordering constraints to find a best path
through the disparity space image, we eliminate sensitivity
to the occlusion cost by the use of ground con-
(a)
(c)
(b)
(d)
Fig. 17. (a) Synthetic trees left image, (b) occlusion result without GCPs or edge-lines, (c) occlusion result with GCPs only, and (d) result
with GCPs and edge-lines.
trol points (GCPs)- high confidence matches. These
points improve results, reduce complexity, and minimize
dependence on occlusion cost without arbitrarily
restricting the recovered solution. Finally, we extend
the technique to exploit the relationship between occlusion
jumps and intensity edges. Our method is to
reduce the cost of proposed occlusion edges that coincide
with intensity edges. The result is an algorithm
that extracts large occlusion regions accurately without
requiring external smoothness criteria.
9.2. Relation to psychophysics
As mentioned at the outset, there is considerable psychophysical
evidence that occlusion regions figure
somewhat prominently in the human perception of
depth from stereo (e.g. [ 27, 24]). And, it has become
common (e.g. [ 18]) to cite such evidence in
support of computational theories of stereo matching
that explicitly model occlusion.
However, for the approach we have presented here
we believe such reference would be a bit disingenu-
ous. Dynamic programming is a powerful tool for a
serial machine attacking a locally decided, global optimization
problem. But given the massively parallel
computations performed by the human vision system,
it seems unlikely that such an approach is particularly
relevant to understanding human capabilities.
However, we note that the two novel ideas of this
paper - the use of ground control points to drive the
stereo solution in the presence of occlusion, and the
integration of intensity edges into the recovery of occlusion
regions - are of interest to those considering
human vision.
One way of interpreting ground control points is as
unambiguous matches that drive the resulting solution
such that points whose matches are more ambiguous
will be correctly mapped. The algorithm presented
in this paper has been constructed so that relatively
few GCPs (one per surface plane) are needed to result
in an entirely unambiguous solution. This result
is consistent with the "pulling effect" reported in the
psychophysical literature (e.g. [ 21]) in which very few
unambiguous "bias" dots (as little as 2%) are needed to
pull an ambiguous stereogram to the depth plane of the
unambiguous points. Although several interpretations
of this effect are possible (e.g. see [ 1]) we simply
note that it is consistent with the idea of a few cleanly
matched points driving the solution.
Second, there has been recent work demonstrating
the importance of edges in the perception of
occlusion. Besides providing some wonderful demonstrations
of the impact of intensity edges in the perception
of occlusion, they also develop a receptive-field
theory of occlusion detection. Their receptive fields
require a vertical decorrelation edge where on one side
of the edge the images are correlated (matched), while
on the other they are not. Furthermore, they find evidence
that the strength of the edge directly affects the
stability of the perception of occlusion. Though the
mechanism they propose is quite different than those
discussed here, this is the first strong evidence we have
seen supporting the importance of edges in the perception
of occlusion. Our interpretation is that the
human visual system is exploiting the occlusion edge
constraint developed here: occlusion edges usually fall
along intensity edges.
9.3. Open questions
Finally we mention a few open questions that should
be addressed if the work presented here is to be further
developed or applied. The first involves the recovery
of the GCPs. As indicated, having a well distributed
set of control points mostly eliminates the sensitivity
of the algorithm to the occlusion cost, and reduces the
computational complexity of the dense match. Our
initial experiments using a robust estimator have been successful, but we feel that a robust
estimator explicitly designed to provide GCPs could
be more effective.
Second, we are not satisfied with the awkward manner
in which lattice matching techniques - no sub-pixel
matches and every pixel is either matched or occluded
handle sloping regions. While a staircase of
matched and occluded pixels is to be expected (math-
ematically) whenever a surface is not parallel to the
image plane, its presence reflects the inability of the
lattice to match a region of one image to a differently-
sized region in the other. [ 7] suggests using super
resolution to achieve sub-pixel matches. While this
approach will allow for smoother changes in depth,
and should help with matching by reducing aliasing, it
does not really address the issue of non-constant dispar-
ity. As we suggested here, one could apply an iterative
warping technique as in [ 26], but the computational
cost may be excessive.
Finally, there is the problem of order constraint violations
as in some of the birch tree examples. Because
of the dynamic programming formulation we use, we
cannot incorporate these exceptions, except perhaps in
a post hoc analysis that notices that sharp occluding
surfaces actually match. Because our main emphasis
is on demonstrating the effectiveness of GCPs we have
not energetically explored this problem.
Acknowledgements
This work was supported in part by a grant from Interval
Research.
Notes
1. Typical set up is two CCD cameras, with 12mm focal length
lenses, separated by a baseline of about 30cm.
2. Belhumeur and Mumford [ 7] refer to these regions as "half-
occlusion" areas as they are occluded from only one eye. How-
ever, since regions occluded from both eyes don't appear in any
image, we find the distinction unnecessary here and use "oc-
cluded region" to refer to pixels visible in only one eye.
3. For example, given a 256 pixel-wide scan-line with a maximum
disparity shift of 45 pixels there are 3e+191 possible legal paths.
4. For a subtraction DSI, we are assigning a cost of the absolute
image intensity differences. Clearly squared values, or any other
probabilistically motivated error measure (e.g. [ 17, 18]) could
be substituted. Our experiments have not shown great sensitivity
to the particular measure chosen.
5. There is a problem of semantics when considering sloping regions
in a lattice-based matching approach. As is apparent from
the state diagram in Figure 5 the only depth profile that can be
represented without occlusion is constant disparity. Therefore a
continuous surface which is not fronto-parallel with respect to
the camera will be represented by a staircase of constant disparity
regions interspersed with occlusion pixels, even though
there are no "occluded" pixels in the ordinary sense. In [ 18]
they refer to these occlusions as lattice-induced, and recommend
using sub-pixel resolution to finesse the problem. An alternative
would be to use an iterative warping technique as first proposed
in [ 26].
6. Actually, it only has to be greater than half the typical value of
the correct matches. This is because each diagonal pixel jumping
forward must have a corresponding vertical jump back to end up
at the same GCP.
7. The exact value of co depends upon the range of intensities in
an image. For a 256 grey level image we use 12. The goal
of the GCPs is insensitivity to the exact value of this parameter.
In our experiments we can vary co by a factor of almost three
before seeing any variation in results.
8. In fact the birch tree example is a highly pathological case because
of the unbalanced dynamic range of the two images. For
example while 23% of the pixels in the left image have an intensity
value of 0 or 255, only 6% of the pixels in the right image
were similarly clipped. Such extreme clipping limited the ability
of the GCP finder to find unambiguous matches in these regions.
9. The alternative choice of making the vertical segments free might
be desired in the case of extensive limb edges. Assume the
system is viewing a sharply rounded surface (e.g. a telephone
pole) in front of some other surface, and consider the image
from the left eye. Interior to the left edge of the pole as seen in
the left eye are some pole pixels that are not viewed by the
right eye. From a stereo matching perspective, these pixels are
identical to the other occlusion pixels visible in the left but not
right eyes. However, the edge is in the wrong place if focusing
on the occlusion regions, e.g. the diagonal disparity jumps in the
left image for the left side of the pole. In the right eye, the edge
is at the correct place and could be used to bias the occlusion
recovery. Using the right eye to establish the edges for a left
occlusion region (visible only in the left eye) and vice versa, is
accomplished by biasing the vertical lines in the DSI. Because
we do not have imagery with significant limb boundaries we
have not experimented with this choice of bias.
--R
Toward a general theory of stereopsis: Binocular matching
Depth from edge and intensity based stereo.
Realtime stereo and motion integration for navigation.
Computational stereo.
Bayesian models for reconstructing the scene geometry in a pair of stereo images.
A
A bayseian treatment of the stereo correspondence problem using half-occluded regions
Dynamic Programming.
The JISCT stereo eval- uation
A computational approach to edge detection.
On an analysis of static occlusion in stereo vision.
Use of monocular groupings and occlusion analysis in a hierarchical stereo system.
A maximum likelihood stereo algorithm.
Structure from stereo - a review
Stereo matching in the presence of narrow occluding objects using dynamic disparity search.
of Comp.
A system for digital stereo image matching.
Interaction between pools of binocular disparity detectors tuned to different disparities.
A stereo matching algorithm with an adaptive window: theory and experiment.
Direct evidence for occlusion in stereo and motion.
stereopsis: depth and subjective occluding contours from unpaired image points.
Stereo by intra- and inter-scanline search using dynamic programming
Hierachical warp stereo.
Real world occlusion constraints and binocular rivalry.
--TR
A computational approach to edge detection
Direct evidence for occlusion in stereo and motion
3-D Surface Description from Binocular Stereo
Disparity-space images and large occlusion stereo
Occlusions and binocular stereo
A maximum likelihood stereo algorithm
Computational Stereo
Stereo Matching in the Presence of Narrow Occluding Objects Using Dynamic Disparity Search
Occlusions and Binocular Stereo
Dynamic Programming
--CTR
Yuri Boykov , Vladimir Kolmogorov, An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.9, p.1124-1137, September 2004
Minglun Gong , Yee-Hong Yang, Fast Unambiguous Stereo Matching Using Reliability-Based Dynamic Programming, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.6, p.998-1003, June 2005
Minglun Gong , Ruigang Yang , Liang Wang , Mingwei Gong, A Performance Study on Different Cost Aggregation Approaches Used in Real-Time Stereo Matching, International Journal of Computer Vision, v.75 n.2, p.283-296, November 2007
Elisabetta Binaghi , Ignazio Gallo , Giuseppe Marino , Mario Raspanti, Neural adaptive stereo matching, Pattern Recognition Letters, v.25 n.15, p.1743-1758, November 2004
Antonio Criminisi , Sing Bing Kang , Rahul Swaminathan , Richard Szeliski , P. Anandan, Extracting layers and analyzing their specular properties using epipolar-plane-image analysis, Computer Vision and Image Understanding, v.97 n.1, p.51-85, January 2005
Maxime Lhuillier , Long Quan, A Quasi-Dense Approach to Surface Reconstruction from Uncalibrated Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.3, p.418-433, March 2005
Abhijit S. Ogale , Yiannis Aloimonos, Shape and the Stereo Correspondence Problem, International Journal of Computer Vision, v.65 n.3, p.147-162, December 2005
Richard Szeliski , Daniel Scharstein, Sampling the Disparity Space Image, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.3, p.419-425, March 2004
Minglun Gong , Yee-Hong Yang, Estimate Large Motions Using the Reliability-Based Motion Estimation Algorithm, International Journal of Computer Vision, v.68 n.3, p.319-330, July 2006
Jian Sun , Nan-Ning Zheng , Heung-Yeung Shum, Stereo Matching Using Belief Propagation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.7, p.787-800, July
Changming Sun , Mark Berman , David Coward , Brian Osborne, Thickness measurement and crease detection of wheat grains using stereo vision, Pattern Recognition Letters, v.28 n.12, p.1501-1508, September, 2007
Abhijit S. Ogale , Yiannis Aloimonos, A Roadmap to the Integration of Early Visual Modules, International Journal of Computer Vision, v.72 n.1, p.9-25, April 2007
Sing Bing Kang , Richard Szeliski, Extracting View-Dependent Depth Maps from a Collection of Images, International Journal of Computer Vision, v.58 n.2, p.139-163, July 2004
Changming Sun, Fast Stereo Matching Using Rectangular Subregioning and 3D Maximum-Surface Techniques, International Journal of Computer Vision, v.47 n.1-3, p.99-117, April-June 2002
Daniel Scharstein , Richard Szeliski, A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, International Journal of Computer Vision, v.47 n.1-3, p.7-42, April-June 2002
Amy J. Briggs , Carrick Detweiler , Yunpeng Li , Peter C. Mullen , Daniel Scharstein, Matching scale-space features in 1D panoramas, Computer Vision and Image Understanding, v.103 n.3, p.184-195, September 2006
L.-Q. Xu , B. Lei , E. Hendriks, Computer Vision for a 3-D Visualisation and Telepresence Collaborative Working Environment, BT Technology Journal, v.20 n.1, p.64-74, January 2002
Edgar Arce , J. L. Marroquin, High-precision stereo disparity estimation using HMMF models, Image and Vision Computing, v.25 n.5, p.623-636, May, 2007
B. J. Lei , E. A. Hendriks, Real-time multi-step view reconstruction for a virtual teleconference system, EURASIP Journal on Applied Signal Processing, v.2002 n.1, p.1067-1087, January 2002 | occlusion;dynamic-programming stereo;stereo;disparity-space |
335436 | Efficient and cost-effective techniques for browsing and indexing large video databases. | We present in this paper a fully automatic content-based approach to organizing and indexing video data. Our methodology involves three steps: Step 1: We segment each video into shots using a Camera-Tracking technique. This process also extracts the feature vector for each shot, which consists of two statistical variances VarBA and VarOA. These values capture how much things are changing in the background and foreground areas of the video shot. Step 2: For each video, We apply a fully automatic method to build a browsing hierarchy using the shots identified in Step 1. Step 3: Using the VarBA and VarOA values obtained in Step 1, we build an index table to support a variance-based video similarity model. That is, video scenes/shots are retrieved based on given values of VarBA and VarOA. The above three inter-related techniques offer an integrated framework for modeling, browsing, and searching large video databases. Our experimental results indicate that they have many advantages over existing methods. | Introduction
With the rapid advances in data compression and
networking technology, video has become an inseparable
part of many important applications such as digital
libraries, distance learning, public information systems,
electronic commerce, movies on demand, just to name
a few. The proliferation of video data has led to a
significant body of research on techniques for video
database management systems (VDBMSs) [1]. (This
research is partially supported by the National Science
Foundation grant ANI-9714591.) In
general, organizing and managing video data is much
more complex than managing text and numbers due to
the enormous size of video files and their semantically
rich contents. In particular, content-based browsing
and content-based indexing techniques are essential. It
should be possible for users to browse video materials in
a non-sequential manner and to retrieve relevant video
data efficiently based on their contents.
In a conventional (i.e., relational) database management
system, the tuple is the basic structural element
for retrieval, as well as for data entry. This is not the
case for VDBMSs. For most video applications, video
clips are convenient units for data entry. However, since
an entire video stream is too coarse as a level of ab-
straction, it is generally more beneficial to store video
as a sequence of shots to facilitate information retrieval.
This requirement calls for techniques to segment videos
into shots which are defined as a collection of frames
recorded from a single camera operation. This process
is referred to as shot boundary detection (SBD).
Existing SBD techniques require many input parameters
which are hard to determine but have a significant
influence on the quality of the result. A recent
study [2] found that techniques using color histograms
[3, 4, 5, 6] need at least three threshold values, and their
accuracy varies from 20% to 80% depending on those
values. At least six different threshold values are necessary
for another technique using edge change ratio [7].
Again, these values must be chosen properly to get satisfactory
results [2]. In general, picking the right values
for these thresholds is a difficult task because they vary
greatly from video to video. These observations indicate
that today's automatic SBD techniques need to be
more reliable before they can be used in practice. From
the perspective of an end user, a DBMS is only as good
as the data it manages. A bad video shot, returned as
a query result, would contain incomplete and/or extra
irrelevant information. This is a problem facing today's
VDBMSs. To address this issue, we propose to detect
shot boundaries in a more direct way by tracking the
camera motion through the background areas in the
video. We will discuss this idea in more detail later.
A major role of a DBMS is to allow the user to
deal with data in abstract terms, rather than the
form in which a computer stores data. Although shot
serves well as the basic unit for video abstraction, it
has been recognized in many applications that scene
is sometimes a better unit to convey the semantic
meaning of the video to the viewers. To support
this fact, several techniques have been proposed to
merge semantically related and temporally adjacent
shots into a scene [8, 9, 10, 11]. Similarly, it is
also highly desirable to have a complete hierarchy of
video content to allow the user to browse and retrieve
video information at various semantic levels. Such a
multi-layer abstraction makes it more convenient to
reference video information and easier to comprehend
its content. It also simplifies video indexing and storage
organization. One such technique was presented in [12].
This scheme abstracts the video stream structure in a
compound unit, sequence, scene, shot hierarchy. The
authors define a scene as a set of shots that are related
in time and space. Scenes that together give meaning
are grouped into a sequence. Related sequences are
assembled into a compound unit of arbitrary level.
Other multilevel structures were considered in [13,
14, 15, 16, 17]. All these studies, however, focus
on modeling issues. They attempt to design the
best hierarchical structure for video representation.
However, they do not provide techniques to automate
the construction of these structures.
Addressing the above limitation is essential to handling
large video databases. One attempt was presented
in [18]. This scheme divides a video stream into multiple
segments, each containing an equal number of consecutive
shots. Each segment is then further divided into
sub-segments. This process is repeated several times
to construct a hierarchy of video content. A drawback
of this approach is that only time is considered; and
no visual content is used in constructing the browsing
hierarchy. In contrast, video content was considered
in [19, 20, 21]. These methods first construct a priori
model of a particular application or domain. Such
a model specifies the scene boundary characteristics,
based on which the video stream can be abstracted
into a structured representation. The theoretical frame-work
of this approach is proposed in [19], and has been
successfully implemented for applications such as news
videos [20] and TV soccer programs [21]. A disadvantage
of these techniques is that they rely on explicit
models. In a sense, they are application models, rather
than database models. Two techniques, that do not employ
models, are presented in [11, 22]. These schemes,
however, focus on low-level scene construction. For
instance, given that shots, groups and scenes are the
structural units of a video, a 4-level video-scene-group-
shot hierarchy is used for all videos in [22].
In this paper, we do not fix the height of our browsing
hierarchy, called scene tree, in order to support a variety
of videos. The shape and size of a scene tree are
determined only by the semantic complexity of the
video. Our scheme is based on the content of the video.
Our experiments indicate that the proposed method can
produce very high quality browsing structures.
To make browsing more efficient, we also introduce
in this paper a variance-based video similarity model.
Using this model, we build a content-based indexing
mechanism to serve as an assistant to advise users
on where in the appropriate scene trees to start the
browsing. In this environment, each video shot is
characterized as follows. We compute the average colors
of the foreground and background areas of the frames in
the shot, and calculate their statistical variance values.
These values capture how much things are changing
in the video shot. Such information can be used to
build an index. To search for video data, a user can
write a query to describe the impression of the degree of
changes in the primary video segment. Our experiments
indicate that this simple query model is very effective
in supporting browsing environment. We will discuss
this technique in more detail.
In summary, we present in this paper a fully automatic
content-based technique for organizing and indexing
video data. Our contributions are as follows:
1. We address the reliability problem facing today's
video data segmentation techniques by introducing
a camera-tracking method.
2. We fully automate the construction of browsing
hierarchies. Our method is general purpose, and
is suitable for all videos.
3. We provide a content-based indexing mechanism to
make browsing more efficient.
The above three techniques are inter-related. They offer
an integrated framework for modeling, browsing, and
searching large video databases.
The remainder of this paper is organized as follows.
We present our SBD technique [23], and discuss
the extensions required to support our browsing and
indexing mechanisms in Section 2. The procedure for
building scene trees is described in details in Section
3. In Section 4, we discuss the content-based indexing
technique for video browsing. The experimental results
are examined in Section 5. Finally, we give our
concluding remarks in Section 6.
2 Camera Tracking Technique for SBD and Its Extension
To make the paper self-contained, we first describe our
SBD technique [23]. We then extend it to include
new features required by our browsing and indexing
techniques.
2.1 A Camera Tracking Approach to Shot
Boundary Detection
Figure 1: Background Area - panels (a) and (b); frame dimensions r and c.
Since a shot is made from one camera operation,
tracking the camera motion is the most direct way
to identify shot boundaries. This can be achieved by
tracking the background areas in the video frames as
follows. We define a fixed background area (FBA) for
all frames, as illustrated by the lightly shaded areas in
Figure 1(a). The rationale for the u shape of the FBA
is as follows:
- The bottom part of a frame is usually part of some object(s).
- The top bar covers any horizontal camera motion.
- The two columns cover any vertical camera motion.
- The combination of the top bar and the left column can track any camera
motion in one diagonal direction. The other diagonal direction is covered
by the combination of the top bar and the right column.
These two properties are illustrated in Figure 1(b).
The above properties suggest that we can detect a
shot boundary by determining if two consecutive frames
share any part of their FBAs. This requires comparing
each part of one FBA against every part of the other
FBA. To make this comparison more efficient, we
rotate the two vertical columns of each u shape FBA
outward to form a transformed background area (TBA)
as illustrated in Figure 2. From each TBA, which
is a two-dimensional array of pixels, we compute its
signature and sign by applying a modified version of the
image reduction technique, called Gaussian Pyramid
[24]. The idea of 'Gaussian Pyramid' was originally
introduced for reducing an image to a smaller size.
We use this technique to reduce a two-dimensional
TBA into a single line of pixels (called signature) and
eventually a single pixel (called sign). The complexity
of this procedure is O(2^(log(m+1))), which is actually
O(m), where m is the number of pixels involved. The
interested reader is referred to [23] for the details. We
illustrate this procedure in Figure 3. It shows a 13 × 5
TBA being reduced in multiple steps. First, the five
pixels in each column are reduced to one pixel to give
one line of 13 pixels, which is used as the signature.
This signature is further reduced to the sign denoted
by sign^BA_i. The superscript and subscript indicate that
this is the sign of the background area of some frame i.
We note that this rather small TBA is only illustrative.
We will discuss how to determine the TBA shortly.
Figure 2: Shape Transformation of FBA.
Figure 3: Computation of Signature and Sign.
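To make the reduction concrete, the following sketch reduces a grayscale TBA to a signature and a sign. It is only an illustration of the pyramid-style reduction described above: the real modified Gaussian Pyramid of [23] operates on RGB pixels, and the 1-4-6-4-1 averaging weights used here are an assumption, not the paper's kernel.

import numpy as np

# Plain averaging stand-in for the modified Gaussian Pyramid of [23]:
# a window of 5 samples (stride 2) collapses into one, so a row of
# s_j = 2^(j+1) - 3 pixels shrinks to s_(j-1) pixels per pass.
KERNEL = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # assumed weights

def reduce_row(row):
    """One pyramid pass: length s_j -> length s_(j-1)."""
    return np.array([row[i:i + 5] @ KERNEL for i in range(0, len(row) - 4, 2)])

def signature_and_sign(tba):
    """tba: 2-D grayscale array whose width is in the size set {5, 13, 29, ...}
    and whose height is 5, as in the illustrative 13 x 5 TBA of Figure 3."""
    cols = tba.astype(float).T                    # one 1-D strip per column
    sig = np.array([c @ KERNEL for c in cols])    # 5 pixels -> 1 per column
    s = sig
    while len(s) > 1:                             # 13 -> 5 -> 1, etc.
        s = reduce_row(s)
    return sig, s[0]                              # (signature, sign)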
We use the signs and signatures to detect shot
boundaries as illustrated in Figure 4. The first
two stages are quick-and-dirty tests used to quickly
eliminate the easy cases. Only when these two tests
fail, we need to track the background in Stage 3 by
shifting the two signatures, of the two frames under
test, toward each other one pixel at a time. For each
shift, we compare the overlapping pixels to determine
the longest run of matching pixels. A running maximum
is maintained for these matching scores. In the end, this
maximum value indicates how much the two images
share the common background. If the score is larger
than a certain threshold, the two video frames are
determined to be in the same shot.
Figure 4: Shot Boundary Detection Procedure - Stage (1) sign matching,
Stage (2) signature pixel matching, Stage (3) background tracking, with the
outcomes "cut" and "not a cut".
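A rough sketch of the three-stage test of Figure 4 is given below, assuming grayscale signs and signatures. The pixel-matching tolerance, the run-length threshold, and the exact pass/fail routing of Stages 1 and 2 are assumptions made for illustration; the precise rules are given in [23].

def pixels_match(a, b, tol=8):
    # Tolerance for calling two (grayscale) pixel values equal -- an assumption.
    return abs(a - b) <= tol

def same_shot(sign1, sig1, sign2, sig2, run_threshold=4):
    """Return True if two consecutive frames are judged to belong to one shot."""
    # Stage (1): quick sign comparison.
    if pixels_match(sign1, sign2):
        return True
    # Stage (2): quick pixel-by-pixel comparison of the two signatures.
    if len(sig1) == len(sig2) and all(pixels_match(a, b) for a, b in zip(sig1, sig2)):
        return True
    # Stage (3): background tracking -- shift one signature across the other,
    # one pixel at a time, and keep the longest run of matching pixels.
    best = 0
    n, m = len(sig1), len(sig2)
    for shift in range(-(m - 1), n):
        run = 0
        for i in range(n):
            j = i - shift
            if 0 <= j < m and pixels_match(sig1[i], sig2[j]):
                run += 1
                best = max(best, run)
            else:
                run = 0
    return best >= run_threshold    # enough shared background => same shot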
2.2 Extension to the Camera Tracking
Technique
We define the fixed object area (FOA) as the foreground
area of a video frame, where most primary objects
appear. This area is illustrated in Figure 1 as the
darkly shaded region of a video frame. To facilitate our
indexing scheme, we need to reduce the FOA of each
frame i to one pixel. That is, we want to compute its
sign, sign^OA_i, where the superscript indicates that this
sign is for an FOA. This parameter can be obtained
using the Gaussian Pyramid as in sign^BA_i. This
computation requires the dimensions of the FOA. Given
r and c as the dimensions of the video frame (see
Figure
1), we discuss the procedure for determining the
dimensions of TBA and FOA as follows.
Let the dimensions of FOA be h and b, and those
of TBA be w and L as illustrated in Figure 1. We
first estimate these parameters as h', b', w', and L',
respectively. We choose w' to be 10% of the width of the
video frame, i.e., w' = ⌈c/10⌉. This value was determined
empirically using our video clips. They show that this
value of w' results in TBAs and FOAs which cover
the background and foreground areas, respectively, very
well. Using this w', we can compute the other estimates
b', h', and L' from the geometry of the FBA and FOA in Figure 1.
In order to apply the Gaussian Pyramid technique,
the dimensions of TBA and FOA must be in the size
set {1, 5, 13, 29, 61, 125, ...}. This is due to the fact
that this technique reduces five pixels to one pixel, 13
pixels to five, 29 pixels to 13, and so on. In general, the
jth element (s_j) of this size set is
s_j = 2^(j+1) - 3.    (1)
Using this size set, the proper value for w is the value
in the size set which is nearest to w'. This nearest
number can be determined as follows: we first compute
j from log_2(w' + 3), rounding to the index whose s_j is
closest to w'. Substituting this value of j into
Equation (1) gives the desired value for w. Similarly,
we can compute L, h, and b. This approximation
scheme is illustrated in Table 1. As an example, let
w' = 16. The corresponding j value is 3, and substituting
j into Equation (1) gives 13 as the proper value for w.
h', b', w' or L'      Nearest value (h, b, w or L)
9, 10, ..., 20        13
21, 22, ..., 44       29
45, 46, ..., 92       61
Table 1: Approximating the dimensions using the nearest
value from the size set.
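A small helper reproducing Table 1 and the worked example (w' = 16 snaps to 13). The closed form s_j = 2^(j+1) - 3 follows from the listed size set; the tie-breaking direction at range boundaries is an assumption, since the paper's exact rounding formula did not survive extraction.

def size_set_value(j):
    """s_j = 2**(j + 1) - 3, giving the size set {1, 5, 13, 29, 61, 125, ...}."""
    return 2 ** (j + 1) - 3

def nearest_size(estimate):
    """Snap an estimated dimension (w', L', h' or b') to the nearest size-set value."""
    j = 1
    while size_set_value(j + 1) <= estimate:
        j += 1
    lo, hi = size_set_value(j), size_set_value(j + 1)
    return lo if estimate - lo < hi - estimate else hi

print(nearest_size(16))   # 13, as in the worked example above
print(nearest_size(44))   # 29
print(nearest_size(45))   # 61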
In this section, we have described the computation of
the two sign values sign^BA_i and sign^OA_i, and
the procedure to determine the video shots. In the
next two sections, we will discuss how these shots and
signs are used to build browsing hierarchies and index
structures for video databases.
3 Building Scene Trees for
Non-linear Browsing
Video data are often accessed in an exploring or browsing
mode. Browsing a video using VCR like functions
(i.e., fast-forward or fast-reverse) [25], however, is tedious
and time consuming. A hierarchical abstraction
allowing nonlinear browsing is desirable. Today's techniques
for automatic construction of such structures,
however, have many limitations. They rely on explicit
models, focus only on the construction of low-level
scenes, or ignore the content of the video. We discuss
in this section our Scene Tree approach which addresses
all these drawbacks.
In order to automate the tree construction process,
we base our approach on the visual content of the
video instead of human perception. First, we obtain
the video shots using our camera-tracking SBD method
discussed in the last section. We then group adjacent
shots that are related (i.e., sharing similar backgrounds)
into a scene. Similarly, scenes with related shots are
considered related and can be assembled into a higher-level
scene of arbitrary level. We discuss the details
of this strategy and give an example in the following
subsections.
3.1 Scene Tree Construction Algorithm
Let A and B be two shots with |A| and |B| frames,
respectively. The algorithm to determine if they are
related is as follows (a code sketch follows the listing).
1. Set i = 1 and j = 1.
2. Compute the difference D_s of Sign^BA_i of shot A and
Sign^BA_j of shot B using the following equation (we
use the number 256 since in our RGB space red,
green and blue colors range from 0 to 255):
D_s = |Sign^BA_i(A) - Sign^BA_j(B)| / 256 × 100(%).    (2)
3. If D_s is less than 10%, then stop and return that
the two shots are related; otherwise, go to the next
step.
4. Set i = i + 1.
   - If i > |A|, then stop and return that the two shots are
   not related; otherwise, set j = j + 1 (resetting j to 1
   once it exceeds |B|).
5. Go to Step (2).
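The shot-relatedness test above (named RELATIONSHIP in the next paragraph) can be sketched as follows. Because the exact traversal order of the frame pairs in Steps 1 and 4 was partly lost, the sketch simply scans all pairs of frame signs, which matches the O(|A| × |B|) worst-case cost discussed later; averaging the RGB channel differences is likewise an assumption.

def sign_difference_pct(sign_a, sign_b):
    # Equation (2): absolute difference of two Sign^BA pixels as a percentage
    # of the 256-level RGB range (averaged over the three channels).
    diff = sum(abs(a - b) for a, b in zip(sign_a, sign_b)) / 3.0
    return diff / 256.0 * 100.0

def related(shot_a_signs, shot_b_signs, threshold_pct=10.0):
    """Two shots are related as soon as some pair of frame signs differs by < 10%."""
    for sa in shot_a_signs:           # Sign^BA of each frame of shot A
        for sb in shot_b_signs:       # Sign^BA of each frame of shot B
            if sign_difference_pct(sa, sb) < threshold_pct:
                return True
    return False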
For convenience, we will refer to this algorithm as
RELATIONSHIP. It can be used in the following
procedure to construct a browsing hierarchy, called
scene tree, as follows.
1. A scene node SN 0
i in the lowest level (i.e., level 0) of
scene tree is created for each shot#i. The subscript
indicates the shot (or scene) from which the scene
node is derived; and the superscript denotes the level
of the scene node in the scene tree.
2. Set i / 3.
3. Apply algorithm RELATIONSHIP to compare shot#i
with each of the shots shot#(i-2), ..., shot#1 (in descending
order). This sequence of comparisons stops
when a related shot, say shot#j, is identified. If no
related shot is found, we create a new empty node,
connect it as a parent node to SN^0_i, and proceed to
Step 5.
4. We consider SN^0_(i-1), ..., SN^0_j, the scene nodes
between SN^0_i and the related node SN^0_j. Three scenarios
can happen:
   - If SN^0_(i-1), ..., SN^0_j do not currently have a parent
   node, we connect all scene nodes SN^0_i, SN^0_(i-1), ...,
   SN^0_j to a new empty node as their parent node.
   - If SN^0_(i-1), ..., SN^0_j share an ancestor node, we
   connect SN^0_i to this ancestor node.
   - If SN^0_(i-1), ..., SN^0_j do not currently share an
   ancestor node, we connect SN^0_i to the current
   oldest ancestor of SN^0_(i-1), and then connect the
   current oldest ancestors of SN^0_(i-1), ..., SN^0_j to a
   new empty node as their parent node.
5. If there are more shots, we set i
go to step 3. Otherwise, we connect all the nodes
currently without a parent to a new empty node as
their parent.
6. For each scene node at the bottom of the scene
tree, we select from the corresponding shot the
most "repetitive" frame as its representative frame,
i.e., this frame shares the same sign with the most
number of frames in the shot. We then traverse all
the nodes in the scene tree, level by level, starting
from the bottom. For each empty node visited, we
identify the child node, say SN^c_m, which contains
shot#m, the shot with the longest sequence of frames
with the same Sign^BA value. We rename this empty
node as SN^(c+1)_m, and assign the representative frame
of SN^c_m to SN^(c+1)_m.
We note that each scene node contains a representative
frame or a pointer to that frame for future use such
as browsing or navigating. The criterion for selecting
a representative frame from a shot is to find the most
frequent image. If more than one such image is found,
we can choose the temporally earliest one. As an ex-
ample, let us assume that shot#5 has 20 frames and
the Sign BA value of each frame is as shown in Table 2.
Since Sign BA is actually a pixel, it has three numerical
values for the three colors, red, green and blue. In
this case, we use frame 1 as the representative frame
for shot#5 because this frame corresponds to an image
with the longest sequence of frames with the same
Sign BA values (i.e., 219, 152, 142). Although, the sequence
corresponding to frames 15 to 20 also has the
same sequence length, frame 15 is not selected because
it appears later in the shot. Instead of having only one
representative frame per scene, we can also use g(s)
most repetitive representative frames for scenes with s
shots to better convey their larger content, where g is
some function of s.
Frame     Sign^BA (Red, Green, Blue)
No. 3     219  152  142
No. 4     219  152  142
No. 5     219  152  142
No. 6     219  152  142
No. 7     226  164  172
No. 8     226  164  172
No. 9     213  149  134
Table 2: Frames in shot#5 (excerpt; the shot has 20 frames).
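For the representative-frame rule, a direct reading of the criterion ("the sign shared by the most frames, earliest frame on a tie") gives the following sketch; note that the Table 2 discussion phrases the same idea in terms of the longest run of identical signs.

from collections import Counter

def representative_frame(frame_signs):
    """frame_signs: per-frame Sign^BA tuples in temporal order.
    Returns the index of the earliest frame whose sign is the most frequent."""
    counts = Counter(frame_signs)
    top = max(counts.values())
    for idx, sign in enumerate(frame_signs):
        if counts[sign] == top:
            return idx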
Let us evaluate the complexity of the two
algorithms above. The complexity of RELATIONSHIP
is O(|A| × |B|). The average computation cost, however,
is much less because the algorithm stops as soon as it
finds that the two shots are related. Furthermore, the similarity
computation is based on only one pixel (i.e., Sign^BA) of
each video frame, making this algorithm very efficient.
The cost of the tree construction algorithm can be
derived as follows. Step 3 can be done in O(f^2 × n),
where f is the number of frames and n is the number
of shots in a given video. This is because the algorithm
visits every shot, and whenever a shot is visited, it is
compared with every frame in the shots before it. In
Step 4 and Step 6, we need to traverse a tree, which can be
done in O(log n). Therefore, the whole algorithm can
be completed in O(f^2 × n).
3.2 Example to explain Scene Tree
Figure 5: A video clip with ten shots, labeled A, B, A1, B1, C, A2, C1, D, D1, D2;
shots sharing a letter prefix are related.
The scene tree construction algorithm is best illustrated
by an example. Let us consider a video clip with
ten shots as shown in Figure 5. For convenience, we
label related shots with the same prefix. For instance,
shot#1, shot#3 and shot#6 are related, and are labeled
as A, A1 and A2, respectively. An effective algorithm
should group these shots into a longer unit at a higher
level in the browsing hierarchy. Using this video clip,
we illustrate our tree construction algorithm in Figure 6.
The details are discussed below.

Figure 6: Building the scene tree for the video clip of Figure 5.

- Figure 6(a): We first create three scene nodes
SN^0_1, SN^0_2 and SN^0_3 for shot#1, shot#2 and shot#3,
respectively. Applying algorithm RELATIONSHIP
to shot#3 and shot#1, we determine that the two
shots are related. Since they are related but neither
currently has a parent node, we connect them to
a new empty node called EN1. According to
our algorithm, we do not need to compare shot#2
and shot#3. However, shot#2 is also connected to
EN1 because shot#2 is between the two related nodes
shot#3 and shot#1.
- Figure 6(b): Applying the algorithm RELATIONSHIP
to shot#4 and shot#2, we determine that they
are related. This allows us to skip the comparison
between shot#4 and shot#1. In this case, since SN^0_2
and SN^0_3 share the same ancestor (i.e., EN1), we
also connect shot#4 to EN1.
- Figure 6(c): Comparing shot#5 with shot#3,
shot#2, and shot#1 using RELATIONSHIP, we
determine that shot#5 is not related to these three
shots. We thus create SN^0_5 for shot#5, and connect
it to a new empty node EN2.
- Figure 6(d): In this case, shot#6 is determined to
be related to shot#3. Since SN^0_5 and SN^0_3 currently
do not have the same ancestor, we first connect SN^0_6
to EN2, and then connect EN1 and EN2 to a new
empty node EN3 as their parent node.
- Figure 6(e): In this case, shot#7 is determined to
be related to shot#5. Since SN^0_7 and SN^0_5 share the
same ancestor node EN2, we simply create SN^0_7 for
shot#7 and connect this scene node to EN2.
- Figure 6(f): This case is similar to the case of
Figure 6(c). shot#8 is not related to any previous
shots. We create a new scene node SN^0_8 for shot#8,
and connect this scene node to a new empty node
EN4.
- Figure 6(g): shot#9 and shot#10 are found to
be related to the immediately previous node, shot#8
and shot#9, respectively. In this case, according
to the algorithm, both shot#9 and shot#10 are
connected to EN4. Since shot#10 is the last shot
of the video clip, we create a root node, and connect
all nodes which do not currently have a parent
node to this root node. Now, we need to name
all the empty nodes. EN1 is named SN^1_1 because
shot#1 contains an image which is "repeated" most
frequently among all the images in the first four
level-0 scenes. The superscript of "1" indicates that
SN^1_1 is a scene node at level 1. As another example,
EN3 is named SN^2_1 because shot#1 contains an
image which is "repeated" most frequently among
all the images in the first seven level-0 scenes. The
superscript of "2" indicates that SN^2_1 is a scene
node at level 2. Similarly, we can determine the
names for the other scene nodes. We note that the
naming process is important because it determines
the proper representative frame for each scene node;
e.g., SN^1_7 indicates that this scene node should use
the representative frame from shot#7.
In Section 5, we will show an example of a scene tree
built from a real video clip.
4 Cost-effective Indexing
In this section, we first discuss how Sign BA and
Sign OA , generated from our SBD technique, can be
used to characterize video data. We then present a
video similarity model based on these two parameters.
4.1 A Simple Feature Vector for Video Data
To illustrate the concept of our techniques, we use the
same example video clip in Figure 5, which has 10
shots. From this video clip, let us assume that our
SBD technique generates the values of Sign BA s and
Sign OA s for all the frames as shown in the 4th and
5th columns of Table 3, respectively. The 6th and
7th columns of Table 3, which are called Var^BA and
Var^OA, respectively, are computed using the equations
given below.

Table 3: Results from shot boundary detection. For each
shot i, spanning frames k through l, the table lists the
shot number, its start and end frames, the per-frame
signs Sign^BA_k, ..., Sign^BA_l and Sign^OA_k, ..., Sign^OA_l,
and the shot's variances Var^BA_i and Var^OA_i
(the per-shot numeric entries are omitted here).
Var^BA_i = \frac{1}{l-k+1} \sum_{j=k}^{l} (Sign^BA_j - \overline{Sign^BA_i})^2    (3)
where k and l are the first and last frames of the ith
shot, respectively. \overline{Sign^BA_i} is the mean value of all the
signs, and is computed as follows:
\overline{Sign^BA_i} = \frac{1}{l-k+1} \sum_{j=k}^{l} Sign^BA_j.    (4)
Similarly, we can compute Var^OA_i as follows:
Var^OA_i = \frac{1}{l-k+1} \sum_{j=k}^{l} (Sign^OA_j - \overline{Sign^OA_i})^2.    (5)
We note that V ar BA and V ar OA are the statistical
variances of Sign BA s and Sign OA s, respectively, within
a shot. These variance values measure the degree of
changes in the content of the background or object area
of a shot. They have the following properties:
ar BA is zero, it obviously means that there
is no change in Sign BA s. In other words, the
background is fixed in this shot.
ar OA is zero, it means that there is no change
in Sign OA s. In other words, there is no change in
the object area.
ffl If either value is not zero, there are changes in
the background or object area. A larger variance
indicates a higher degree of changes in the respective
area.
Thus, V ar BA and V ar OA capture the spatio-temporal
semantics of the video shot. We can use them to
characterize a video shot, much like average color, color
distribution, etc. are used to characterize images.
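Computing the per-shot feature vector is then a one-liner per component. The sketch below treats each sign as a single number; how the RGB sign is scalarised before taking the variance is not spelled out in the extracted text, so averaging the channels (or picking one) is an assumption left to the caller.

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def shot_feature_vector(signs_ba, signs_oa):
    """(Var^BA, Var^OA) for one shot, per Equations (3)-(5)."""
    return variance(signs_ba), variance(signs_oa)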
Based on the above discussions, we may be asked if
just two values, V ar BA and V ar OA , are enough to capture
the various contents of diverse kinds of videos. To
answer this concern, we note that videos in a digital
library are typically classified by their genre and form.
133 genres and 35 forms are listed in [26]. The genres
include 'adaptation', 'adventure', 'biographical', and
so on; examples of the 35 forms include 'animation'
and 'series'. To classify a video, all appropriate genres and
forms are selected from this list. For examples, the
movie 'Brave Heart' is classified as 'adventure and biographical
feature'; and 'Dr. Zhivago' is classified as
'adaptation, historical, and romance feature'. In total,
there are at least 4,655 (133 × 35) possible categories of
videos. If we assume that video retrieval is performed
within one of these 4,655 classes, our indexing scheme
using V ar BA and V ar OA should be enough to characterize
contents of a shot. We will show experimental
results in the next section to substantiate this claim.
Unlike methods which extract keywords or key-frame(s)
from videos, our method extracts (Var^BA,
Var^OA) for indexing and retrieval. The advantage of
this approach is that it can be fully automated. Furthermore,
it does not rely on any domain knowledge.
4.2 A Video Similarity Model
To facilitate video retrieval, we build an index table
as shown in Table 4. It shows the index information
relevant to two video clips, 'Simon Birch' and 'Wag the
Dog.' For convenience, we denote the last column as
D_v. That is, D_v = Var^BA - Var^OA.

Shot   Start   End    Var^BA   Var^OA   D_v
6      117     153    34.23    17.81    16.42
9      200     205    13.10    13.97    -0.88
7      90      96     2.81     35.07    -32.26
9      104     116    1.88     17.23    -15.35
Table 4: Index table entries for (a) Simon Birch and (b) Wag the Dog (excerpt).
To search for relevant shots, the user expresses the
impression of how much things are changing in the
background and object areas by specifying the Var^BA_q
and Var^OA_q values, respectively. In response, the
system computes D^q_v = Var^BA_q - Var^OA_q, and
returns the ID of any shot i that satisfies the following
conditions:
|D^i_v - D^q_v| ≤ α  and  |Var^BA_i - Var^BA_q| ≤ β.
Since the impression expressed in a query is very
approximate, α and β are used in the similarity
computation to allow some degree of tolerance in
matching video data. In our system, we set α = β =
1.0. We note that another common way to handle inexact
queries is to do matching on quantized data.
In general, the answer to a query does not have to be
shots. Instead, the system can return the largest scenes
that share the same representative frame with one of
the matching shots. Using this information, the user
can browse the appropriate scene trees, starting from
the suggested scene nodes, to search for more specific
scenes in the lower levels of the hierarchies. In a sense,
this indexing mechanism makes browsing more efficient.
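Putting Section 4 together, a minimal index and query routine might look as follows; the matching conditions are the ones reconstructed above, with alpha = beta = 1.0 as in the paper.

def build_index(shots):
    """shots: {shot_id: (var_ba, var_oa)} -> {shot_id: (var_ba, var_oa, d_v)}."""
    return {sid: (ba, oa, ba - oa) for sid, (ba, oa) in shots.items()}

def query(index, var_ba_q, var_oa_q, alpha=1.0, beta=1.0):
    """IDs of shots whose D_v and Var^BA fall within the query tolerances."""
    d_q = var_ba_q - var_oa_q
    return [sid for sid, (ba, oa, d) in index.items()
            if abs(d - d_q) <= alpha and abs(ba - var_ba_q) <= beta]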
5 Experimental Results
Our experiments were designed to assess the following
performance issues:
- Our camera tracking technique is effective for SBD.
- The algorithm presented in Section 3 builds reliable
scene trees.
- The variance values Var^BA and Var^OA make a good
feature vector for video data.
We discuss our performance results in the following
subsections.
5.1 Performance of Shot Boundary
Detection Technique
Two parameters, 'recall' and 'precision', are commonly
used to evaluate the effectiveness of IR (Information
Retrieval) techniques [27]. We also use these metrics in
our study, as defined below (a short computation sketch follows the list):
- Recall is the ratio of the number of shot changes
detected correctly over the actual number of shot
changes in a given video clip.
- Precision is the ratio of the number of shot changes
detected correctly over the total number of shot
changes detected (correctly or incorrectly).
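Both metrics can be computed directly from the sets of detected and actual shot-change positions:

def recall_precision(detected, actual):
    """detected, actual: sets of frame indices where a shot change occurs."""
    correct = len(detected & actual)
    recall = correct / len(actual) if actual else 1.0
    precision = correct / len(detected) if detected else 1.0
    return recall, precision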
In a previous study [23], we have demonstrated
that our Camera Tracking technique is significantly
more accurate than traditional methods based on color
histograms and edge change ratios. In the current
study, we re-evaluate our technique using many more
video clips. Our video clips were originally digitized
in AVI format; their resolution is 160 × 120 pixels.

Table 5: Test video clips and detection results for shot
changes. The 22 clips include cartoons, sitcoms, dramas,
soap operas, talk shows, news, commercials, movies
(Brave Heart, ATF, Simon Birch, Wag the Dog), sports
events (tennis, mountain bike racing, football), documentaries,
and music videos; for each clip the table lists its duration,
number of shot changes, recall, and precision (per-clip
figures omitted here).

To reduce computation time, we
made our test video clips by extracting frames from
these originals at the rate of 3 frames/second. To
design our test video set, we studied the videos used
in [28, 7, 9, 10, 29, 30, 2]. From theirs, we created
our set of 22 video clips. They represent six different
categories as shown in Table 5. In total, this test set
lasts about 4 hours and 30 minutes. It is more complete
than any other test sets used in [28, 7, 9, 10, 29, 30, 2].
The details of our test video set and shot boundary
detection results are given in Table 5. We observe that
the recalls and the precisions are consistent with those
obtained in our previous study [23].
5.2 Effectiveness of Scene Tree
In this study, we run the algorithms in Section 3 to
build the scene tree for various videos. To assess the
effectiveness of these algorithms, we inspected each
video and evaluated the structure of the corresponding
tree and its representative frames. Since it is difficult
to quantify the quality of these scene trees, we show
one representative tree in Figure 7. This scene tree was
built from a one-minute segment of our test video clip
"Friends." The story is as follows. Two women and one
man are having a conversation in a restaurant, and two
men come and join them. If we traverse the scene tree
from level 3 down to level 1, thereby browsing the video
non-linearly, we can recover the above story. We note that
the representative frames serve well as a summary of
important events in the underlying video.
5.3 Effectiveness of Var^BA and Var^OA
To demonstrate that Var^BA and Var^OA indeed capture
the semantics of video data, we select arbitrary shots
from our data set. For each of these shots, we compute
its Var^BA and Var^OA, and use them to retrieve similar
shots in the data set. If these two parameters are indeed
good feature values, the shots returned should resemble
some characteristics of the shot used to do the retrieval.
We show some of the experimental results in Figure 8,
Figure 9 and Figure 10. In each of these figures, the
upper, leftmost picture is the representative frame of
the video shot selected arbitrarily for the retrieval
experiment. The remaining pictures are representative
frames of the matching shots. The label under each
picture indicates the shot and the video clip the
representative frame belongs to. For instance, #12W
represents the representative frame of the 12th shot of
'Wag the dog'. Due to space limitation, we show only
the three most similar shots in each case. They are
discussed below.
Figure 8: The shot (#12W) is from 'Wag the dog'.
This shot is a close-up of a person who is talking.
The D_v and Var^BA values for this shot are 5.86 and 17.37,
respectively, as seen in Table 4(b). The shot #102
from 'Wag the dog', and the shots #64 and #154
from 'Simon Birch' were retrieved and presented in
Figure 8. The results are quite impressive in that all
four shots show a close-up view of a talking person.
Figure 9: The shot (#33W) is from 'Wag the dog',
and the content shows two people talking from some
distance. The D_v and Var^BA values for this shot are 1.46
and 9.37, respectively, as seen in Table 4(b). The
shot #11 from 'Wag the dog', and the shots #93
and #108 from 'Simon Birch' were retrieved and
presented in Figure 9. Again, the four shots are
very similar in content. All show two people talking
from some distance.
Figure 10: The shot (#76S) is from 'Simon Birch.'
The content is a person running from the kitchen to
the window. The D_v and Var^BA values for this shot are
-0.78 and 23.55, respectively, as seen in Table 4(a).
The shot #87 from 'Wag the dog', and the shots
#1 and #4 from 'Simon Birch' were retrieved and
presented in Figure 10. Two people are riding a bike
in shot #1S. In shot #4W, one person is running
in the woods. In shot #87, one person is picking
a book from a book shelf and walking to the living
room. These shots are similar in that all show a
single moving object with a changing background.
Figure 7: Scene Tree of 'Friends'.
Figure 8: Shots with similar index values - Set 1.
Figure 9: Shots with similar index values - Set 2.
Figure 10: Shots with similar index values - Set 3.
6 Concluding Remarks
We have presented in this paper a fully automatic
content-based approach to organizing and indexing
video data. There are three steps in our methodology:
- Step 1: A Camera-Tracking Shot Boundary Detection
technique is used to segment each video into
basic units called shots. This step also computes
the feature vector for each shot, which consists of
two variances V ar BA and V ar OA . These two values
capture how much things are changing in the
background and foreground areas of the shot.
- Step 2: For each video, a fully automatic method is
applied to the shots, identified in Step 1, to build a
browsing hierarchy, called Scene Tree.
- Step 3: Using the Var^BA and Var^OA values obtained
in Step 1, an index table is built to support
a variance-based video similarity model. That is,
video scenes/shots are retrieved based on given values
of V ar BA and V ar OA .
Actually, the variance-based similarity model is not
used to directly retrieve the video scenes/shots. Rather,
it is used to determine the relevant scene nodes. With
this information, the user can start the browsing from
these nodes to look for more specific scenes/shots in the
lower level of the hierarchy.
Comparing the proposed techniques with existing
methods, we can draw the following conclusions:
- Our Camera-Tracking technique is fundamentally
different from traditional methods based on pixel
comparison. Since our scheme is designed around
the very definition of shots, it offers unprecedented
accuracy.
- Unlike existing schemes for building browsing hier-
archies, which are limited to low-level entities (i.e.,
scenes), rely on explicit models, or do not consider
the video content, our technique builds a scene tree
automatically from the visual content of the video.
The size and shape of our browsing structure reflect
the semantic complexity of the video clip.
- Video retrieval techniques based on keywords are ex-
pensive, usually application dependent, and biased.
These problems remain even if the dialog can be
extracted from the video using speech recognition
methods [31]. Indexing techniques based on spatio-temporal
contents are available. They, however, rely
on complex image processing techniques, and therefore
very expensive. Our variance-based similarity
model offers a simple and inexpensive approach to
achieve comparable performance. It is uniquely suitable
for large video databases.
We are currently investigating extensions to our
variance-based similarity model to make the comparison
more discriminating. We are also studying techniques
to speed up the video data segmentation process.
--R
Video Database Systems - Issues
Comparison of automatic shot boundary detection algorithms.
Automating the creation of a digital vidoe library.
The moca workbench: Support for creativity in movie content analysis.
A visual search system for video and image databases.
A feature-based algorithm for detecting and classifying scene breaks
knowledge-based macro-segmentation of video into sequences
A shot classification method of selecting effective key-frame for video browsing
Extracting story units from long programs for video browsing and navigation.
Clustering methods for video browsing and annotation.
Modeling and querying video data.
Cinematic primitives for multimedia.
Object composition and playback models for handling multimedia data.
Knowledge guided parsing in video databases.
Developing power tools for video indexing and retrieval.
Image indexing and retrieval based on color histogram.
Constructing table-of-cont for videos
A content-based scene change detection and classification technique using background tracking
The laplacian pyramid as a compact image code.
2psm: An efficient framework for searching video information in a limited-bandwidth environment
The moving image genre-form guide
Information Retrieval - Data Structures and Algorithms
Digital video segmentation.
Videoq: An automated content based video search system using visual cues.
Exploring video structure beyond the shots.
Lessons learned from building terabyte digital video library.
--TR
Information retrieval
Object composition and playback models for handling multimedia data
Digital video segmentation
A feature-based algorithm for detecting and classifying scene breaks
Automating the creation of a digital video library
A shot classification method of selecting effective key-frames for video browsing
CONIVAS
knowledge-based macro-segmentation of video into sequences
Lessons Learned from Building a Terabyte Digital Video Library
WVTDB-A Semantic Content-Based Video Database System on the World Wide Web
Modelling and Querying Video Data
Constructing table-of-content for videos
A visual search system for video and image databases
Exploring Video Structure Beyond The Shots
--CTR
Kien A. Hua , JungHwan Oh, Detecting video shot boundaries up to 16 times faster (poster session), Proceedings of the eighth ACM international conference on Multimedia, p.385-387, October 2000, Marina del Rey, California, United States
JungHwan Oh , Maruthi Thenneru , Ning Jiang, Hierarchical video indexing based on changes of camera and object motions, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Zaher Aghbari , Kunihiko Kaneko , Akifumi Makinouchi, Topological mapping: a dimensionality reduction method for efficient video search, Proceedings of the 2002 ACM symposium on Applied computing, March 11-14, 2002, Madrid, Spain
Haoran Yi , Deepu Rajan , Liang-Tien Chia, A motion based scene tree for browsing and retrieval of compressed videos, Proceedings of the 2nd ACM international workshop on Multimedia databases, November 13-13, 2004, Washington, DC, USA
Mohamed Abid , Michel Paindavoine, A real-time shot cut detector: Hardware implementation, Computer Standards & Interfaces, v.29 n.3, p.335-342, March, 2007
JeongKyu Lee , JungHwan Oh , Sae Hwang, Scenario based dynamic video abstractions using graph matching, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Aya Aner-Wolf , John R. Kender, Video summaries and cross-referencing through mosaic-based representation, Computer Vision and Image Understanding, v.95 n.2, p.201-237, August 2004
Haoran Yi , Deepu Rajan , Liang-Tien Chia, A motion-based scene tree for browsing and retrieval of compressed videos, Information Systems, v.31 n.7, p.638-658, November 2006
Yu-Lung Lo , Wen-Ling Lee , Lin-Huang Chang, True suffix tree approach for discovering non-trivial repeating patterns in a music object, Multimedia Tools and Applications, v.37 n.2, p.169-187, April 2008 | video retrieval;video indexing;video similarity model;shot detection;video browsing |
336059 | Information dependencies. | This paper uses the tools of information theory to examine and reason about the information content of the attributes within a relation instance. For two sets of attributes X and Y, an information dependency measure (InD measure) characterizes the uncertainty remaining about the values for the set Y when the values for the set X are known. A variety of arithmetic inequalities (InD inequalities) are shown to hold among InD measures; InD inequalities hold in any relation instance. Numeric constraints (InD constraints) on InD measures, consistent with the InD inequalities, can be applied to relation instances. Remarkably, functional and multivalued dependencies correspond to setting certain constraints to zero, with Armstrong's axioms shown to be consequences of the arithmetic inequalities applied to constraints. As an analog of completeness, for any set of constraints consistent with the inequalities, we may construct a relation instance that approximates these constraints within any positive &egr;. InD measures suggest many valuable applications in areas such as data mining. | Introduction
That the well-developed discipline of information theory seemed to have so little to say about information
systems is a long-standing conundrum. Attempts to use information theory to "measure"
the information content of a relation are blocked by the inability to accurately characterize the
underlying domain. An answer to this mystery is that we have been looking in the wrong place.
The tools of information theory, dealing closely with representation issues, apply within a relation
instance and between the various attributes of that instance.
The traditional approach to information theory is based upon communication via a channel. In
each instance there is a fixed set of messages M = {v_1, ..., v_k}; when one of these is transmitted
from the sender to the receiver (via the channel), the receiver gains a certain amount of information.
The less likely a message is to be sent, the more meaningful is its receipt. This is formalized by
assigning to each message v_i a probability p_i (subject to the natural constraint that \sum_i p_i = 1),
and defining the information content of v i to be log 1 / p i (all logarithms in this paper are base 2).
Another way of viewing this measure is that the amount of information in a message is related
to how "surprising" the message is-a weather report during the month of July contains little
information if the prediction is "hot," but a prediction of "snow" carries a lot of information.
The issue of surprise is also related to the recipient's ``state of knowledge.'' In the weather report
example, the astonishment of the report "snow" was directly related to the knowledge that it
was July; in January the information content of the two reports would be vastly different. Thus
the in- or inter-dependence of two sets of messages is highly significant. If two message sets are
independent ( in the intuitive and the statistical sense), receipt of a message from one set does
not alter the information content of the other (e.g. temperature and wind speed). If two message
sets are not independent, receipt of a message from the first set may greatly alter the likelihood
of receipt, and hence information content, of messages from the second set (e.g. temperature and
form of precipitation).
A central concept in information theory is the entropy H of a set of messages, the weighted
average of the message information:
Definition 1.1 Entropy. Given a set of messages M = {v_1, ..., v_k} with probabilities {p_1, ..., p_k}, the entropy of M is
H(M) = \sum_{i=1}^{k} p_i log(1 / p_i).
Entropy is closely related to encoding of messages, in that encoding each v i using log 1 / p i bits
gives the minimal number of expected bits for transmitting messages of M .
Remark 1 Suppose that, for the messages of M, no probability is 0. Then H(M) = 0 if and only if
M contains a single message.
In a database context, information content is measured in terms of selection (specification of
a specific value) rather than transmission. This avoids the thorny problem which seems to say
that, since the database is stored on site and no transmission occurs, there is no information. In
particular, the model looks at an instance of a single relation and at values for some arbitrarily
selected tuple. For simplicity, we assume that the message source is ergodic-all tuples are equally
likely; a probability distribution could be applied to the tuples with less impact on the formalism
than on the intuition. Because of the assumption that all tuples are equally likely, the information
required to specify one particular tuple from a relation instance with n tuples is, of course, log n
and the minimal cost of encoding requires uniformly log n bits. We treat an attribute A as the
equivalent of a message source, where the message set is the active domain and each value v_i has
probability c_i/n, where v_i occurs c_i times. Thus a single value carries log n bits only if it is drawn
from an attribute which has n distinct values, that is, when the attribute is a key. The class
standing code at a typical four-year college has approximately two bits of information (somewhat
less, to the extent that attrition has skewed enrollment) while gender at VMI has little information
(using the entropy measure, since the information content of the value "female" is high, but its
receipt is unlikely).
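To illustrate this database reading of entropy, here is a rough Python sketch (ours; the representation of an instance as a list of dictionaries and the sample data are assumptions made for the example) that computes the entropy of an attribute set from value counts, with c_i/n playing the role of p_i:

from collections import Counter
import math

def attribute_entropy(rows, attrs):
    # H over the projection of a multiset of tuples (dicts) onto attrs
    n = len(rows)
    counts = Counter(tuple(row[a] for a in attrs) for row in rows)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

r = [{"A": "a", "B": "e"}, {"A": "a", "B": "f"},
     {"A": "b", "B": "g"}, {"A": "c", "B": "g"}]
print(attribute_entropy(r, ["A"]))         # entropy of attribute A
print(attribute_entropy(r, ["A", "B"]))    # joint entropy H_AB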
The major results of this paper use the common definition of information to characterize information
dependencies. This characterization has three steps. The first extends the use of entropy as a
measure of information to be an information dependency measure (Section 4). The second derives a
number of arithmetic inequalities which must always hold between particular measures in a relation
instance (Section 5). The third investigates the consequences of placing numeric constraints on some
or all measures of a relation instance. Most significantly, functional and multi-valued dependency
result from constraining certain particular measures (or their differences) to zero (Section 6).
For example, in a weather report database, month has entropy 3.58 and we might discover that
condition has entropy 1.9. But in a fixed month, condition has entropy approximately 1.6.
Thus knowing the value of month contributes approximately 0.3 bits of information to knowledge of
condition, leaving 1.6 bits of uncertainty. On the other hand, in a personnel database where EmpID
functionally determines salary, EmpID provides the entire information content of salary, with 0 bits uncertain.
In addition, the measure/constraint formulation exhibits an analog of completeness in that, for
any set of numeric constraints consistent with the arithmetic inequalities and any positive ffl, there
is a relation instance that achieves those constraints within ffl (Section 7).
This characterization of information dependency has many important theoretical and practical implications.
It allows us to more carefully investigate notions of approximate functional dependency.
It can help with normalization. It opens up whole realms of data mining approaches.
2 Preliminaries
Here are the notations and conventions used throughout.
Relations All relation instances are non-empty and are multisets; r, s denote instances. The
operators σ and Π do not filter for distinctness.
Attributes R is the schema for instance r and X, Y, Z, V, W ⊆ R. XY denotes X ∪ Y and A is
equivalent to {A} for A ∈ R. X, Y, Z partition R.
Values v is equivalent to ⟨v⟩ when ⟨v⟩ ∈ Π_A(r). x_1, ..., x_ℓ enumerates the
values of distinct(Π_X(r)), so 1 ≤ i ≤ ℓ; similarly m and y_j wrt Y, and n and z_k wrt Z.
Probabilities p_i = count(σ_{X=x_i}(r)) / count(r), and more generally p_s = count(σ_{S=s}(r)) / count(r)
for S ⊆ R (the use of i is consistent with the above); similarly p_j, p_k, p_{i,j}, p_{i,j,k}, and so on.
Two central notions to entropy are conditional probability and statistical independence. Conditional
probability allows us to make a possibly more informed probability measure of a set of values
by narrowing the scope of overall possibilities. Independence establishes a bound on how informed
the conditional probability enables us to be.
Definition 2.1 Conditional Probability. The conditional probability of Y = y_j given X = x_i, written p_{j|i},
is the probability of Y = y_j in the instance σ_{X=x_i}(r). In symbols, p_{j|i} = p_{i,j} / p_i.
Definition 2.2 Independence. X, Y are independent if p_{i,j} = p_i · p_j for all i, j.
In this paper, there are log function expressions of the form log(1/0). By convention (justified by continuity),
0 · log(1/0) = 0, and a · log(1/0) is taken to be infinite for any real number a > 0.
Lemma 2.1 log x ≤ (x - 1) log e for all x > 0.
Lemma 2.2 Let (p_1, ..., p_n) be a probability distribution and (q_1, ..., q_n) be non-negative reals such that
Σ_i q_i ≤ 1; then Σ_i p_i log(q_i/p_i) ≤ 0, i.e., Σ_i p_i log(1/p_i) ≤ Σ_i p_i log(1/q_i).
Null values are not considered here.
3 The bounds on entropy
To ease notation, we write HX for H(X). From now on, we understand that H is always associated
with a non-empty instance r; when r is not clear from context, we write H^r_X. In the remainder of
this section, we establish upper and lower bounds on the entropy function.
Lemma 3.1 Upper and Lower Bounds on Entropy. 0 ≤ H_X ≤ log ℓ.
PROOF Since 0 < p_i ≤ 1, every term p_i log(1/p_i) ≥ 0; consequently, H_X ≥ 0. For the upper bound,
suppose q_i = 1/ℓ for every i. By Lemma 2.2, H_X - log ℓ = Σ_i p_i log(1/(ℓ p_i)) ≤ 0.
Intuitively, the entropy of a set X ⊆ R being equal to zero signifies that there exists no uncertainty or
information, whereas being equal to log ℓ signifies complete uncertainty or information. Our notation
extends directly to the joint entropy of sets X, Y ⊆ R. The joint entropy of X, Y,
written H_XY, is Σ_{i,j} p_{i,j} log(1/p_{i,j}).
Lemma 3.2 Bounds on Joint Entropy. For X, Y ⊆ R, max(H_X, H_Y) ≤ H_XY ≤ H_X + H_Y,
with H_X + H_Y = H_XY iff X, Y are independent.
PROOF First inequality: applying Lemma 2.2 with q_{i,j} = p_i · p_j gives
H_XY ≤ Σ_{i,j} p_{i,j} log(1/(p_i · p_j)) = H_X + H_Y.
When X and Y are independent, p_{i,j} = p_i · p_j; thus, the inequality in the above deduction is
in fact equality.
Second inequality: Observe that p_i = Σ_j p_{i,j}, 1 ≤ j ≤ m. Then for any j, p_{i,j} ≤ p_i;
consequently, log(1/p_{i,j}) ≥ log(1/p_i) and thus H_XY ≥ Σ_{i,j} p_{i,j} log(1/p_i) = H_X,
and symmetrically for H_Y as well.
4 InD measures
An information dependency measure (InD measure) between X and Y, for X, Y ⊆ R, attempts
to answer the question "How much do we not know about Y provided we know X?" Using the
notation of Section 2, if we know that X = x_i, then we are possibly more informed about Y
and therefore can recalculate the entropy of Y as
Σ_j p_{j|i} log(1/p_{j|i}).
Amortizing this over each of the ℓ different X values according to the respective probabilities
gives the entropy of Y dependent on X, resulting in the following definition of an information
dependency measure. Note that these are measures, not metrics.
Figure 1: (left) An instance r over attributes A and B. (right) InD measures of r.
Definition 4.1 Information Dependency Measure. The information dependency measure (InD
measure) of Y given X is H_{X→Y} = Σ_i p_i Σ_j p_{j|i} log(1/p_{j|i}).
We will normally drop the word "entropy" when referring to these measures, but it is important
to keep in mind that this value is not a declaration of dependency (as is the case with FDs) but a
measure of dependency. We now characterize an InD measure H_{X→Y} in terms of the measures
H_X and H_XY.
Lemma 4.1 H_{X→Y} = H_XY - H_X.
PROOF H_{X→Y} = Σ_i p_i Σ_j p_{j|i} log(1/p_{j|i}) = Σ_{i,j} p_{i,j} log(p_i/p_{i,j}) = H_XY - H_X.
Note that H_{X→Y} is a measure of the information needed to represent Y given that X is known,
not the information that X contains about Y. This latter quantity of course is measured by H_Y - H_{X→Y} (see Section 8.2).
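As an illustration of Lemma 4.1 (ours, reusing attribute_entropy and the sample instance r from the earlier sketch), the InD measure can be computed directly as a difference of entropies:

def ind_measure(rows, X, Y):
    # H_{X -> Y} = H_XY - H_X  (Lemma 4.1)
    return attribute_entropy(rows, X + Y) - attribute_entropy(rows, X)

print(ind_measure(r, ["A"], ["B"]))   # bits still needed to know B once A is known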
5 InD measure inequalities
The relationships among InD measures are characterized by inequalities and expressions involving
the various measures. Of these formulae, several are named according to the corresponding
functional dependency inference rules, which they characterize under special circumstances.
Lemma 5.1 Reflexivity. If Y ⊆ X then H_{X→Y} = 0.
PROOF Y ⊆ X implies XY = X, so H_XY = H_X. Then by Lm 4.1, H_{X→Y} = H_XY - H_X = 0.
Lemma 5.2
Figure 2: Encodings of A, B, and B given A from Fig. 1. The marked portion of each bit string encodes A,
and similarly for B; where the two overlap is the portion of the encoding of B that is contained within the
encoding of A. The surprise after receiving A=a is witnessed by the fact that, although we know we will receive
the first bit of B=e or B=f, i.e. 0, we need an additional 1/4 bits for the second bit of B=e and B=f.
Receipt of A=b, A=c, or A=d, on the other hand, poses no surprise since B=g is completely contained therein.
Figure 2 and the lemmas of this section illustrate the situation: two InDs may interact little, so they combine
to sum their InDs, or they may interact strongly, so their combination yields total dependencies. Putting
restrictions on the left- or right-hand sides constrains the interactions and hence tightens the InD relationships.
Lemma 5.3 Union (left). H_{X→Y} + H_{X→Z} ≥ H_{X→YZ}, with equality if p_{j|i} and p_{k|i} are independent.
Lemma 5.4 H_{X→YZ} = H_{X→Y} + H_{XY→Z}.
Lemma 5.5 H_{XY→Z} ≤ H_{X→Z}.
PROOF H_{X→Y} + H_{XY→Z} = H_{X→YZ} ≤ H_{X→Y} + H_{X→Z} by Lm 5.4 and Lm 5.3;
subtracting H_{X→Y} gives the claim.
Lemma 5.6 Union (right). min(H_{X→Z}, H_{Y→Z}) ≥ H_{XY→Z}.
PROOF H_{X→Z} ≥ H_{XY→Z} and H_{Y→Z} ≥ H_{XY→Z}, both by Lm 5.5.
Lemma 5.7 Augmentation (1). H_{XZ→YZ} ≤ H_{X→Y}.
PROOF H_{XZ→YZ} = H_{XZ→Y} ≤ H_{X→Y} by Lm 5.5.
Lemma 5.8 Transitivity. H_{X→Y} + H_{Y→Z} ≥ H_{X→Z}.
PROOF H_{X→Y} + H_{Y→Z} ≥ H_{X→XY} + H_{XY→XZ} (Lm 5.7) = H_{X→XYZ} ≥ H_{X→Z}.
Lemma 5.9 Union (full). H_{X→Y} + H_{W→Z} ≥ H_{XW→YZ}.
PROOF H_{X→Y} + H_{W→Z} ≥ H_{XW→YW} + H_{WY→ZY} (Lm 5.7) ≥ H_{XW→YZ} (Lm 5.8).
Lemma 5.10 Decomposition. If Z ⊆ Y, then H_{X→Y} ≥ H_{X→Z}.
PROOF H_{X→Y} = H_{X→Z} + H_{XZ→Y} ≥ H_{X→Z} by Lm 5.4.
Lemma 5.11 Pseudotransitivity. H_{X→Y} + H_{WY→Z} ≥ H_{XW→Z}.
PROOF H_{X→Y} + H_{WY→Z} ≥ H_{XW→YW} + H_{WY→Z} (Lm 5.7) ≥ H_{XW→Z} (Lm 5.8).
Lemma 5.12 For XY
HWX!Y Z .
PROOF By Lm 5.2 we may assume wlog V ' W ' Y [ Z. Let
Z +H XZ
Y
Z +H XZ
Y
Y
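To give a concrete feel for these inequalities, the following check (ours, reusing ind_measure from the earlier sketch; the toy instance is an assumption) verifies the transitivity inequality of Lm 5.8 numerically:

rows = [{"A": 1, "B": 1, "C": 1}, {"A": 1, "B": 1, "C": 2},
        {"A": 2, "B": 3, "C": 3}, {"A": 2, "B": 3, "C": 3}]
lhs = ind_measure(rows, ["A"], ["B"]) + ind_measure(rows, ["B"], ["C"])
rhs = ind_measure(rows, ["A"], ["C"])
assert lhs + 1e-12 >= rhs    # Lm 5.8: H_{A->B} + H_{B->C} >= H_{A->C}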
6 FDs, MVDs, and Armstrong's axioms
6.1 Functional dependencies
Functional dependencies (FDs) are long-known and well-studied [8, 10]. For X, Y ⊆ R, X functionally
determines Y, written X → Y, if each X value yields a single Y value.
Lemma 6.1 X → Y iff H_{X→Y} = 0.
PROOF Recasting the FD in terms of probabilities, given any x_i ∈ X, there is a single y_j ∈ Y
such that p_{i,j} > 0, and consequently p_{j|i} = 1; each inner sum of H_{X→Y} is thus over
a singleton and contributes 0; hence, H_{X→Y} = 0. Conversely, H_{X→Y} = 0 forces every p_{j|i}
to be 0 or 1, so each x_i determines a single y_j.
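Lemma 6.1 suggests a direct, if naive, way to test an FD on an instance; the sketch below is ours and reuses ind_measure, with an arbitrary tolerance for floating-point error:

def is_fd(rows, X, Y, eps=1e-9):
    # X -> Y holds iff H_{X->Y} = 0  (Lemma 6.1)
    return ind_measure(rows, X, Y) < eps

print(is_fd(r, ["A"], ["B"]))        # False: A = 'a' occurs with both 'e' and 'f'
print(is_fd(r, ["A", "B"], ["A"]))   # True, by reflexivity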
6.2 Armstrong's axioms
Armstrong's axioms [8] are important for functional dependency theory because they provide the
basis for a dependency inferencing system. There are commonly three rules given as the Armstrong
Axioms, which are merely specializations of the above inequalities.
1. Reflexivity If Y ⊆ X then X → Y.
2. Augmentation If X → Y then XZ → YZ.
3. Transitivity If X → Y and Y → Z then X → Z.
Theorem 6.1 The Armstrong axioms can be derived directly from InD inequalities.
PROOF Reflexivity follows directly from Lm 5.1, augmentation from Lm 5.7, and transitivity from
Lm 5.8.
An additional three rules derived from the axioms are often cited as fundamental: union, pseudotransitivity,
and decomposition. These also follow from Lm 5.3, Lm 5.11, and Lm 5.10 respectively.
Interestingly, a critical distinction between Armstrong's axioms and InD inequalities is that in the
former, union can be derived from the original three axioms, whereas in the latter, union must be
derived from first principles.
6.2.1 Fixed arity dependencies
Lemma 6.1 for FDs is alternatively a statement about the number of distinct values any x_i ∈ X
determines (we work through an example to motivate this). In the case of an FD X → Y,
count-distinct(Π_Y(σ_{X=x_i}(r))) = 1 for every x_i in a non-empty r. In practice, however, the
size is often not unity and FDs are ill-suited for this; e.g., consider a {Parent, Child} relation r.
Biologically, count-distinct(Π_Parent(σ_{Child=c}(r))) = 2 for each child c ∈ Child. InD measures
can be used to model this dependency easily; H_{Child→Parent} ≤ 1.
6.3 Multivalued dependencies
In the following, X, Y, Z partition R. Multivalued dependencies (MVDs) arise naturally in database
design and are intimately related to the (natural) join operator ⋈. A multivalued dependency,
written X ↠ Y|Z, holds in r iff r = Π_XY(r) ⋈ Π_XZ(r). Intuitively, we see that the values of Y and Z are
not related to each other wrt any particular value of X.
Lemma 6.2 MVD count. Assume X ↠ Y|Z in r. Then for all x_i, the conditional probabilities of Y and Z
wrt X = x_i are independent, i.e., p_{j,k|i} = p_{j|i} · p_{k|i}.
PROOF By definition of MVDs.
Lemma 6.3 X ↠ Y|Z holds iff H_{X→Y} + H_{X→Z} = H_{X→YZ}.
PROOF (⇒) By Lemma 6.2, the conditional probabilities of Y, Z wrt X must be independent, which
is the condition required in Lemma 5.3 for equality to hold.
(⇐) By Lemma 5.3, for equality the conditional probabilities of Y, Z wrt X are independent; hence,
by Lemma 6.2, X ↠ Y|Z.
Since acyclic join dependencies can be characterized by a set of MVDs, it is clear that InD
inequalities can characterize them as well, though the "work" is really done by the characterization
of the set of MVDs.
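In the same spirit, Lemma 6.3 gives an entropy-based MVD test; this sketch is ours, reuses ind_measure, and assumes X, Y, Z partition the schema as in the lemma:

def is_mvd(rows, X, Y, Z, eps=1e-9):
    # X ->-> Y | Z  iff  H_{X->Y} + H_{X->Z} = H_{X->YZ}  (Lemma 6.3)
    gap = (ind_measure(rows, X, Y) + ind_measure(rows, X, Z)
           - ind_measure(rows, X, Y + Z))
    return abs(gap) < eps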
6.4 Additional InD inference rules
There are three standard rules of MVD inference:
1. Complementation If X ↠ Y then X ↠ R - XY.
2. Augmentation For V ⊆ W, if X ↠ Y then XW ↠ YV.
3. Transitivity If X ↠ Y and Y ↠ Z, then X ↠ Z - Y.
Both complementation and augmentation are trivially true under InD inequalities. The last rule, transitivity,
is rather interesting. For its proof, we find an alternative characterization of MVDs.
Intuitively, the proof establishes that if X ↠ Y|Z then H_{X→Z} = H_{XY→Z}:
H_{X→Y} + H_{X→Z} = H_{X→YZ} (Lm 6.3)
H_{X→Y} + H_{X→Z} = H_{X→Y} + H_{XY→Z} (Lm 5.4)
Interestingly, this is an alternative characterization of MVDs. In this case, Y does not contribute
any information about Z.
Lemma
Lemma 6.6 As a consequence of Lm 6.5,
Lemma 6.7 If Y iW jV X, then XY iW jV by Lm 5.12.
Lemma 6.8 Let
HX!R
HX!R
Lemma 6.9 Transitivity for MVDs.
HX!R
6.5 Rules involving both FDs and MVDs
There are a pair of rules that allow mixing of FDs and MVDs:
1. Conversion If X → Y then X ↠ Y.
2. Interaction If X ↠ Y and XY → Z, then X → Z - Y.
The rule for conversion is trivial. Interaction follows from Lm 6.4.
In Section 6.2, we stated a critical difference between Armstrong axioms and InD inequalities
was the distinction between what were axioms and derivable rules. Additionally, there appear to
be other fundamental differences between FDs and MVDs, and InD inequalities. For example,
consider the following problem. Let R be a schema and F a set of FDs
over R. Let I(R, F) be the set of all relation instances over R that satisfy F. For X ⊆ R, let
I(R, F)|_X = {Π_X(r) : r ∈ I(R, F)}. The question is whether there exists a set G of FDs over X
such that I(R, F)|_X = I(X, G). It is known that in general such a G does not exist. Further,
a similar negative result holds for MVDs. InD measures are a broader class than FDs and MVDs,
and the expectation is that a theorem holds: it does, trivially, since all relation instances satisfy
any set of InD inequalities.
7 InD measure constraints
To summarize the previous sections, we have defined InD measures on an instance, values that
reflect how much information is additionally required about a second set of attributes given a
first set. We have proved a number of arithmetic equalities and inequalities between various InD
measures for a given schema; these (in)equalities must hold for any instance of that schema. And we
have shown that constraining certain InD measures, or simple expressions involving InD measures,
to 0 imposes functional or multivalued dependences on the instances. We now generalize this last
step by considering arbitrary numeric constraints upon InD measures, e.g., H_{X→Y} ≤ 4/9. A
relation instance r over a schema R containing X and Y is a solution to this constraint if
H^r_{X→Y} ≤ 4/9 by standard
arithmetic. Formally,
Definition 7.1 An InD constraint system over schema R is an m × n linear system of constraints
over InD measures of subsets of R.
The constraint system is characterized by the coefficient matrix A, the bound vector b, and the vector X
of attribute sets involved, and will be written as A·H_X ≤ b, where H_X = (H_{X_1}, ..., H_{X_n})^Transpose. Observe that Definition 7.1 is
sufficient to describe any InD measure or inequality. InD constraint systems can be as simple as
requiring a single FD or as extensive as specifying the entropies of all subsets of R. However, not
every A, b, and X make sense as applied to a relation instance. Either A and b may admit
no solutions, or the solutions may violate the InD measure constraints for X.
Definition 7.2 An InD constraint system A, b, X is feasible provided that the linear system A,
b plus all InD measure constraints inferable from X is solvable.
Observe that a solution to this extended system involves finding values for each H_{X_i}.
7.1 Instances for feasible constraint systems
The question naturally arises whether an instance always exists for a feasible constraint system.
The affirmative answer to this question, whose proof is sketched below, provides InD measures with
an analog to completeness.
Before venturing into the proof of the theorem itself, we prove a simple result merely for the
sake of providing intuition for what comes after. There are two things to be observed while reading
the following proof: first, the duality between instance counts and approximate probabilities, and,
second, the way interpolation occurs.
Lemma 7.1 Given a rational c ≥ 0 and any ε > 0, there exists a relation instance r over a single attribute A
such that |H^r_A - c| < ε.
PROOF Choose k with log(k - 1) ≥ c and let f(x) be the entropy of the distribution that places probability
1 - x on one value and x/(k - 1) on each of k - 1 other values, so that f(0) = 0 and f(1) = log(k - 1).
By the intermediate value theorem, since f is a continuous
function on the interval [0, 1], and c is a value between f(0) and f(1), there exists some
x ∈ [0, 1] such that f(x) = c; the corresponding distribution is the desired
probability distribution. From this distribution we can approximate r by constructing an instance
r' over {A} with distinct values that is sufficiently large that the frequencies count(σ_{A=i}(r'))/count(r')
are close enough to the probabilities to make |H^{r'}_A - c| < ε.
While this proof is non-constructive, we can find a suitable x by, for example, binary search.
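One possible binary search, sketched in Python (ours; it reuses entropy from the first sketch, and the particular family of distributions, one value of probability 1 - x and k - 1 values sharing x, is an assumption chosen so that the entropy sweeps continuously from 0 to log(k - 1)):

def entropy_of(x, k):
    # entropy of the distribution (1 - x, x/(k-1), ..., x/(k-1))
    return entropy([1 - x] + [x / (k - 1)] * (k - 1))

def find_x(c, k, tol=1e-9):
    # bisection: assumes 0 <= c <= log2(k - 1), so a crossing point exists
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if entropy_of(mid, k) < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2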
Theorem 7.1 Instance existence. For any feasible constraint system A, b, and X, and any ffl ? 0,
there is a relation instance r that satisfies A, b, and X within ffl.
1. Using the observation from Definition 7.2, solve A, b, and X for fixed values of each H_{X_i}.
2. Pick m > 1/ε.
3. Give every attribute one value with large probability, chosen so close to 1 (as a function of m and the
number of attributes) that these highly probable values contribute a negligible amount to any
entropy.
4. The remaining probability for each attribute A_i will be divided among b_i equal-size buckets.
Thus, H_{A_i} ≈ log b_i. Find b_i so that this matches the value fixed for H_{A_i} in step 1.
Remark 2 Wlog, the A_i are ordered in decreasing entropy. Hence b_i ≥ b_{i+1}.
We will add attributes in order A_1, A_2, ....
5. At stage i the construction has included A_1, ..., A_i and we are adding A_{i+1}; that is, we
already have a joint distribution p over A_1, ..., A_i
and want to construct one over A_1, ..., A_{i+1}. We also have a single distribution q
corresponding to A_{i+1}. We actually construct two distributions pl and pu, for "p lower" and
"p upper".
(a) The upper case is simple: A_{i+1} is made independent from A_1, ..., A_i, so pu = p × q.
(b) The lower case is found by allocating the q_j among the various p's. Because b_i ≥ b_{i+1},
there are more than enough buckets to go around. With some small error, each non-zero q_j
will correspond to a unique non-zero bucket of p, and by induction the entropies of pl stay within
the required bounds.
(c) Interpolate between pl and pu to match the other entropies.
This is conceptually similar to Lm 7.1, but relies upon the unusual structure of pu caused by
the almost-unity cases of p and q, and another iteration.
8 Applications and extensions
We have presented a formal foundation incorporating information theory in relational databases.
There are many interesting and valuable applications and extensions of this work that we are
already pursuing.
8.1 Datamining
Datamining [3], the search for interesting patterns in large databases, motivated our initial work and our
interest in establishing what it means to be "interesting." A primary objective here is to
find all the InD measures with H_{X→Y} ≤ δ given an instance r over R. The search in r takes place upon
the lattice ⟨2^R, ⊆⟩, where H_{X→Y} ≤ δ is checked for every X ⊂ Y. The InD inequalities facilitate
this search.
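A brute-force version of this search, without the pruning that the inequalities enable, might look like the following Python sketch (ours; it reuses ind_measure and, for simplicity, restricts the right-hand side to single attributes):

from itertools import combinations

def approximate_fds(rows, attrs, delta):
    # report X -> A candidates with H_{X->A} <= delta
    found = []
    for k in range(1, len(attrs)):
        for X in combinations(attrs, k):
            for A in attrs:
                if A not in X and ind_measure(rows, list(X), [A]) <= delta:
                    found.append((X, A))
    return found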
Kivinen et al. [4] consider finding approximate FDs. The central notion is that of a violating pair:
for an instance r over R and X, Y ⊆ R, a pair of tuples s, t such that s[X] = t[X] but s[Y] ≠ t[Y].
They define three normalized measures, g_1, g_2 and g_3, based
upon the number of violating pairs, the number of violating tuples, and the number of violating
tuples removed to achieve a dependency, respectively. The authors state that, problematically,
the measures give very different values for some particular relations, and therefore choosing
which measure is the best, if any is, is difficult. We feel that the InD measure can shed
some light upon these metrics. The connection between these measures and InD measures is illustrated
with three instances
r 1.52 .80 .16 .8 .4
s 1.37 .95 .36 .8 .4
This example shows that H_{X→Y} can sometimes make finer distinctions than the g_i's. On the applications
side, Kivinen et al. have done substantial work related to approximate FDs, as in [4]. The
paper is important not only for the notion of approximate dependency, but also for a brief discussion
about how the errors can be cast into Armstrong-Axiom-like inequalities.
8.2 Other Metrics
Rather than considering what information X lacks about Y, we may look at the information X
contains about Y, that is I_{X→Y} = H_Y - H_{X→Y}, and
its normalized form Î_{X→Y} = I_{X→Y}/H_Y.
Some interesting results relate I and Î; for instance, I_{X→Y} = I_{Y→X}, so the unnormalized
measure is symmetric. While I makes the specification of FDs more natural, it
cannot be used to characterize MVDs. Another interesting measure that uses
additional notions from information theory is the rate s of the language over X, the average number
of bits required for each tuple projected on X. The absolute rate is s_ab =
log(count(r)). The difference s_ab - s indicates the redundancy. As X approaches R, the average
tuple entropy increases, reducing redundancy. This is pertinent especially to the following section.
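For completeness, the "contained information" measures above can be computed with the earlier sketches (ours; the normalized form assumes H_Y > 0):

def info(rows, X, Y):
    # I_{X->Y} = H_Y - H_{X->Y}: the information X contains about Y
    return attribute_entropy(rows, Y) - ind_measure(rows, X, Y)

def info_normalized(rows, X, Y):
    return info(rows, X, Y) / attribute_entropy(rows, Y)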
8.3 Connections to relational algebra
It is also worth examining how InDs behave with relational operators. For example,
Lemma 8.1 Let R = fX; Y; Zg and r be an instance of R. if r
For instance, when employing a lossless decomposition, how will both the InD measures and rates
(from above) change to indicate that the decomposition was indeed lossless?
9 Related work
There is a dearth of literature in this area, marrying information theory to information systems.
The closest work seems to be Piatetsky-Shapiro [2], who proposes a generalization of functional
dependencies, called probabilistic dependency (pdep). The author begins with the single-set measure
pdep(Y) = Σ_j p_j² (using our notation). To relate two sets of attributes X, Y, pdep(X, Y) = Σ_i p_i Σ_j p_{j|i}².
Observe that pdep(X, Y) approaches 1 as X comes closer to functionally determining Y. Since pdep is
itself inadequate, the author normalizes it using proportional reduction in variation, resulting in the known
statistical measure τ(X, Y) = (pdep(X, Y) - pdep(Y)) / (1 - pdep(Y));
τ(X, Y) > τ(Y, X) suggests that X → Y is a better FD than Y → X (and vice versa). The author describes the expectation
of both pdep and τ and how to efficiently sample for these values.
In the area of artificial intelligence, algorithms developed by Quinlan to create decision trees, a means of
classification, notably ID3 [5] and C4.5 [6], use entropy to dictate how the tree building
should proceed. In this case of supervised learning, an attribute A is selected as the target, and
the remaining attributes R - {A} form the classifier. The algorithm works by progressively selecting
attributes from the initial set R - {A}, measuring at each step how much uncertainty about A remains,
until A can be classified properly.
Acknowledgements
The authors would like to thank Dennis Groth, Dirk Van Gucht, Chris Giannella, Richard Martin,
and C.M. Rood for their helpful suggestions.
--R
The Elements of Real Analysis Second Edition.
Probabilistic data dependencies.
From data mining to knowledge discovery: An overview.
Approximate inference of functional dependencies from relations.
Induction of decision trees.
Coding and Information Theory.
Foundations of Databases.
Elements of Information Theory.
Principles of Database and Knowledge-Base Systems Vol
--TR
Principles of database and knowledge-base systems, Vol. I
Coding and information theory
Elements of information theory
C4.5: programs for machine learning
Approximate inference of functional dependencies from relations
From data mining to knowledge discovery
Bottom-up computation of sparse and Iceberg CUBE
Foundations of Databases
Data Cube
Induction of Decision Trees
Recovering Information from Summary Data
--CTR
Chris Giannella , Edward Robertson, A note on approximation measures for multi-valued dependencies in relational databases, Information Processing Letters, v.85 n.3, p.153-158, 14 February
Ullas Nambiar , Subbarao Kambhampati, Mining approximate functional dependencies and concept similarities to answer imprecise queries, Proceedings of the 7th International Workshop on the Web and Databases: colocated with ACM SIGMOD/PODS 2004, June 17-18, 2004, Paris, France
Marcelo Arenas , Leonid Libkin, An information-theoretic approach to normal forms for relational and XML data, Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.15-26, June 09-11, 2003, San Diego, California
Periklis Andritsos , Rene J. Miller , Panayiotis Tsaparas, Information-theoretic tools for mining database structure from large data sets, Proceedings of the 2004 ACM SIGMOD international conference on Management of data, June 13-18, 2004, Paris, France
Luigi Palopoli , Domenico Sacc , Giorgio Terracina , Domenico Ursino, Uniform Techniques for Deriving Similarities of Objects and Subschemes in Heterogeneous Databases, IEEE Transactions on Knowledge and Data Engineering, v.15 n.2, p.271-294, February
Marcelo Arenas , Leonid Libkin, An information-theoretic approach to normal forms for relational and XML data, Journal of the ACM (JACM), v.52 n.2, p.246-283, March 2005
Solmaz Kolahi , Leonid Libkin, On redundancy vs dependency preservation in normalization: an information-theoretic study of 3NF, Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 26-28, 2006, Chicago, IL, USA
Bassem Sayrafi , Dirk Van Gucht, Differential constraints, Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 13-15, 2005, Baltimore, Maryland
Solmaz Kolahi , Leonid Libkin, XML design for relational storage, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Chris Giannella , Edward Robertson, On approximation measures for functional dependencies, Information Systems, v.29 n.6, p.483-507, September 2004 | multivalued dependency;armstrong's axioms;functional dependency;entropy;information dependency |
336271 | Efficient Interprocedural Array Data-Flow Analysis for Automatic Program Parallelization. | AbstractSince sequential languages such as Fortran and C are more machine-independent than current parallel languages, it is highly desirable to develop powerful parallelization-tools which can generate parallel codes, automatically or semiautomatically, targeting different parallel architectures. Array data-flow analysis is known to be crucial to the success of automatic parallelization. Such an analysis should be performed interprocedurally and symbolically and it often needs to handle the predicates represented by IF conditions. Unfortunately, such a powerful program analysis can be extremely time-consuming if not carefully designed. How to enhance the efficiency of this analysis to a practical level remains an issue largely untouched to date. This paper presents techniques for efficient interprocedural array data-flow analysis and documents experimental results of its implementation in a research parallelizing compiler. Our techniques are based on guarded array regions and the resulting tool runs faster, by one or two orders of magnitude, than other similarly powerful tools. | Introduction
Program execution speed has always been a fundamental concern for computation-intensive applications.
To exceed the execution speed provided by the state-of-the-art uniprocessor machines, programs need to
take advantage of parallel computers. Over the past several decades, much effort has been invested in
efficient use of parallel architectures. In order to exploit parallelism inherent in computational solutions,
progress has been made in areas of parallel languages, parallel libraries and parallelizing compilers. This
paper addresses the issue of automatic parallelization of practical programs, particularly those written in
imperative languages such as Fortran and C.
Compared to current parallel languages, sequential languages such as Fortran 77 and C are more
machine-independent. Hence, it is highly desirable to develop powerful automatic parallelization tools
which can generate parallel codes targeting different parallel architectures. It remains to be seen how far
automatic parallelization can go. Nevertheless, much progress has been made recently in the understanding
of its future directions. One important finding by many is the critical role of array data-flow analysis
[10, 17, 20, 32, 33, 37, 38, 42]. This aggressive program analysis not only can support array privatization
[29, 33, 43] which removes spurious data dependences thereby to enable loop parallelization, but it can
also support compiler techniques for memory performance enhancement and efficient message-passing
deployment.
Few existing tools, however, are capable of interprocedural array data-flow analysis. Furthermore,
no previous studies have paid much attention to the issue of the efficiency of such analysis. Quite
understandably, rapid prototyping tools, such as SUIF [23] and Polaris [4], do not emphasize compilation
efficiency and they tend to run slowly. On the other hand, we also believe it to be important to
demonstrate that aggressive interprocedural analysis can be performed efficiently. Such efficiency is
important for development of large-sized programs, especially when intensive program modification,
recompilation and retesting are conducted. Taking an hour or longer to compile a program, for example,
would be highly undesirable for such programming tasks.
In this paper, we present techniques used in the Panorama parallelizing compiler [35] to enhance the
efficiency of interprocedural array data-flow analysis without compromising its capabilities in practice. We
focus on the kind of array data-flow analysis useful for array privatization and loop parallelization. These
are important transformations which can benefit program performance on various parallel machines. We
make the following key contributions in this paper:
ffl We present a general framework to summarize and to propagate array regions and their access
conditions, which enables array privatization and loop parallelization for Fortran-like programs
which contain nonrecursive calls, symbolic expressions in array subscripts and loop bounds, and IF
conditions that may directly affect array privatizability and loop parallelizability.
ffl We show a hierarchical approach to predicate handling, which reduces the time complexity of
analyzing the predicates which control different execution paths.
ffl We present experimental results to show that reducing unnecessary set-difference operations contributes
significantly to the speed of the array data-flow analysis.
ffl We measure the analysis speed of Panorama when applied to application programs in the Perfect
benchmark suite [3], a suite that is well known to be difficult to parallelize automatically. As a way
to show the quality of the parallelized code, we also report the speedups of the programs parallelized
by Panorama and executed on an SGI Challenge multiprocessor. The results show that Panorama
runs faster, by one or two orders of magnitude, than other known tools of similar capabilities.
We note that in order to achieve program speedup, additional program transformations often need to be
performed in addition to array data-flow analysis, such as reduction-loop recognition, loop permutation,
loop fusion, advanced induction variable substitution and so on. Such techniques have been discussed
elsewhere and some of them have been implemented in both Polaris [16] and more recently in Panorama.
The techniques which are already implemented consume quite insignificant portion of the total analysis
and transformation time, since array data-flow analysis is the most time-consuming part. Hence we do
not discuss their details in this paper.
The rest of the paper is organized as follows. In Section 2, we present background materials for
interprocedural array data-flow analysis and its use for array privatization and loop parallelization. We
point out the main factors in such analysis which can potentially slow down the compiler drastically. In
Section 3, we present a framework for interprocedural array data-flow analysis based on guarded array
regions. In Section 4, we discuss several implementation issues. We also briefly discuss how array data-flow
analysis can be performed on programs with recursive procedures and dynamic arrays. In Section 5, we
discuss the effectiveness and the efficiency of our analysis. Experimental results are reported to show the
parallelization capabilities of Panorama and its high time efficiency. We compare related work in Section
6 and conclude in Section 7.
2 Background
In this section, we briefly review the idea of array privatization and give reasons why an aggressive
interprocedural array data-flow analysis is needed for this important program transformation.
2.1 Array Privatization
If a variable is modified in different iterations of a loop, writing conflicts result when the iterations are
executed by multiple processors. Quite often, array elements written in one iteration of a DO loop are
used in the same iteration before being overwritten in the next iteration. This kind of arrays usually
Figure 1: A Simple Example of Array Privatization.
serve as a temporary working space within an iteration and the array values in different iterations are
unrelated. Array privatization is a technique that creates a distinct copy of an array for each processor
such that the storage conflict can be eliminated without violating program semantics. Parallelism in the
program is increased. Data access time may also be reduced, since privatized variables can be allocated
to local memories. Figure 1 shows a simple example, where the DOALL loop after transformation is to
be executed in parallel. Note that the value of A(1) is copied from outside of the DOALL loop since
A(1) is not written within the DOALL loop. If the values written to A(k) in the original DO loop are
live at the end of the loop nest, i.e. the values will be used by statements after the loop nest, additional
statements must be inserted in the DOALL loop which, in the last loop iteration, will copy the values of
to A(k). In this example, we assume A(k) are dead after the loop nest, hence the absence of the
copy-out statements.
Practical cases of array privatization can be much more complex than the example in Figure 1. The
benefit of such transformation, on the other hand, can be significant. Early experiments with manually-
performed program transformations showed that, without array privatization, program execution speed
on an Alliant FX/80 machine with 8 vector processors would be slowed down by a factor of 5 for programs
MDG, OCEAN, TRACK and TRFD in the well-known Perfect benchmark suite [15]. Recent experiments
with automatically transformed codes running on an SGI Challenge multiprocessor show even more
striking effects of array privatization on a number of Perfect benchmark programs [16].
2.2 Data Dependence Analysis vs. Array Data-flow Analysis
Conventional data dependence analysis is the predecessor of all current work on array data-flow analysis.
In his pioneering work, D.J. Kuck defines flow dependence, anti- dependence and output dependence [26].
While the latter two are due to multi-assignments to the same variable in imperative languages, the flow
dependence is defined between two statements, one of which reads the value written by the other. Thus,
the original definition of flow dependence is precisely a reaching-definition relation. Nonetheless, early
compiler techniques were not able to compute array reaching-definitions and therefore, for a long time,
flow dependence is conservatively computed by asserting that one statement depends on another if the
former may execute after the latter and both may access the same memory location. Thus, the analysis of
all three kinds of data dependences reduces to the problem of memory disambiguation, which is insufficient
for array privatization.
Array data-flow analysis refers to computing the flow of values for array elements. For the purpose of
array privatization and loop parallelization, the parallelizing compiler needs to establish the fact that, as
in the case in Figure 1, no array values are written in one iteration but used in another.
2.3 Interprocedural Analysis
In order to increase the granularity of parallel tasks and hence the benefit of parallel execution, it
is important to parallelize loops at outer levels. Unfortunately, such outer-level loops often contain
procedure calls. A traditional method to deal with such loops is in-lining, which substitutes procedure
calls with the bodies of the called procedures. Illinois' Polaris [4], for example, uses this method.
Unfortunately, many important compiler transformations increase their consumed time and storage
quadratically, or at even higher rates, with the number of operations within individual procedures.
Hence, there is a severe limit on the feasible scope of in-lining. It is widely recognized that, for large-scale
applications, often a better alternative is to perform interprocedural summary analysis instead of in-lining.
Interprocedural data dependence analysis has been discussed extensively [21, 24, 31, 40]. In recent years
we have seen increased efforts on array data-flow analysis [10, 17, 20, 32, 33, 37, 38, 42]. However, few
tools are capable of interprocedural array data-flow analysis without in-lining [10, 20, 23].
2.4 Complications of Array Data-flow Analysis
In reality, a parallelizing compiler not only needs to analyze the effects of procedure calls, but it may also
need to analyze relations among symbolic expressions and among branching conditions.
The examples in Figure 2 illustrate such cases. In these three examples, privatizing the array A will
make it possible to parallelize the I loops. Figure 2(a) shows a simplified loop from the MDG program
(routine interf) [3]. It is a difficult example which requires certain kind of inference between IF conditions.
Although both A and B are privatizable, we will discuss A only, as B is a simple case. Suppose that the
condition kc:NE:0 is false and, as the result, the last loop K within loop I gets executed and A(6 :
gets used. We want to determine whether A(6 : may use values written in previous iterations of loop I .
Condition kc:NE:0 being false implies that, within the same iteration of I , the statement
not executed. Thus, B(K):GT:cut2 is false for all 9 of the first DO loop K. This fact further
implies that B(K + 4):GT:cut2 is false for of the second DO loop K, which ensures that
gets written before its use in the same iteration I . Therefore, A is privatizable in loop I .
Figure
2(b) illustrates a simplified version of a segment of the ARC2D program(routine filerx)[3]. The
Figure 2: More Complex Examples of Privatizable Arrays: code fragments (a), (b) and (c) discussed in the text.
condition :NOT:p is invariant for DO loop I . As the result, if A(jmax) is not modified in one iteration,
thus exposing its use, then A(jmax) should not be modified in any iteration. Therefore, A(jmax) never
uses any value written in previous iterations of I . Moreover, it is easy to see that the use of A(jlow : jup)
is not upward exposed. Hence, A is privatizable and loop I is a parallel loop. In this example, the
IF condition being loop invariant makes sure that there is no loop-carried flow dependence. Otherwise,
whether a loop-carried flow dependence exists in Figure 2(b) depends upon the IF condition.
Figure
2(c) shows a simplified version of a segment of the OCEAN program(routine ocean)[3]. Interprocedural
analysis is needed for this case. In order to privatize A in the I loop, the compiler must
recognize the fact that if a call to out in the I loop does use A(1 : m), then the call to in in the same
iteration must modify A(1 : m), so that the use of A(j) must take the value defined in the same iteration
of I . This requires to check whether the condition x ? SIZE in subroutine out can infer the condition
x ? SIZE in subroutine in. For all three examples above, it is necessary to manipulate symbolic
operations. Previous and current work suggests that the handling of conditionals, symbolic analysis and
interprocedure analysis should be provided in a powerful compiler.
Because array data-flow analysis must be performed over a large scope to deal with the whole set
of the subroutines in a program, algorithms for information propagation and for symbolic manipulation
must be carefully designed. Otherwise, this analysis will simply be too time-consuming for practical
compilers. To handle these issues simultaneously, we have designed a framework which is described next.
3 Array Data-Flow Analysis Based on Guarded Array Regions
In traditional frameworks for data-flow analysis, at each meet point of a control flow graph, data-flow
information from different control branches is merged under a meet operator. Such merged information
typically does not distinguish information from different branches. The meet operator can be therefore
said to be path-insensitive. As illustrated in the last section, path-sensitive array data-flow information
can be critical to the success of array privatization and hence loop parallelization. In this section,
we present our path-sensitive analysis that uses conditional summary sets to capture the effect of IF
conditions on array accesses. We call the conditional summary sets guarded array regions (GAR's).
3.1 Guarded Array Regions
Our basic unit of array reference representation is a regular array region.
Definition A regular array region of array A is denoted by A(r is the dimension
of A and r i , is a range in the form of (l being symbolic expressions. The
all values from l to u with step s, which is simply denoted by (l) if l = u and by
array region is represented by ;, and an unknown array region is represented
by
The regular array region defined above is more restrictive than the original regular section proposed by
Callahan and Kennedy [6]. The regular array region does not contain any inter-dimensional relationship.
This makes set operations simpler. However, a diagonal and a triangular shape of an array cannot be
represented exactly. For instance, for an array diagonal A(i; i),
triangular are approximated by the same regular array region:
Regular array regions can cover the most frequent cases in real programs and they seem to have an
advantage in efficiency when dealing with the common cases. The guards in GAR's (defined below) can
be used to describe the more complex array sections, although their primary use is to describe control
conditions under which regular array regions are accessed.
Definition A guarded array region (GAR) is a tuple [P, R] which contains a regular array region R and
a guard P, where P is a predicate that specifies the condition under which R is accessed. We use Δ to
denote a guard whose predicate cannot be written explicitly, i.e. an unknown guard. If both P and R
are unknown, we say that the GAR [P, R] = Ω is unknown. Similarly, if either P is False or R is ∅, we say that [P, R] = ∅.
In order to preserve as much precision as possible, we try to avoid marking a whole array region as
unknown. If a multi-dimensional array region has only one dimension that is truly unknown, then only
that dimension is marked as unknown. Also, if only one item in a range tuple (l say u, is
unknown, then we write the tuple as (l
Let a program segment, n, be a piece of code with a unique entry point and a unique exit point. We
use results of set operations on GAR's to summarize two essential pieces of array reference information
for n which are listed below.
ffl UE(n): the set of array elements which are upwardly exposed in n if these elements are used in n
and they take the values defined outside n.
ffl MOD(n): the set of array elements written within n.
In addition, the following sets, which are also represented by GAR's, are used to describe array references
in a DO loop l with its body denoted by b:
the set of the array elements used in an arbitrary iteration i of DO loop l that are upwardly
exposed to the entry of the loop body b.
the subset of array elements in UE i (b) which are further upwardly exposed to the entry of
the DO loop l.
ffl MOD i (b): the set of the array elements written in loop body b for an arbitrary iteration i of DO
loop l. Where no confusion results, this may simply be denoted as MOD i .
the same as MOD i (b).
ffl MOD !i (b): the set of the array elements written in all of the iterations prior to an arbitrary
iteration i of DO loop l. Where no confusion results, this may simply be denoted as MOD !i .
ffl MOD !i (l): the same as MOD !i (b).
ffl MOD ?i (b): the set of the array elements written in all of the iterations following an arbitrary
iteration i of DO loop l. Where no confusion results, this may simply be denoted as MOD ?i .
ffl MOD ?i (l): the same as MOD ?i (b).
Take
Figure
2(c) for example. For loop J of subroutine in, UE j is empty and MOD j equals
[T rue; B(j)]. Therefore MOD !j is [1 !
The MOD for the loop J is [1 hence the MOD of subroutine in is
Similarly, UE j for loop J of subroutine out is [T rue; B(j)],
and UE for the same loop is [1 Lastly, UE of the subroutine out is [x -
Our data-flow analysis requires three kinds of operations on GAR's: union, intersection, and difference.
These operations in turn are based on union, intersection, and difference operations on regular array
regions as well as logical operations on predicates. Next, we will first discuss the operations on array
regions, then on GAR's.
3.2 Operations on Regular Array Regions
As operands of the region operations must belong to the same array, we will drop the array name from
the array region notation hereafter whenever there is no confusion. Given two regular array regions,
is the dimension of array A, we define the
following operations:
For the sake of simplicity of presentation, here we assume steps of 1 and leave Section 4 for discussion
of other step values. Let r 1
have
ae
(D
Note that we do not keep max and min operators in a regular array region. Therefore, when the
relationship of symbolic expressions can not be determined even after a demand-driven symbolic
analysis is conducted, we will mark the intersection as unknown.
Since these regions are symbolic ones, care must be taken to prevent false regions created by union
operations. For example, knowing R we have R 1
if and only if both R 1 and R 2 are valid. This can be guaranteed nicely by imposing validity
predicates into guards as we did in [20]. In doing so, the union of two regular regions should
be computed without concern for validity of these two regions. Since this introduces additional
predicate operations that we try to avoid, we will usually keep the union of two regions without
merging them until they, like constant regions, are known to be valid.
For an m-dimensional array, the result of the difference operation is generally 2 m regular regions
if each range difference results in two new ranges. This representation could be quite complex
for large m; however, it is useful to describe the general formulas of set difference operations.
first define R 1 (k) and R 2 (k),
as the last k ranges within R 1 and R 2 respectively. According to this definition, we
have R 1
). The computation of R 1 recursively given by the following
ae
The following are some examples of difference operations,
In order to avoid splitting regions due to difference operations, we routinely defer solving difference
operations, using a new data structure called GARWD to temporarily represent the difference results. As
we shall show later, using GARWD's keeps the summary computation both efficient and exact. GARWD's
are defined in the following subsection.
3.3 Operations on GAR's and GARWD's
Given two GAR's, we have the following:
The most frequent cases in union operations are of two kinds:
the union becomes [P 1
- If R the result is [P
If two array regions can not be safely combined due to the unknown symbolic terms, we keep two
GAR's in a list without merging them.
As discussed previously, R 1 may be multiple array regions, making the actual result of T
potentially complex. However, as we shall explain via an example, difference operations can often
be canceled by intersection and union operations. Therefore, we do not solve the difference
unless the result is a single GAR, or until the last moment when the actual result must be solved
in order to finish data dependence tests or array privatizability tests. When the difference is not
yet solved by the above formula, it is represented by a GARWD.
Definition A GAR with a difference list (GARWD) is a set defined by two components: a source GAR
and a difference list. The source GAR is an ordinary GAR as defined above, while the difference list is a
list of GAR's. The GARWD set denotes all the members of the source GAR which are not in any GAR
on the difference list. It is written as f source GAR, !difference list? g. 2
The following examples show how to use the above formulas:
which is a GARWD. Note that, if we cannot further postpone
A(1:N:1)=. denoted by (3)
denoted by (2)
denoted by (1)
ENDDO
ENDDO
Figure
3: An Example of GARWD's
solving of the above difference, we can solve it to
GARWD operations:
Operations between two GARWD's and between a GARWD and a GAR can be easily derived from
the above. For example, consider a GARWD gwd=fg ?g and a GAR g. The result of subtracting
g from gwd is the following:
1. fg 3
2.
3.
where g 3 is a single GAR. The first formula is applied if the result of (g exactly a single GAR g 3 .
Because g 1 and g may be symbolic, the difference result may not be a single GAR. Hence, we have the
third formula. Similarly, the intersection of gwd and g is:
1. fg 4
2. ;, if (g \Gamma
3. unknown otherwise.
where g 4 is also a single GAR.
The Union of two GARWD's is usually kept in the list, but it can be merged in some cases. Some
concrete examples are given below to illustrate the operations on GARWD's:
ENDDO
END
C=.
DO I=1,m
Figure
4: Example of the HSG
Figure
3 is an example showing the advantage of using GARWD's. The right-hand side is the summary
result for the body of the outer loop, where the subscript i in UE i and in MOD i indicates that these
two sets belong to an arbitrary iteration i. UE i is represented by a GARWD. For simplicity, we omit the
guards whose values are true in the example. To recognize array A as privatizable, we need to prove that
no loop-carried data flow exists. The set of all mods within those iterations prior to iteration i, denoted
by MOD !i , is equal to MOD i . (In theory, MOD which nonetheless does not invalidate
the analysis.) Since both GAR's in the MOD !i list are in the difference list of the GARWD for UE i , it
is obvious that the intersection of MOD !i and UE i is empty, and that therefore array A is privatizable.
We implement this by assigning each GAR a unique region number, shown in parentheses in Figure 3,
which makes intersection a simple integer operation.
As shown above, our difference operations, which are used during the calculation of UE sets, do not
result in the loss of information. This helps to improve the effectiveness of our analysis. On the other
hand, intersection operations may result in unknown values, due to the intersections of the sets containing
unknown symbolic terms. A demand-driven symbolic evaluator is invoked to determine the symbolic
values or the relationship between symbolic terms. If the intersection result cannot be determined by the
evaluator, it is marked as unknown.
In our array data-flow framework based on GAR's, intersection operations are performed only at the
last step when our analyzer tries to conduct dependence tests and array privatization tests, at the point
where a conservative assumption must be made if an intersection result is marked as unknown. The
intersection operations, however, are not involved in the propagation of the MOD and UE sets, and
therefore they do not affect the accuracy of those sets.
3.4 Computing UE and MOD Sets
The UE and MOD information is propagated backward from the end to the beginning of a routine or a
program segment. Through each routine, these two sets are summarized in one pass and the results are
(a)
in
out
(b)
out
in
U padd((MOD(S2) U MOD_IN(out)), ~p)
Figure
5: Computing Summary Sets for Basic Control Flow Components
saved. The summary algorithm is invoked on demand for a particular routine, so it will not summarize a
routine unless necessary. Parameter mapping and array reshaping are done when the propagation crosses
routine boundaries.
To facilitate interprocedural propagation of the summary information, we adopt a hierarchical supergraph
(HSG) to represent the control flow of the entire program. The HSG augments the supergraph
proposed by Myers [36] by introducing a hierarchy among nested loops and procedure calls. An HSG
contains three kinds of nodes: basic block nodes, loop nodes and call nodes. A DO loop is represented
by a loop node which is a compound node whose internal flow subgraph describes the control flow of the
loop body. A procedure call site is represented by a call node, which has an outgoing edge pointing to
the entry node of the flow subgraph of the called procedure and has an incoming edge from the unique
exit node of the called procedure. Due to the nested structures of DO loops and routines, a hierarchy for
control flow is derived among the HSG nodes, with the flow subgraph at the highest level representing the
main program. The HSG resembles the HSCG used by the PIPS project for parallel task scheduling [25].
Figures
4 shows an example of the HSG. Note that the flow subgraph of a routine is never duplicated
for different calls to the same routine, unless multiple versions of the called routine are created by the
compiler to enhance its potential parallelism. More details about the HSG and its implementation can
be found in reference [20, 18].
During the propagation of the array data-flow information, we use MOD IN (n) to represent the array
elements that are modified in nodes which are forwardly reachable from n (at the same or lower HSG
level as n), and we use UE IN (n) to represent the array elements whose values are imported to n and
are used in the nodes forwardly reachable from n. Suppose a DO loop l, with its body denoted by b, is
represented by a loop node N and the flow subgraph of b has the entry node n. We have UE i
(b) equal
to UE IN (n) and UE (N) equal to the expansion of UE i
(b) (see below). Similarly, we have MOD i
(b)
i=l,u,s
Loop: d
Figure
Expansion of Loop Summaries
equal to MOD IN (n) and MOD(N) equal to the expansion of MOD i
(b). The MOD and MOD IN sets
are represented by a list of GAR's, while the UE and UE IN sets by a list of GARWD's.
Figure
5 (a) and (b) show how the MOD IN and UE IN sets are propagated, in the direction opposite
to the control flow, through a basic block S and a flow subgraph for an IF statement (with the then-branch
S1 and the else-branch S2), respectively. During the propagation, variables appearing in certain summary
sets may be modified by assignment statements, and therefore their right-hand side expressions substitute
for the variables. For simplicity, such variable substitutions are not shown in Figure 5. Figure 5 (b) shows
that, when summary sets are propagated to IF branches, IF conditions are put into the guards on each
branch, and this is indicated by function padd() in the figure.
The whole summary process is quite straightforward, except that the computation of UE sets for loops
needs further analysis to support summary expansion, as illustrated by Figure 6.
Given a DO loop with index I, I 2 (l; u; s), suppose UE i and MOD i are already computed for an
arbitrary iteration i. We want to calculate UE and MOD sets for the entire I loop, following the formula
below:
MOD
The \Sigma summation above is also called an expansion or projection, denoted by proj() in Figure 6, which
is used to eliminate i from the summary sets. The UE calculation given above takes two steps. The first
step computes (UE which represents the set of array elements which are used in iteration i
and have been exposed to the outside of the whole I loop. The second step projects the result of Step 1
against the domain of i, i.e. the range (l to remove i. The expansion for a list of GAR's and a
list of GARWD's consists of the expansion of each GAR and each GARWD in the lists. Since a detailed
discussion on expansion would be tedious, we will provide a guideline only in this paper (see Appendix).
DO I1=1,100
DO I2=1,100
ENDDO
ENDDO
END
DO I1=1,N1-1
ENDDO
END
(a) (b)
Figure
7: Examples of Symbolic Expressions in Guarded Array Regions
4 Implementation Considerations and Extensions
4.1 Symbolic Analysis
Symbolic analysis handles expressions which involve unknown symbolic terms. It is widely used in
symbolic evaluation or abstract interpretation to discover program properties such as values of expressions,
relationships between symbolic expressions, etc. Symbolic analysis requires the ability to represent and
manipulate unknown symbolic terms. Among several expression representations, a normal form is often
used [7, 9, 22]. The advantage of a normal form is that it gives the same representation for congruent
expressions. In addition, symbolic expressions encountered in array data-flow analysis and dependence
analysis are mostly integer polynomials. Operations on integer polynomials, such as the comparison of
two polynomials, are straightforward. Therefore, we adopt integer polynomials as our representation for
expressions. Our normal form, which is essentially a sum of products, is given below:
where each I i is an index variable and t i is a term which is given by equation (2) below:
where p j is a product, c j is an integer constant (possible integer fraction), x j
k is an integer variable but
not an index variable, N is the nesting number of the loop containing e, M i is the number of products
in t i , and L j is the number of variables in p j .
Take the program segments in Figure 7 as examples. For subroutine SUB1, the MOD set of statement
contains a single GAR: [True, A(N1 I2)]. The MOD set of DO loop I2 contains [True,
100)]. The MOD set of DO loop I1 contains [True,
Lastly, the MOD set of the whole subroutine contains [True, A(N2 \Delta
For subroutine SUB2, the MOD set of statement S2 contains a single GAR: [True, A(I1)]. The MOD set
of DO loop I1 contains [N1 ? 1, 1)]. The MOD set of the IF statement contains [N1 ? N6 -
Lastly, the MOD set of the whole subroutine contains [N2
All expressions e, t i , and p j in the above are sorted according to a unique integer key assigned to each
variable. Since both M i and L j control the complexity of a polynomial, they are chosen as our design
parameters. As an example of using M i and L j to control the complexity of expressions, e will be a linear
expression (affine) if M i is limited to be 1 and L j to be zero. By controlling the complexity of expression
representations, we can properly control the time complexity of manipulating symbolic expressions.
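The following Python sketch illustrates how such a sum-of-products form with M_i and L_j limits might be stored; the class, the limits, and the method names are hypothetical and only mirror the description above.

# Sketch of the sum-of-products normal form: e = t_0 + sum_i t_i * I_i, where each
# term t_i is a sum of at most MAX_PRODUCTS products, and each product has an
# integer coefficient and at most MAX_VARS non-index variables.
MAX_PRODUCTS = 4   # illustrative limit on M_i
MAX_VARS = 2       # illustrative limit on L_j

class Expr:
    def __init__(self):
        # maps an index-variable name (or None for the constant term) to a list
        # of (coefficient, tuple of sorted variable names) products
        self.terms = {}

    def add_product(self, index_var, coeff, variables=()):
        prods = self.terms.setdefault(index_var, [])
        if len(prods) >= MAX_PRODUCTS or len(variables) > MAX_VARS:
            raise ValueError("expression too complex; give up symbolically")
        prods.append((coeff, tuple(sorted(variables))))

    def is_affine(self):
        # affine iff every index variable has a single constant-coefficient product
        return all(len(p) == 1 and p[0][1] == () for p in self.terms.values())

# N1*I1 + I2 + 1, expressed as an Expr:
e = Expr()
e.add_product("I1", 1, ("N1",))
e.add_product("I2", 1)
e.add_product(None, 1)
print(e.is_affine())   # False: the I1 term carries the symbolic variable N1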
Symbolic operations such as additions, subtractions, multiplications, and divisions by an integer
constant are provided as library functions. In addition, a simple demand-driven symbolic evaluation
scheme is implemented. It propagates an expression upwards along a control flow graph until the value
of expression is known or the predefined propagation limit is reached.
4.2 Range Operations
In this subsection, we give a detailed discussion of range operations for step values other than 1. To
describe the range operations, we use the functions max() and min() in the following.
These functions should be resolved to known values where possible; otherwise unknown is usually returned as the result.
Given two ranges r_1 = (l_1 : u_1 : s_1) and r_2 = (l_2 : u_2 : s_2), we proceed by cases on the step values:
1. If s_1 = s_2 = 1:
- Intersection operation. r_1 ∩ r_2 = (max(l_1, l_2) : min(u_1, u_2) : 1).
- Difference operation. Assuming r_2 ⊆ r_1 (otherwise use r_1 ∩ r_2 in place of r_2), r_1 - r_2 is (l_1 : l_2 - 1 : 1) together with (u_2 + 1 : u_1 : 1).
- Union operation. If (l_2 > u_1 + 1) or (l_1 > u_2 + 1), the two ranges cannot be combined into one range.
Otherwise, r_1 ∪ r_2 = (min(l_1, l_2) : max(u_1, u_2) : 1), assuming that r_1 and r_2 are both
valid. If it is unknown at this moment whether both are valid, we do not combine them.
2. If s_1 = s_2 = c, where c is a known constant value, we do the following:
If (l_1 - l_2) is divisible by c, then we use the formulas in case 1 (with step c) to compute the intersection, difference
and union. Otherwise, r_1 ∩ r_2 = ∅. The union r_1 ∪ r_2 usually cannot be combined into
one range and must be maintained as a list of ranges. For the special case that jl 1 \Gammal
and
3. If s_1 = s_2 and l_1 = l_2 (both of which may be symbolic expressions),
then we use the formulas in case 1 to perform the intersection, difference and union.
4. If s_1 is divisible by s_2 , we check to see if r_2 covers r_1 . If so, we have r_1 ∩ r_2 = r_1 and r_1 - r_2 = ∅.
5. In all other cases, the result of the intersection is marked as unknown. The difference is kept in a
difference list at the level of the GARWD's, and the union remains a list of the two ranges.
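A minimal Python sketch of the constant-step cases is given below; it is our own simplification, handles only known integer bounds and steps, and returns the string "unknown" where the analysis above would give up.

# Sketch of range operations for ranges (l, u, s) with known integer values.
# Only the equal-step cases are shown; other cases fall back to "unknown".

def intersect(r1, r2):
    (l1, u1, s1), (l2, u2, s2) = r1, r2
    if s1 == s2 == 1:
        l, u = max(l1, l2), min(u1, u2)
        return (l, u, 1) if l <= u else None
    if s1 == s2 and (l1 - l2) % s1 == 0:        # aligned, same stride
        l, u = max(l1, l2), min(u1, u2)
        return (l, u, s1) if l <= u else None
    if s1 == s2:                                 # same stride, misaligned
        return None
    return "unknown"

def union(r1, r2):
    (l1, u1, s1), (l2, u2, s2) = r1, r2
    if s1 == s2 == 1 and not (l2 > u1 + 1 or l1 > u2 + 1):
        return (min(l1, l2), max(u1, u2), 1)
    return [r1, r2]                              # keep as a list of ranges

print(intersect((1, 100, 2), (3, 51, 2)))        # (3, 51, 2)
print(union((1, 10, 1), (8, 20, 1)))             # (1, 20, 1)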
4.3 Extensions to Recursive Calls and Dynamic Arrays
Programming languages such as Fortran 90 and C permit recursive procedure calls and dynamically
allocated data structures. In this subsection, we briefly discuss how array data-flow analysis can be
performed in the presence of recursive calls and dynamic arrays.
Recursive calls can be treated in array data-flow analysis essentially the same way as in array data
dependence analysis [30]. A recursive procedure calls itself either directly or indirectly, which forms
cycles in the call graph of the whole program. A proper order must be established for the traversal of
the call graph. First, all maximal strongly-connected components (MSC's) must be identified in the call
graph. Each MSC is then reduced to a single condensed node and the call graph is reduced to an acyclic
graph. Array data flow is then analyzed by traversing the reduced graph in a reversed topological order.
When a condensed node (i.e. an MSC) is visited, a proper order is established among all members in the
MSC for an iterative traversal. For each member procedure, the sets of modified and used array regions
(with guards) that are visible to its callers must be summarized respectively, by iterating over calling
cycles. If the MSC is a simple cycle, which is a common case in practical programs, the compiler can
determine whether the visible array regions of each member procedure grow through recursion or not,
after analyzing that procedure twice. If a region grows in a certain array dimension during recursive
calls, then a conservative estimate should be made for that dimension. In the worst case, for example,
the range of modification or use in that array dimension can be marked as unknown. A more complex
MSC requires a more complex traversal order [30].
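The traversal order described above can be sketched as follows (our own minimal Python illustration using Tarjan's algorithm; Panorama's actual data structures are not shown). Tarjan's algorithm emits strongly-connected components with callees before callers, which is exactly the reverse topological order needed here.

# Sketch: summarize procedures in reverse topological order of the condensed
# call graph (callees before callers).  The graph is {procedure: [callees]}.

def sccs(graph):
    """Tarjan's algorithm; returns strongly-connected components, callees first."""
    index, low, on_stack, stack, out = {}, {}, set(), [], []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                 # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            out.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return out

call_graph = {"main": ["solve"], "solve": ["solve", "kernel"], "kernel": []}
for component in sccs(call_graph):             # visits kernel, then {solve}, then main
    # for a cyclic component, iterate over its members until summaries converge
    print("summarize", component)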
Dynamically allocated arrays can be summarized essentially the same way as static arrays. The main
difference is that, during the backward propagation of array regions (with guards) through the control
flow graph, i.e. the HSG in this paper, if the current node contains a statement that allocates a dynamic
array, then all UE sets and MOD sets for that array are killed beyond this node.
The discussion above is based on the assumption that no true aliasing exists in each procedure, i.e.,
references to different variable names must access different memory locations if either reference is a write.
This assumption is true for Fortran 90 and Fortran 77 programs, but may be false for C programs. Before
performing array data-flow analysis on C programs, alias analysis must first be performed. Alias analysis
has been studied extensively in recent literature [8, 11, 14, 27, 28, 39, 44, 45].
5 Effectiveness and Efficiency
In this section, we first discuss how GAR's are used for array privatization and loop parallelization. We
then present experimental results to show the effectiveness and efficiency of array data-flow analysis.
5.1 Array Privatization and Loop Parallelization
An array A is a privatization candidate in a loop L if its elements are overwritten in different iterations
of L (see [29]). Such a candidacy can be established by examining the summary array MOD i set: If
the intersection of MOD i
and MOD!i is nonempty, then A is a candidate. A privatization candidate is
privatizable if there exist no loop-carried flow dependences in L. For an array A in a loop L with an
index I, if MOD !i ∩ UE i = ∅, there exists no flow dependence carried by loop L.
Let us look at Figure 2(c) again. UE
so A is privatizable within loop I . As another
example, let us look at Figure 2(b). Since MOD i is not loop-variant, we have MOD !i = MOD i ; hence MOD i ∩ MOD !i
is not empty and array A is a privatization candidate. Furthermore, UE i ∩ MOD !i reduces to a difference operation.
The last difference operation above can be easily done because the GAR [T, ...] is in the difference
list. Therefore, UE i ∩ MOD !i is empty. This guarantees that array A is privatizable.
As we explained in Section 2.1, copy-in and copy-out statements sometimes need to be inserted in
order to preserve program correctness. The general rules are (1) upwardly exposed array elements must
be copied in; and (2) live array elements must be copied-out. We have already discussed the determination
of upwardly exposed array elements. We currently perform a conservative liveness analysis proposed in
[29].
The essence of loop parallelization is to prove the absence of loop-carried dependences. For a given
DO loop L with index I , the existence of different types of loop-carried dependences can be detected in
the following order:
- loop-carried flow dependences: They exist if and only if UE i ∩ MOD !i ≠ ∅.
- loop-carried output dependences: They exist if and only if MOD i ∩ MOD !i ≠ ∅.
- loop-carried anti-dependences: Suppose we have already determined that there exist no
loop-carried output dependences; then loop-carried anti-dependences exist if and only if UE i ∩
MOD ?i ≠ ∅. (If loop-carried anti-dependences were to be considered separately, then UE i in the
above formula should be replaced by DE i , where DE i stands for the downwardly exposed use set
of iteration i.)
Take output dependences for example. In Figure 7(a), MOD i of DO loop I2 contains a single GAR,
and MOD !i contains the corresponding GAR's of the earlier I2 iterations; these regions are disjoint.
Loop-carried output dependences therefore do not exist for DO
loop I2 because MOD i ∩ MOD !i = ∅. In contrast, for DO loop I1, MOD i
and MOD !i cannot be proven disjoint. Loop-carried output
dependences exist for DO loop I1 because MOD i ∩ MOD !i ≠ ∅. Note that if an array is privatized,
then no loop-carried output dependences exist between the write references to private copies of the same
array.
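The tests above reduce to a few intersection queries once region operations are available. In the Python sketch below, regions are modeled as plain sets of subscripts purely for illustration; the real analysis operates on guarded array regions.

# Sketch of the loop-level tests, with regions modeled as Python sets.

def has_flow_dep(ue_i, mod_lt_i):
    return bool(ue_i & mod_lt_i)          # UE_i  intersect  MOD_{<i}

def has_output_dep(mod_i, mod_lt_i):
    return bool(mod_i & mod_lt_i)         # MOD_i intersect  MOD_{<i}

def has_anti_dep(ue_i, mod_gt_i):
    return bool(ue_i & mod_gt_i)          # UE_i  intersect  MOD_{>i}

def is_privatizable(mod_i, mod_lt_i, ue_i):
    # candidate: overwritten across iterations; privatizable: no flow dependence
    return has_output_dep(mod_i, mod_lt_i) and not has_flow_dep(ue_i, mod_lt_i)

def is_doall(mod_i, mod_lt_i, mod_gt_i, ue_i):
    return not (has_flow_dep(ue_i, mod_lt_i)
                or has_output_dep(mod_i, mod_lt_i)
                or has_anti_dep(ue_i, mod_gt_i))

# Each iteration overwrites A(1..5) before reading it and reads nothing earlier:
mod_i, mod_other = {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}
ue_i = set()
print(is_privatizable(mod_i, mod_other, ue_i))   # True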
5.2 Experimental Results
We have implemented our array data-flow analysis in a prototyping parallelizing compiler, Panorama,
which is a multiple pass, source-to-source Fortran program analyzer [35]. It roughly consists of the phases
of parsing, building a hierarchical supergraph (HSG) and the interprocedural scalar UD/DU chains [1],
performing conventional data dependence tests, array data-flow analysis and other advanced analyses,
and parallel code generation.
Table
1 shows the Fortran loops in the Perfect benchmark suite which should be parallelizable after
array privatization and after necessary transformations such as induction variable substitution, parallel
reduction, and event synchronization placement. This table also marks which loops require symbolic
analysis, predicate analysis and interprocedural analysis, respectively. (The details of privatizable arrays
in these loops can be found in [18].)
Columns 4 and 5 mark those loops that can be parallelized by Polaris (Version 1.5) and by Panorama,
respectively. Only one loop (interf/1000) is parallelized by Polaris but not by Panorama, because one
of the privatizable arrays is not recognized as such. To privatize this array requires implementation of
a special pattern matching which is not done in Panorama. On the other hand, Panorama parallelizes
several loops that cannot be parallelized by Polaris.
Table
2 compares the speedup of the programs selected from Table 1, parallelized by Polaris and by
Panorama, respectively. Only those programs parallelizable by either or both of the tools are selected.
The speedup numbers are computed by dividing the real execution time of the sequential codes
by the real execution time of the parallelized codes, running on an SGI Challenge multiprocessor with
four 196MHZ R10000 CPU's and 1024 MB memory. On average, the speedups are comparable between
Polaris-parallelized codes and Panorama-parallelized codes. Note that the speedup numbers may be
Table
1: Parallelizable Loops in the Perfect benchmark suite and the Required Privatization Techniques
Program Routine SA PA IA Parallel
OCEAN
Total 80% 32% 80% 7
SA: Symbolic Analysis. PA: Predicate Analysis. IA: Interprocedural Analysis.
further improved by a number of recently discovered memory-efficiency enhancement techniques. These
techniques are not implemented in the versions of Polaris and Panorama used for this experiment.
Table
3 shows wall-clock time spent on the main parts of Panorama. In Table 3, "Parsing time" is
the time to parse the program once, although Panorama currently parses a program three times (the first
time for constructing the call graph and for rearranging the parsing order of the source files, the second
time for interprocedural analysis, and the last time for code generation.)
The column "HSG & DOALL Checking" is the time taken to build the HSG, UD/DU chains, and
conventional DOALL checking. The column "Array Summary" refers to our array data-flow analysis
which is applied only to loops whose parallelizability cannot be determined by the conventional DOALL
tests.
Figure
8 shows the percentage of time spent by the array data-flow analysis and the rest of
Panorama. Even though the time percentage of array data-flow analysis is high (about 38% on average),
the total execution time is small (31 seconds maximum). To get a perspective of the overhead of our
Table
2: Speedup Comparison between Polaris and Panorama (with 4 R10000 CPU's).
Program Speedup by Polaris Speedup by Panorama
ADM 1.1 1.5
MDG 2.0 1.5
BDNA 1.2 1.2
OCEAN 1.2 1.7
ARC2D 2.1 2.2
TRFD 2.2 2.1
interprocedural analysis, the last column, marked by "f77 -O", shows the time spent by the f77 compiler
with option -O to compile the corresponding Fortran program into sequential machine code.
Table
4 lists the analysis time of Polaris alongside of that of Panorama (which includes all three times
of parsing instead of just one as in Table 3). It is difficult to provide an absolutely fair comparison. So,
these two sets of numbers are listed together to provide a perspective. The timing of Polaris (Version
1.5) is measured without the passes after array privatization and dependence tests. (We did not list the
timing results of SUIF, because SUIF's current public version does not perform array data-flow analysis
and no such timing results are publically available.) Both Panorama and Polaris are compiled by the
GNU gcc/g++ compiler with the -O optimization level. The time was measured by gettimeofday() and
is elapsed wall-clock time. When using a SGI Challenge machine, which has a large memory, the time
gap between Polaris and Panorama is reduced. This is probably because Polaris is written in C++ with
a huge executable image. The size of its executable image is about 14MB, while Panorama, written in
C, has an executable image of 1.1MB. Even with a memory size as large as 1GB, Panorama is still faster
than Polaris by one or two orders of magnitude.
5.3 Summary vs. In-lining
We believe that several design choices contribute to the efficiency of Panorama. In the next subsections,
we present some of these choices made in Panorama.
The foremost reason seems to be that Panorama computes interprocedural summary without in-lining
the routine bodies as Polaris does. If a subroutine is called in several places in the program, in-lining
causes the subroutine body to be analyzed several times, while Panorama only needs to summarize
each subroutine once. The summary result is later mapped to different call sites. Moreover, for data
dependence tests involving call statements, Panorama uses the summarized array region information,
while Polaris performs data dependences between every pair of array references in the loop body after
in-lining. Since the time complexity of data dependence tests is O(n 2 ), where n is the number of
individual references being tested, in-lining can significantly increase the time for dependence testing.
In our experiments with Polaris, we limit the number of in-lined executable statements to 50, a default
Table
3: Analysis Time (in seconds) Distribution 1
Program   Parsing   HSG & Tradi. Analysis   Array Summary   Code Generation   Total   F77 -O
ADM 3.63 12.68 11.76 3.53 31.60 54.1
QCD 1.04 3.71 3.04 1.22 9.01 20.3
MDG 0.82 2.58 2.11 0.77 6.28 12.3
BDNA 2.41 7.41 3.80 2.45 16.06 45.2
OCEAN 1.37 8.49 3.31 1.35 14.53 39.3
DYFESM 3.77 6.04 2.26 2.48 14.56 20.2
MG3D 1.67 7.46 14.87 1.70 25.71 34.0
ARC2D 2.46 6.24 10.14 1.96 20.81 37.7
TRFD 0.54 0.70 0.48 0.18 1.90 7.2
Total 24.56 72.1 70.31 20.37 187.38 349.8
1: Timing is measured on SGI Indy workstations with 134MHz MIPS R4600 CPU and 64
MB memory.
Table
4: Elapsed Analysis Time (in seconds)
Program   #Lines excl. comments   SGI Challenge 1 (Panorama, Polaris)   SGI Indy 2 (Panorama, Polaris)
ADM 4296 17.03 435 38.80 2601
MDG 935 3.02 123 7.90 551
OCEAN 1917 8.70 333 18.2 1801
TRFD 417 1.05 62 2.98 290
1 SGI Challenge with 1024MB memory and 196MHZ R10000
CPU. 2 SGI Indy with 134MHz MIPS R4600 CPU and 64
MB memory. 3 '*' means Polaris takes longer than four
hours.
Figure 8: Time percentage of array data-flow summary ("Summary" vs. "The rest") for ADM, QCD, MDG, TRACK, BDNA, OCEAN, DYFESM, MG3D, ARC2D, FLO52, TRFD, SPEC77, and in total
value used by Polaris. With this modest number, data dependence tests still account for about 30% of
the total time.
We believe that another important reason for Panorama's efficiency is its efficient computation and
propagation of the summary sets. Two design issues are particularly noteworthy, namely, the handling
of predicates and the difference set operations. Next, we discuss these issues in more detail.
5.4 Efficient Handling of Predicates
General predicate operations are expensive, so compilers often do not perform them. In fact, the majority
of predicate-handling required for our array data-flow analysis involves simple operations such as checking
to see if two predicates are identical, if they are loop-independent, and if they contain indices and affect
shapes or sizes of array regions. These can be implemented rather efficiently.
A canonical normal form is used to represent the predicates. Pattern-matching under a normal form
is easier than under arbitrary forms. Both the conjunctive normal form (CNF) and the disjunctive
normal form (DNF) have been widely used in program analysis [7, 9]. These cited works show that
negation operations are expensive with both CNF and DNF. This fact was also confirmed by our previous
experiments using CNF [20]. Negation operations occur not only due to ELSE branches, but also due
to GAR and GARWD operations elsewhere. Hence, we design a new normal form such that negation
operations can often be avoided.
We use a hierarchical approach to predicate handling. A predicate is represented by a high level
predicate tree, PT (V; E; r), where V is the set of nodes, E is the set of edges, and r is the root of
PT . The internal nodes of V are NAND operators except for the root, which is an AND operator. The
leaf nodes are divided into regular leaf nodes and negative leaf nodes. A regular leaf node represents
a predicate such as an IF condition, while a negative leaf node represents the negation of a predicate.
Figure 9: High level representation of predicates (an AND root over NAND operators, regular leaf nodes, and negative leaf nodes)
Theoretically, this representation is not a normal form because two identical predicates may have different
predicate trees, which may render pattern-matching unsuccessful. We, however, believe that such cases
are rare and that they happen only when the program is extremely complicated. Figure 9 shows a PT .
Each leaf (regular or negative) is a token which represents a basic predicate such as an IF condition or
a DO condition in the program. At this level, we keep a basic predicate as a unit and do not split it.
The predicate operations are based only on these tokens and do not check the details within these basic
predicates. Negation of a predicate tree is simple this way. A NAND operation, shown in Figure 10,
may either increase or decrease by one level in a predicate tree according to the shape of the predicate
tree. If there is only one regular leaf node (or one negative leaf node) in the tree, the regular leaf node is
simply changed to a negative leaf node (or vice versa). AND and OR operations are also easily handled,
as shown in Figure 10. We use a unique token for each basic predicate so that simple and common cases
can be easily handled without checking the contents of the predicates. The content of each predicate is
represented in CNF and is examined when necessary.
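A small Python sketch of this token-level tree is shown below; the data structure and function names are our own and are meant only to illustrate how negation either flips a single leaf or adds one NAND level, as described above.

# Sketch of the high-level predicate tree.  Leaves carry a token and a negation
# flag; internal nodes are NAND lists under an AND root.

class Leaf:
    def __init__(self, token, negated=False):
        self.token, self.negated = token, negated

class Nand:
    def __init__(self, children):
        self.children = children            # list of Leaf/Nand nodes

class Pred:
    def __init__(self, children):
        self.children = children            # AND of these nodes

def negate(p):
    # single-leaf tree: just flip the leaf; otherwise wrap everything in one NAND level
    if len(p.children) == 1 and isinstance(p.children[0], Leaf):
        leaf = p.children[0]
        return Pred([Leaf(leaf.token, not leaf.negated)])
    return Pred([Nand(p.children)])

def conjoin(p, q):
    # AND: simply merge the children under the AND root
    return Pred(p.children + q.children)

p = Pred([Leaf("N1>1")])
not_p = negate(p)
print(not_p.children[0].negated)            # True: the regular leaf became negative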
Table
5 lists several key parameters, the total number of arrays summarized, the average length of a
MOD set (column "Ave # GAR's''), the average length of a UE set (column ``Ave # GARWD's"), and
some data concerning difference and predicate operations. The total number of arrays summarized given
in the table is the sum of the number of arrays summarized in each loop nest, and an array that appears
in two disjoint loop nests is counted twice. Since the time for set operations is proportional to the square
of the length of MOD and UE lists, it is important that these lists are short. It is encouraging to see that
they are indeed short in the benchmark application programs.
Columns 7 and 8 (marked "High" and "Low") in Table 5 show that over 95% of the total predicate
operations are the high level ones, where a negation or a binary predicate operation on two basic predicates
is counted as one operation. These numbers are dependent on the strategy used to handle the predicates.
Currently, we defer the checking of predicate contents until the last step. As a result, only a few low level
Figure 10: Predicate operations: (a) negation increasing the tree by one level, (b) negation decreasing it by one level, (c) AND/OR of two predicate trees
Table
5: Measurement of Key Parameters
Program   # Arrays Summarized   Ave # GAR's   Ave # GARWD's   Difference Ops: Total / Reduced   Predicate Ops: High / Low
QCD 414 1.41 1.27 512 41 4803 41
BDNA 285 1.27 1.43 267 3 3805 4
OCEAN 96 1.72 1.53 246 19 458 36
MG3D 385 2.79 2.62 135
Total 4011 1.55 1.49 3675 314 42391 618
predicate operations are needed. Our results show that this strategy works well for array privatization,
since almost all privatizable arrays in our tested programs can be recognized. Some cases, such as
those that need to handle guards containing loop indices, do need low level predicate operations. The
hierarchical representation scheme serves well.
5.5 Reducing Unnecessary Difference Operations
We do not solve the difference of two guarded array regions
using the general formula presented in Section 2 unless the result
is a single GAR. When the difference cannot be simplified to a single GAR, the difference is represented
by a GARWD instead of by a union of GAR's, as implied by that formula. This strategy postpones the
expensive and complex difference operations until they are absolutely necessary, and it avoids propagating
a relatively complex list of GAR's. For example, let a GARWD G 1
be
and G 2
be m). We have
OE, and two difference operations represented in G 1
are reduced
(i.e., there is no need to perform them). In Table 5, the total number of difference operations and the
total number of reduced difference operations are illustrated in columns 5 and 6, respectively. Although
difference operations are reduced by only about 9% on average, the reduction is dramatic for some
programs: it is by one third for MDG and by half for MG3D.
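The effect of keeping differences unevaluated can be sketched with plain sets standing in for regions (an intentionally simplified Python model; real GARWD's carry guards and symbolic bounds). The GARWD keeps its difference list unevaluated, and an intersection query first checks whether the other operand is covered by a member of that list, in which case the result is empty and no difference is ever computed.

# Sketch of a GARWD: a main region minus a list of (unevaluated) differences.

class Garwd:
    def __init__(self, main, diffs=()):
        self.main = set(main)
        self.diffs = [set(d) for d in diffs]   # kept unevaluated

    def intersect(self, region):
        region = set(region)
        # reduced difference: if some member of the difference list covers
        # the query region, the intersection is empty without any subtraction
        if any(region <= d for d in self.diffs):
            return set()
        result = self.main & region
        for d in self.diffs:                   # evaluate only when needed
            result -= d
        return result

g = Garwd(main=range(1, 101), diffs=[range(1, 51)])
print(g.intersect(range(10, 21)))              # empty; the difference is reduced
print(sorted(g.intersect(range(45, 56))))      # [51, 52, 53, 54, 55]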
Let us use the example in Figure 2(b) to further illustrate the significance of delayed difference
operations. A simplified control flow graph of the body of the outer loop is shown in Figure 11; its nodes are the first DO J loop, the THEN branch of the IF (guarded by p), the NOT p branch, and the second DO J loop.
Figure 11: The HSG of the Body of the Outer Loop for Figure 2(b)
Suppose that each node has been summarized and that the summary results are listed below:
Following the description given in Section 3.4, we will propagate the summary sets of each node in the
following steps to get the summary sets for the body of the outer loop.
1. MOD
2. MOD
This difference operation is kept in the GARWD and will be reduced at step 4.
3. MOD
In the above, p is inserted into the guards of the GAR's, which are propagated through the TRUE
edge, and ¬p (NOT p) is then inserted into the guards propagated through the FALSE edge.
4. MOD
At this step, the computation of UE IN (p1 ) removes one difference operation because (f[p; (jlow :
is equal to ;. In other words, there is no need to perform
the difference operation represented by GARWD ?g. An advantage
of the GARWD representation is that a difference can be postponed rather than always performed.
Without using a GARWD, the difference operation at step 2 would always have to be performed, which
is not necessary and which thus increases execution time.
Therefore, the summary sets of the body of the outer loop (DO I) should be:
To determine if array A is privatizable, we need to prove that there exists no loop-carried flow
dependence for A. We first calculate MOD !i , the set of array elements written in iterations prior to
iteration i, giving us MOD !i = MOD i . The intersection of MOD !i and UE i is conducted by two
intersections, each of which is formed on one mod component from MOD !i and UE i respectively. The
first mod, [T appears in the difference list of UE i , and thus the result is obviously empty.
Similarly, the intersection of the second mod, [p; (jmax)], with the remaining use component, guarded by ¬p, is empty because their guards
are contradictory. Because the intersection of MOD !i and UE i is empty, array A is privatizable. In
both intersections, we avoid performing the difference operation in UE i , and therefore improve efficiency.
6 Related Work
There exist a number of approaches to array data-flow analysis. As far as we know, no work has
particularly addressed the efficiency issue or presented efficiency data. One school of thought attempts to
gather flow information for each array element and to acquire an exact array data-flow analysis. This is
usually done by solving a system of equalities and inequalities. Feautrier [17] calculates the source function
to indicate detailed flow information. Maydan et al. [33, 34] simplify Feautrier's method by using a Last-
Write-Tree(LWT). Duesterwald et al. [12] compute the dependence distance for each reaching definition
within a loop. Pugh and Wonnacott [37] use a set of constraints to describe array data-flow problems
and solve them basically by the Fourier-Motzkin variable elimination. Maslov [32], as well as Pugh
and Wonnacott [37], also extend the previous work in this category by handling certain IF conditions.
Generally, these approaches are intraprocedural and do not seem easily extended interprocedurally. The
other group analyzes a set of array elements instead of individual array elements. Early work uses regular
sections [6, 24], convex regions [40, 41], data access descriptors [2], etc. to summarize MOD/USE sets of
array accesses. They are not array data-flow analyses. Recently, array data-flow analyses based on these
sets were proposed (Gross and Steenkiste [19], Rosene [38], Li [29], Tu and Padua [43], Creusillet and
Irigoin [10], and M. Hall et al. [21]). Of these, ours is the only one using conditional regions (GAR's),
even though some do handle IF conditions using other approaches. Although the second group does
not provide as many details about reaching-definitions as the first group, it handles complex program
constructs better and can be easily performed interprocedurally.
Array data-flow summary, as a part of the second group mentioned above, has been a focus in the
parallelizing compiler area. The most essential information in array data-flow summary is the upwardly
exposed use set. These summary approaches can be compared in two aspects: set representation and path
sensitivity. For set representation, convex regions are highest in precision, but they are also expensive
because of their complex representation. Bounded regular sections (or regular sections) have the simplest
representation, and thus are most inexpensive. Early work tried to use a single regular section or a single
convex region to summarize one array. Obviously, a single set can potentially lose information, and it
may be ineffective in some cases. Tu and Padua [43], and Creusillet and Irigoin [10] seem to use a single
regular section and a single convex region, respectively. M. Hall et al. [21] use a list of convex regions to
summarize all the references of an array. It is unclear if this representation is more precise than a list of
regular sections, upon which our approach is based.
Regarding path sensitivity, the commonality of these previous methods is that they do not distinguish
summary sets of different control flow paths. Therefore, these methods are called path-insensitive, and
have been shown to be inadequate in real programs. Our approach, as far as we know, is the only
path-sensitive array data-flow summary approach in the parallelizing compiler area. It distinguishes
summary information from different paths by putting IF conditions into guards. Some other approaches
do handle IF conditions, but not in the context of array data-flow summary.
7 Conclusion
In this paper, we have presented an array data-flow analysis which handles interprocedural, symbolic, and
predicate analyses all together. The analysis is shown via experiments to be quite effective for program
parallelization. Important design decisions are made such that the analysis can be performed efficiently.
Our hierarchical predicate handling scheme turns out to serve very well. Many predicate operations can
be performed at high levels, avoiding expensive low-level operations. The new data structure, GARWD
(i.e. guarded array regions with a difference list), reduces expensive set-difference operations by up to
50% for a few programs, although the reduction is unimpressive for other programs. Another important
finding is that the MOD lists and the UE lists can be kept rather short, thus reducing set operation time.
As far as we know, this is the first time the efficiency issue has been addressed and data presented for
such a powerful analysis. We believe it is important to continue exploring the efficiency issue, because
unless interprocedural array data-flow analysis can be performed reasonably fast, its adoption in real
programming world would be unlikely. With continued advances of parallelizing compiler techniques,
we hope that fully or partially automatic parallelization will provide a viable methodology for machine-independent
parallel programming.
--R
A mechanism for keeping useful internal information in parallel programming tools: The data access descriptor.
The Perfect club benchmarks: Effective performance evaluation of supercomputers.
Parallel Programming with Polaris.
Symbolic analysis techniques needed for the effective parallelization of Perfect benchmarks.
Analysis of interprocedural side effects in a parallel programming environment.
Efficient flow-sensitive interprocedural computation of pointer-induced aliases and side effects
Applications of symbolic evaluation.
Interprocedural array region analyses.
Interprocedural may-alias analysis for pointers: Beyond k-limiting
A practical data-flow framework for array reference analysis and its use in optimizations
On the automatic parallelization of the perfect benchmarks.
Experience in the automatic parallelization of four perfect- benchmark programs
On the automatic parallelization of the Perfect Benchmarks.
Dataflow analysis of array and scalar references.
Structured data-flow analysis for arrays and its use in an optimizing compiler
Symbolic array dataflow analysis for array privatization and program parallelization.
Interprocedural analysis for parallelization.
Symbolic dependence analysis for parallelizing compilers.
Maximizing multiprocessor performance with the SUIF Compiler.
An implementation of interprocedural bounded regular section analysis.
Semantical interprocedural parallelization: An overview of the PIPS project.
The Structure of Computers and Computations
A safe approximate algorithm for interprocedural pointer aliasing.
Interprocedural modification side effect analysis with pointer aliasing.
Array privatization for parallel execution of loops.
Interprocedural analysis for parallel computing.
Program parallelization with interprocedural analysis.
Lazy array data-flow dependence analysis
Array data-flow analysis and its use in array privatization
Accurate Analysis of Array References.
An interprocedural parallelizing compiler and its support for memory hierarchy research.
A precise interprocedural data-flow algorithm
An exact method for analysis of value-based array data dependences
Incremental dependence analysis.
Direct parallelization of CALL statements.
Interprocedural analysis for program restructuring with parafrase.
Gated ssa-based demand-driven symbolic analysis for parallelizing compilers
Automatic array privatization.
Efficient context-sensitive pointer analysis for C programs
Program decomposition for pointer aliasing: A step towards practical analyses.
--TR
--CTR
Array resizing for scientific code debugging, maintenance and reuse, Proceedings of the 2001 ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering, p.32-37, June 2001, Snowbird, Utah, United States
Thi Viet Nga Nguyen , Franois Irigoin, Efficient and effective array bound checking, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.3, p.527-570, May 2005 | interprocedural analysis;parallelizing compiler;array data-flow analysis;symbolic analysis |
336423 | Analyzing bounding boxes for object intersection. | Heuristics that exploit bounding boxes are common in algorithms for rendering, modeling, and animation. While experience has shown that bounding boxes improve the performance of these algorithms in practice, the previous theoretical analysis has concluded that bounding boxes perform poorly in the worst case. This paper reconciles this discrepancy by analyzing intersections among n geometric objects in terms of two parameters: α, an upper bound on the aspect ratio or elongatedness of each object; and σ, an upper bound on the scale factor or size disparity between the largest and smallest objects. Letting Ko and Kb be the number of intersecting object pairs and bounding box pairs, respectively, we analyze a ratio measure of the bounding boxes' efficiency, r = Kb/(n + Ko). The analysis proves that r = O(α√σ log²σ) and r = Ω(α√σ). One important consequence is that if α and σ are small constants (as is often the case in practice), then Kb = O(Ko) + O(n), so an algorithm that uses bounding boxes has time complexity proportional to the number of actual object intersections. This theoretical result validates the efficiency that bounding boxes have demonstrated in practice. Another consequence of our analysis is a proof of the output-sensitivity of an algorithm for reporting all intersecting pairs in a set of n convex polyhedra with constant α and σ. The algorithm takes time O(n log^(d-1) n + Ko log^(d-1) n) for dimension d = 2, 3. This running time improves on the performance of previous algorithms, which make no assumptions about α and σ. | Introduction
Many computer graphics algorithms improve their performance by using bounding
boxes. The bounding box of a geometric object is a simple volume that encloses
the object, forming a conservative approximation to the object. The most common
form is an axis-aligned bounding box, whose extent in each dimension of the space is
bounded by the minimum and maximum coordinates of the object in that dimension.
(See
Figure
1 (a) for an example.)
Bounding boxes are useful in algorithms that should process only objects that
intersect. Two objects intersect only if their bounding boxes intersect, and intersection
testing is almost always more efficient for objects' bounding boxes than for
the objects themselves. Thus, bounding boxes allow an algorithm to quickly perform
a "trivial reject" test that prevents more costly processing in unnecessary cases.
This heuristic appears in algorithms for rendering, from traditional algorithms for
visible-surface determination [10], to algorithms that optimize clipping through view-frustum
culling [13], and recent image-based techniques that reconstruct new images
from the reprojected pixels of reference images [23]. Bounding boxes are also common
in algorithms for modeling, from techniques that define complex shapes as Boolean
combinations of simpler shapes [18] to techniques that verify the clearance of parts
in an assembly [11]. Animation algorithms also exploit bounding boxes, especially
collision detection algorithms for path planning [21] and the simulation of physically
based motion [2, 20, 25].
While empirical evidence demonstrates that the bounding box heuristic improves
performance in practice, the goal of proving that bounding boxes maintain high performance
in the worst case has remained elusive. Such a proof is important to reassure
practitioners that their application will not be the one in which bounding boxes happen
to perform poorly. To understand the difficulties in such a proof, consider the
use of bounding boxes when detecting pairs of colliding objects from a set, S, of n
polyhedra. Let K o be the number of colliding pairs of objects, and let K b be the
number of colliding pairs of bounding boxes. Figure 1 (b) shows an example in which
K o = 0 but K b = Ω(n²), meaning that the bounding box heuristic adds only
unnecessary overhead, and a collision detection algorithm that uses the heuristic is
slower than one that naively tests every pair of objects for collision.
Intuitively, the poor performance in this example is due to the pathological shapes
of the objects in S. In this paper, we identify two natural measures of the degree to
which objects' shapes are pathological, and we analyze the bounding box heuristic in
terms of these measures. We show that if the aspect ratio, ff, and scale factor, oe, are
bounded by small constants (as is generally the case in practice) then the bounding
box heuristic avoids poor performance in the worst case.
The aspect ratio measures the elongatedness of an object. In classical geometry,
the aspect ratio of a rectangle is defined as the ratio of its length to its width. This
definition can be extended in a variety of ways to general objects and dimensions
(a) (b)
Figure
1: (a) A polygonal object and its axis-aligned bounding box. (b)
An example with K o = 0 but K b = Ω(n²).
greater than two. It is often defined as the ratio between the volumes of the smallest
ball enclosing the object and the largest ball contained in the object. We will find it
convenient to use the volumes of L1-norm balls in the d-space. 1 Given a solid object
P in d-space, let b(P ) denote the smallest L1 ball containing P , and let c(P ) denote
the largest L1 ball contained in P . The aspect ratio of P is defined as ff(P ) = vol(b(P )) / vol(c(P )),
where vol(P ) denotes the d-dimensional volume of P . We will call b(P ) the enclosing
box , and c(P ) the core of P . Thus, the aspect ratio measures the volume of the
enclosing box relative to the core. For a set of objects S = {P_1 , P_2 , ..., P_n }, the
aspect ratio is the smallest ff such that ff ≥ ff(P_i ), for 1 ≤ i ≤ n.
The scale factor for a set of objects measures the disparity between the largest
and smallest objects. For a set of objects S = {P_1 , P_2 , ..., P_n } in d-space, we say
that S has scale factor oe if, for all 1 ≤ i, j ≤ n, we have vol(b(P_i )) ≤ oe · vol(b(P_j )).
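As a quick illustration of these definitions (our own example, not taken from the original text): let P be a thin axis-aligned rectangle of width 1 and length k in the plane. Its enclosing box b(P ) is a k × k square, so vol(b(P )) = k², while its core c(P ) is a unit square with vol(c(P )) = 1, giving ff(P ) = k². A set consisting of P together with a unit square has enclosing-box volumes k² and 1, and hence scale factor oe = k². The crossing rectangles of Figure 1 (b) are of this elongated kind (large ff), which is how they force K b = Ω(n²) while K o = 0.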
The analysis in this paper focuses on the ratio ae(S) = K b / (n + K o ),
In two dimensions, for instance, the L1 ball of radius r and center o is the axis-aligned square
of side length 2r, with center o. The choice of the norm affects only the small dimension-dependent
constant factors, and our results apply also to L 2 balls or other commonly used norms with small
changes in the constant. Note also that the estimates derived with the L1 norm are the most
pessimistic since L1 box is the most conservative bounding volume.
where K o is the number of object pairs in S with nonempty intersection, and K b is the
number of object pairs whose enclosing boxes intersect. 2 The denominator represents
the best-case work for an algorithm using the bounding box heuristic, so the ratio can
be seen as the relative performance measure of the heuristic. Ideally, this ratio would
be a small constant. Unfortunately, the pathological case of Figure 1 (b) shows that
without any assumptions on ff and oe, we can have ae = Ω(n). However, if we include
aspect ratio and scale factors in the analysis, we can prove the following theorem,
which is the main result of our paper.
Theorem 1.1 Let S be a set of n objects in d dimensions, with aspect bound ff and
scale factor oe, where d is a constant. Then, ae(S) = O(ff √oe log² oe). Asymptotically,
this bound is almost tight, as we can show a family S achieving ae(S) = Ω(ff √oe).
There are two main implications of this theorem. First, it provides a theoretical
justification for the efficiency that the bounding box heuristic shows in practice. In
most applications, ff and oe are small constants, so ae is also constant. The theorem
then indicates that K b = O(K o ) + O(n). An algorithm that uses the bounding box
heuristic is thus nearly optimal in the asymptotic sense: it does not waste time
processing bounding box intersections because their number grows no faster than the
number of actual object intersections (plus an O(n) factor which matches the overhead
the algorithm must incur if it does anything to each object). Poor performance
requires uncommon situations in which ff = Ω(n), as in Figure 1 (b). The theorem
also shows that performance is affected more by the aspect ratio than the scale factor,
so it may be worthwhile to decompose irregularly-shaped objects into more regular
pieces to reduce the aspect ratio.
The second implication of the theorem is an output-sensitive algorithm for reporting
all pairs of intersecting objects in a set of n convex polyhedra in two or three
dimensions. By using the bounding box heuristic, the algorithm can report the K o
pairs of intersecting polyhedra in O(n log^(d-1) n + K o log^(d-1) m) time, for d = 2, 3, where m
is the maximum number of vertices in a polyhedron. (We assume
that each polyhedron has been preprocessed in linear time for efficient pairwise intersection
detection [6].) Without the aspect and scale bounds, we are not aware of
any output-sensitive algorithm for this problem in three dimensions. Even in two
dimensions, the best algorithm for finding all intersecting pairs in a set of n convex
polygons takes roughly O(n^(4/3)) time plus time proportional to the output size. If ff and oe are constants, as is common in
practice, then the algorithm runs in time O(n log^(d-1) n + K o log^(d-1) m),
which is nearly optimal.
2 Notice that the L1 ball is a more conservative estimate than the axis-aligned bounding box and
so K b
is an upper bound on the number of bounding box intersections.
2 Related Work
The use of the bounding box heuristic in collision detection algorithms is representative
of its use in other algorithms. Thus, our analysis focuses on collision detection,
but we believe that our results extend to other applications.
Most collision detection algorithms that use bounding boxes can be considered
as having two phases, which we call the broad phase and narrow phase. The basic
structure of the algorithms is as follows:
ffl Broad phase: find all pairs of intersecting bounding boxes.
ffl Narrow phase: for each intersecting pair found by the broad phase, perform a
detailed intersection test on the corresponding objects.
The broad and narrow phases have distinct characteristics, and often have been
treated as independent problems for research.
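The two-phase structure can be sketched as follows in Python; this is a simplified illustration with a naive quadratic broad phase (practical systems such as I-COLLIDE use sweep-and-prune instead), and the narrow-phase test is supplied as a callback.

# Sketch of the broad/narrow phase structure for collision detection.
# A box is ((xmin, ymin), (xmax, ymax)); objects carry their own exact test.

def boxes_overlap(b1, b2):
    (lo1, hi1), (lo2, hi2) = b1, b2
    return all(lo1[k] <= hi2[k] and lo2[k] <= hi1[k] for k in range(len(lo1)))

def detect_collisions(objects, bounding_box, exact_test):
    boxes = [bounding_box(o) for o in objects]
    collisions = []
    for i in range(len(objects)):               # broad phase (naive O(n^2) here;
        for j in range(i + 1, len(objects)):    # sweep-and-prune in practice)
            if boxes_overlap(boxes[i], boxes[j]):
                if exact_test(objects[i], objects[j]):   # narrow phase
                    collisions.append((i, j))
    return collisions

# Discs as (center, radius); their boxes and an exact intersection test:
discs = [((0, 0), 1), ((1.5, 0), 1), ((5, 5), 1)]
box = lambda d: ((d[0][0] - d[1], d[0][1] - d[1]), (d[0][0] + d[1], d[0][1] + d[1]))
hit = lambda a, b: (a[0][0]-b[0][0])**2 + (a[0][1]-b[0][1])**2 <= (a[1]+b[1])**2
print(detect_collisions(discs, box, hit))       # [(0, 1)]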
Efficient algorithms for the broad phase must avoid looking at all O(n 2 ) pairs of
bounding boxes, and they do so by exploiting the specialized structure of bounding
boxes. Edelsbrunner [8] and Mehlhorn [24] describe provably efficient algorithms for
axis-aligned bounding boxes in d-space, algorithms that find the k intersecting pairs
in O(n log^(d-1) n + k) time and O(n log^(d-2) n) space. A variety of heuristic methods are
used in practice [2, 19], and empirical evidence suggest that these algorithms perform
well; the "sweep-and-prune" algorithm implemented in the I-COLLIDE package of
Cohen et al. [2] currently appears to be the method of choice. It might seem desirable
to use a broad phase the replaces axis-aligned bounding boxes with objects' convex
hulls, which provide a tighter form of bound. Unfortunately, no provably efficient
algorithm is known for finding the intersections between n convex polyhedra in three
dimensions. In two dimensions, though, a recent algorithm of Gupta et al. [14] can
report the intersecting pairs of convex polygons in roughly O(n^(4/3)) time plus time proportional to the number of reported pairs.
The narrow phase solves the problem of determing the contact or interpenetration
between two objects. Thus, the performance of a narrow phase algorithm does not
depend on n, the number of objects in the set, but rather on the complexity of
each object. If the objects are convex polyhedra, then a method due to Dobkin and
Kirkpatrick [6] can decide whether two objects intersect in O(log^(d-1) m) time, where
m is the total number of edges in the two polyhedra, and d - 3 is the dimension.
This algorithm preprocesses the polyhedra in a separate phase that runs in linear
time. Using this preprocessing, one can also compute an explicit representation of
the intersection of two convex polyhedra in time O(m), as shown by Chazelle [1].
If only one of the objects in the pair is convex, then intersection detection can be
performed in time O(m log m) [5]. The problem is more difficult if both polyhedra are
nonconvex, and only recently has a subquadratic time algorithm been discovered for
deciding if two nonconvex polyhedra intersect [27]. This algorithm takes O(m^(8/5+ε))
time to determine the first collision between two polyhedra, one of which is stationary
and the other is translating. While the provable running times of these algorithms are
important results, they are primarily of theoretical interest because the algorithms
are too complicated to be practical. As an alternative, a variety of heuristic methods
have been developed that tend to work well in practice [12, 20]. These methods use
hierarchies of bounding volumes and tree-descent schemes to determine intersections.
Our analysis of the bounding box heuristic is related to the idea of "realistic input
models," which has become a topic of recent interest in computational geometry. In
a recent paper, de Berg et al. [4] have suggested classifying various models of realistic
input models into four main classes: fatness, density, clutter, and cover complexity.
Briefly, an object is fat if it does not have long and skinny parts; a scene has low
clutter if any cube not containing a vertex of an object intersects at most a constant
number of objects; a scene has low density if a ball of radius r intersects only a
constant number of objects whose minimum enclosing ball has radius at least r; the
cover complexity is a measure of the relative sparseness of an object's neighborhood.
One of the first nontrivial results in this direction is by Matou-sek et al. [22], who
showed that the union of n fat triangles has complexity O(n log log n), as opposed to Θ(n²) for
arbitrary triangles; a triangle is fat if its minimum angle exceeds ffi, for a
constant ffi > 0. Subsequent work extended this result to show that the union of n
convex objects has complexity O(n^(1+ε)) provided that each object is fat and each
pair of objects intersects only in a constant number of points. Additional results on
fat or uncluttered objects can be found in [3, 15, 29].
3 Analysis Overview
Our proof for the upper bound on ae consists of three steps. We first consider the case
of arbitrary ff but fixed oe (Section 4). Next, in Section 5, we allow both ff and oe to
be arbitrary but assume that there are only two kinds of objects: one with box sizes
ff and the other with box sizes ffoe (the two extreme ends of the scale factor). Finally,
in Section 6, we handle the general case, where objects can have any box size in the
range [ff; ffoe]. We first detail our proof for two dimensions, and then sketch how to
extend it to arbitrary dimensions in Section 7.
4 Arbitrary Aspect Ratio but Fixed Scale
We start by assuming that the set S has scale factor one, that is, all enclosing boxes have the same volume; the aspect
ratio bound ff can be arbitrary. (Any constant bound for oe will work for our proof;
we assume one for convenience. The most straightforward way to enforce this scale
bound is to make every object's enclosing box to be the same size.) We will show that
in this case ae(S) = O(ff). We describe our proof in two dimensions; the extension to
higher dimensions is quite straightforward, and is sketched in Section 7.
Without loss of generality, let us assume that each object P in S has vol(b(P )) = ff
and hence vol(c(P )) ≥ 1. Recall that an L1 box of volume ff in two dimensions is a square
of side length √ff. We call this a size ff box. Consider a tiling of the plane by size
ff boxes that covers the portion of the plane occupied by the bounding boxes of the
objects, namely, B_1 , B_2 , ..., B_p ; see
Figure 2. We will consider each box semi-open, so that
the boundary shared by two boxes belongs to the one on the left, or above. Thus,
each point of the plane belongs to at most one box.
Figure
2: Tiling of the plane by boxes of size ff. The unit size core for the
object in B 1 is also shown.
We assume an underlying unit lattice in the plane, and assign each object P to the
lexicographically smallest lattice point contained in P . (Such a point exists
because the core is closed and has volume at least one.) Let m(q) be the number of
objects assigned to a lattice point q, and let M i denote the total number of objects
assigned to the lattice points contained in a box B i . That is, M_i = Σ_{q in B_i} m(q), where q in B i
means that the lattice point q lies in the box B i . Since the boxes in the
tiling are disjoint, we have the equality Σ_{i=1}^{p} M_i = n.
We will derive the bounds on K b
and K o in terms of M i .
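To make the counting concrete, the small Python sketch below (our own illustration, not part of the proof) bins objects by the size ff tile containing their assigned lattice point and evaluates the quantity 25 Σ_i M_i² that is used as an upper bound on K b below.

# Sketch of the counting in Lemma 4.1: bin each object by the size-alpha tile
# containing its assigned lattice point and evaluate the bound 25 * sum(M_i^2).
from collections import Counter
from math import sqrt

def tile_counts(assigned_points, alpha):
    side = sqrt(alpha)                      # tiles are squares of side sqrt(alpha)
    counts = Counter()
    for (x, y) in assigned_points:
        counts[(int(x // side), int(y // side))] += 1
    return counts

def lemma_bound(assigned_points, alpha):
    return 25 * sum(m * m for m in tile_counts(assigned_points, alpha).values())

# 16 unit-core objects packed in one 4x4 tile (alpha = 16): the bound is 25 * 16^2
points = [(i + 0.5, j + 0.5) for i in range(4) for j in range(4)]
print(lemma_bound(points, 16.0))            # 6400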
Lemma 4.1 Given a set of objects S with aspect bound ff and scale bound oe = 1, let
B_1 , B_2 , ..., B_p denote a tiling by size ff boxes as defined above, and let M i denote
the total number of objects assigned to lattice points in B i , for 1 ≤ i ≤ p. Then K b ≤ 25 Σ_{i=1}^{p} M_i².
Proof. Consider an object P assigned to B i , and let P j be another object whose
box intersects b(P ). Suppose P j is assigned to the box B j . Since b(P ) and b(P j ) intersect and each is a square of side √ff,
the L1 norm distance between the boxes B i and B j is at most 2√ff. This means that
B j is among the 24 boxes that lie within the 2√ff wide corridor around B i .
Figure
3: A box, shown in dark at the center, and its 24 neighbors.
Suppose that the boxes are labeled in the row-major order-top
to bottom, left to right in each row. Assume that the number of columns in the box
tiling is k. Then, the preceding discussion shows that if the boxes of objects P i and
P j intersect and these objects are assigned to boxes B i and B j , then we must have
j = i + ck + d,
where c, d ∈ {−2, −1, 0, 1, 2}. (The box B j can be at most two rows and two columns
away from B i . For instance, the box preceding two rows and two columns from B i
is B_{i−2k−2} .) See Figure 3. The number of box pair intersections contributed by B i
and B j is clearly no more than M i M j . Thus, the total number of such intersections
is bounded by Σ_i Σ_{c,d} M_i M_{i+ck+d} ,
where c, d ∈ {−2, −1, 0, 1, 2}. Recalling that x_1 x_2 ≤ (x_1² + x_2²)/2, we
can bound the intersection count by (1/2) Σ_i Σ_{c,d} ( M_i² + M_{i+ck+d}² ).
There are 5 possible values for c and d each, and so altogether 25 values of j for
each i. Since each index can appear once as the i and once as the j, we get that the
maximum number of intersections is at most 25 Σ_{i=1}^{p} M_i².
This completes the proof of the lemma. 2
Next, we establish a lower bound on the number of intersecting object pairs. We
will need the following elementary fact.
Lemma 4.2 Consider non-negative numbers a 1 ; a
a i
Proof. Let m denote the index for which the ratio a i =b i is maximized. Since
summing it over all i, we get
am
Dividing both sides by
completes the proof of the lemma. 2
Let us now focus on objects assigned to a box B i in our tiling. If L i is the number
of intersecting pairs among objects assigned to B i , then we have the following:
ae(S) = K_b / (n + K_o) ≤ ( 25 Σ_i M_i² ) / ( Σ_i M_i + Σ_i L_i ) ≤ 25 max_i M_i² / ( M_i + L_i ),
where the second to last inequality follows from the fact that n = Σ_i M_i and K_o ≥ Σ_i L_i , and the last
inequality follows from the preceding lemma. We will establish an upper bound on
the right hand side of this inequality by proving a lower bound on the denominator
term.
Fix a box B i in the following discussion, where 1 ≤ i ≤ p, and consider a lattice
point q in it. Since m(q) objects have q in common, at least m(q)(m(q) − 1)/2 object pair
intersections are contributed by the objects assigned to q. (Observe that each object
intersections are contributed by the objects assigned to q. (Observe that each object
is assigned to a unique lattice point, and so we count each intersection at most once.)
Thus, the total number of pairwise intersections L i among objects assigned to B i is
at least Σ_{q in B_i} m(q)(m(q) − 1)/2.
We will show that the ratio 25 M_i² / ( M_i + L_i )
never exceeds cff, where c is an absolute
constant. Considering M i fixed, this ratio is maximized when L i is minimized.
Lemma 4.3 Let x_1 , x_2 , ..., x_n be non-negative numbers that sum to z. The minimum
value of Σ_i x_i (x_i − 1)/2
is z(z − n)/2n, which is achieved when x_i = z/n for all i.
Proof. We observe the following equalities:
Σ_i x_i (x_i − 1)/2 = ( Σ_i x_i² − Σ_i x_i ) / 2 = ( Σ_i x_i² − z ) / 2.
Thus, the sum is minimized when Σ_i x_i²
is minimized. Using Cauchy's Inequality
[16], the latter is minimized when x_i = z/n. The lemma follows. 2
Since no square box of size ff can have more than 2⌈ff⌉ lattice points in it, we get
a lower bound on L i by setting m(q) = M_i / (2⌈ff⌉) for all q. Thus,
L_i ≥ M_i ( M_i − 2⌈ff⌉ ) / ( 4⌈ff⌉ ).
Lemma 4.4 For every box B_i , we have 25 M_i² / ( M_i + L_i ) ≤ 100⌈ff⌉.
Proof. Using the bound for L_i above,
25 M_i² / ( M_i + L_i ) ≤ 25 M_i² / ( M_i + M_i (M_i − 2⌈ff⌉) / (4⌈ff⌉) ) = 100⌈ff⌉ M_i / ( M_i + 2⌈ff⌉ ) ≤ 100⌈ff⌉.
This completes the proof. 2
Theorem 4.5 Let S be a set of n objects in the plane, with aspect bound ff and scale factor one. Then, ae(S) = O(ff).
5 Objects of two Fixed Sizes
In this section, we generalize the result of the previous section to the case where
objects come from the two extreme ends of the scale: their box size is either ff or ffoe.
To simplify our analysis, we will assume that both ff and oe are integral powers of 4. (Otherwise, we
just use the next nearest powers of 4 as upper bounds for ff and oe.
In d dimensions, ff and oe are assumed to be integral powers of 2^d .)
Let us call an object large if its enclosing box has size ffoe, and small otherwise.
Clearly, there are only three kinds of intersections: large-large, small-small, and large-
small. Let K l
b and K sl
b , respectively, count these intersections for the enclosing
boxes. So, for example, K sl
b is number of pairs consisting of one large and one small
object whose boxes intersect. Similarly, define the terms K l
object
intersections. The ratio bound can now be restated as
K l
(1)
K l
K sl
where
We know from the result of the previous section that
K l
- cff, for some constant c. So, we only need to establish a bound on the
third ratio, K sl
K l
, which we do as follows.
Let us again tile the plane with boxes of volume ffoe. Call these boxes
. Underlying this tiling are two grids: a level oe grid, which divides the boxes into
cells of size oe, and a level 1 grid, which divides the boxes into cells of size 1. The
level oe grid has vertices at coordinates (i
oe), while the finer grid has vertices
at coordinates (i; j), for integers j. The level oe grid is used to reason about large
objects, while the level 1 grid is used for small objects. We will mimic the proof of
the previous section, and assign objects of each class to an appropriate box. In order
to do that, we need to define subboxes of size ff within each size ffoe original box.
s
Figure
4: The box on the left shows large grid, and the one on the right
shows small grid as well as the subboxes. In this figure,
Consider a large box B i . The level oe grid partitions B i into ff boxes of volume oe
each. Next, we also partition B i into oe subboxes, each of volume ff. Since
these subboxes are perfectly aligned with both the level
1 and level oe grids. (Along a side of B i , the oe grid has vertices at distance multiples
of
while the vertices of the subboxes lie at distance multiples of
We label the oe subboxes within B row major order. Figure 4
illustrates these definitions, by showing two boxes side by side.
Now, each member of the large object set (resp. small object set) contains at
least one grid point of the large (resp. small) grid. Just as in the previous section, we
assign each object to a unique grid point (say, the one with lexicographically smallest
coordinates). Let X i denote the number of large objects assigned to all the grid points
in B i . Let y ij , for oe, denote the number of small objects assigned to the
to be the total number of small objects assigned
to level one grid points in B i .
We estimate an upper bound on K sl
b and a lower bound on K sl
in terms of X i
and Y i . Fix a box B i . The enclosing box of a large object P i , assigned to B i , can
intersect the box of a small object P j , assigned to B j , only if B j is one of the 25
neighbors of B i (including itself) that form the two layers of boxes around B i . (See
Figure
i be the box with a maximum number of small objects among
the 25 neighbors of B i , and let Y m
i be the count of the small objects in B m
. That is,
is one of 25 neighbors of B i g, and B m
i is the box corresponding
to Y m
. Then, we have the following upper bound:
K sl
Next, we estimate lower bounds on the number of object pair intersections. Let
the number of object pair intersections among the large objects assigned to
denote the object pair intersections among the small objects assigned
to B i . Since there are only ff grid points for the large objects in B i , by Lemma 4.3,
we have
ff
Similarly, each of the subboxes points of the
level 1 grid. Thus, we also have
oe
y
ff
In deriving our bound, we will use the conservative estimate of
only count the intersections between two large or two small objects. We
also use the notation S m
i for the number of object-pair intersections among the small
objects assigned to B m
i . We have the following inequalities:
K sl
where the second inequality follows from the fact that
the third
follows from the fact that a particular
can contribute the Y m
i term to at most
its 25 neighbors; and the final inequality follows from Lemma 4.2. The remaining step
of the proof now is to show that the above inequality is O(ff
oe). First, by summing
up the terms in Eqs. (2) and (3), we observe the following:
where recall that
. Thus, we have
where once again Cauchy's inequality is invoked to show that
oe
It can be easily shown that this ratio is at most 2ff p
oe, as follows. If Y m
then we have
oe:
and we have
oe:
This shows that K sl b / (n + K sl o) = O(ff √oe). Combining this with Ineq. (2), we get the
desired result, which is stated in the following theorem.
Theorem 5.1 Suppose S is a set of n objects in the plane, such that each object has
aspect ratio at most ff, and the enclosing box of each object has size either ff or ffoe.
Then, ae(S) = O(ff √oe).
6 The General Case
We now are in a position to prove our main theorem. Suppose S is a set of n
polyhedral objects, with aspect ratio bound ff and scale factor oe. Recall that for
simplicity we assume that both ff and oe are powers of four. We partition the set
S into O(log oe) classes, C_0 , C_1 , ..., C_{log oe} , such that a polyhedron P
belongs to class C_i if ff 2^i ≤ vol(b(P )) < ff 2^{i+1} . (Equivalently, the enclosing boxes of
objects in class C i have volumes between ff2 i and ff2 i+1 .) Each class behaves like a
fixed size family (the case considered in Section 4), and so we have ae(C_i ) = O(ff), for 0 ≤ i ≤
log oe. Any pair of classes behaves like the case considered in Section 5,
implying that ae(C_i ∪ C_j ) = O(ff √oe), for 0 ≤ i < j ≤
log oe. We can now formalize this
argument to show that ae(S) = O(ff √oe log² oe).
Let K ij b , for 0 ≤ i ≤ j ≤ log oe, denote the number of object pairs whose
enclosing boxes intersect such that one object belongs to class C_i and the other to class C_j . Similarly, define K ij o . Then
we have the following:
ae(S) ≤ Σ_{0 ≤ i ≤ j ≤ log oe} K ij b / (n + K ij o)
      ≤ (log oe + 1)² max_{i,j} K ij b / (n + K ij o) = O(ff √oe log² oe),
where the second inequality follows from the fact that i; j are each bounded by log oe,
and the last inequality follows directly from Theorem 5.1. This proves our main
result, which we restate in the following theorem.
Theorem 6.1 Let S be a set of n objects in the plane, with aspect ratio bound α and
scale factor bound σ. Then, ρ(S) = O(α√σ log^2 σ).
7 Extension to Higher Dimensions
The 2-dimensional result might lead one to suspect that the bound in d dimensions,
for d ≥ 2, will be O(α σ^{1/d}). In fact, the asymptotic bound in d dimensions turns
out to be the same as in two dimensions; only the constant factors are different. A
closer examination shows that the exponent on σ in Theorem 6.1 arises not from the
dimension, but rather from Cauchy's inequality.
Our proof of Theorem 6.1 extends easily to d dimensions, for d ≥ 3. The structure
of the proof remains exactly the same. We tile the d-dimensional space with boxes
(L∞ balls). The main difference arises in the number of neighboring boxes for a
given box B_i. While in the plane a box has at most 5^2 neighboring boxes in the two
surrounding layers, this number increases to 5^d in d dimensions. Since our arguments
have been volume based, they hold in d dimensions as well. Our main theorem in d
dimensions can be stated as follows.
Theorem 7.1 Let S be a set of n polyhedral objects in d-space, with aspect ratio
bound α and scale factor bound σ. Then, ρ(S) = O(α√σ log^2 σ), where the constant
of proportionality is about 5^d.
8 Lower Bound Constructions
We first describe a construction of a family S with ρ(S) = Ω(α).
The construction works in any dimension d, but for ease of exposition, we describe it
in two dimensions. See Figure 5 for illustration.
Figure 5: The lower bound construction showing ρ(S) = Ω(α).
Consider a square box B of size α in the standard position, namely, [0, √α] × [0, √α].
We can pack roughly α unit boxes in B, in a regular grid pattern; the number is ⌊√α⌋^2
to be exact. We convert each of these unit boxes into a polyhedral object of aspect ratio α,
by attaching two "wire" extensions at the two endpoints of its main diagonal. Specifically,
consider one such unit box u, the endpoints of whose main diagonal have coordinates
(a_1, a_2) and (b_1, b_2). The b endpoint of u is connected to the point (√α, √α) with a
Manhattan path, whose i-th edge is parallel to the positive i-coordinate axis and has length
√α - b_i. Similarly, the a endpoint of u is connected to the origin with a Manhattan path,
whose i-th edge is parallel to the negative i-coordinate axis and has length a_i. It is easy
to see that each unit box, together with the two wire extensions, forms a polyhedral object
with aspect ratio α. By a small perturbation, we can ensure that no two objects intersect.
The bounding boxes of each object pair intersect, however, and so we have at least
(⌊√α⌋^2 choose 2) bounding box intersections in B.
We can group our n objects into ⌊n/α⌋ groups, each group corresponding to an
α-size box as above. This gives us K_b ≥ ⌊n/α⌋ · (⌊√α⌋^2 choose 2) = Ω(nα).
On the other hand, K_o = 0, and so ρ(S) = Ω(α).
We next generalize this construction to establish a lower bound of Ω(α√σ), assuming
that ασ ≤ n. See Figure 6.
Figure 6: The lower bound construction, showing ρ(S) = Ω(α√σ).
We take a square box B′ of volume 4ασ. We divide the lower right quadrant of B′
into α subboxes of size σ. We take a copy of the construction of Figure 5, scale
it up by a factor of σ, and put it in place of the lower right quadrant of B′. We
extend the wires attached to each object to the corners c, d of B′. Thus, the smallest
enclosing box of each object is now exactly B′, and its aspect ratio is 4α. These are the
big objects. Next, we take the upper-left quadrant and divide it into σ subboxes of size α
each. At each α-size subbox, we place a copy of the construction in Figure 5. These
are the small objects.
Altogether we want X big objects and Y small objects, where X + Y = n. Since there
are a total of α locations for big objects, we superimpose X/α copies of the big object
at each location. Similarly, there are ασ locations for the small objects, so we superimpose
Y/(ασ) copies of the small object at each location. (This is where we need the condition
ασ ≤ n, since we want to ensure that each location receives at least one object.) Let us
now estimate bounds for K_b and K_o. The enclosing box of every big object intersects
the enclosing box of every small object, so we have K_b ≥ X · Y.
On the other hand, the only object pair intersections exist between objects assigned
to the same location. We therefore have
K_o ≤ α · (X/α choose 2) + ασ · (Y/(ασ) choose 2) ≤ X^2/(2α) + Y^2/(2ασ).
Thus, choosing X and Y appropriately, ρ(S) = K_b / max(n, K_o) ≥ c · α√σ,
for some constant c > 0. (The ratio α(1 + √σ)^2/n is bounded by a constant, since ασ ≤ n.)
Theorem 8.1 There exists a family S of n polyhedral objects with aspect ratio bound
α and scale factor σ such that ρ(S) = Ω(α√σ), assuming ασ ≤ n.
9 Applications and Concluding Remarks
Theorems 6.1 and 7.1 have two interesting consequences. The first is a theoretical
validation of the bounding box heuristic mentioned in Section 1. In practice, the
object families tend to have bounded aspect ratio and scale factor. Thus, the number
of extraneous box intersections is at most a constant factor of the number of actual
object-pair intersections. This result needs no assumption about the convexity of the
objects.
If the aspect ratio and scale factor grow with n, our theorem indicates their impact
on the efficiency of the heuristic. The degradation of the heuristic is smooth, and not
abrupt. Furthermore, the result suggests that the dependence on aspect ratio and
scale factor is not symmetric: the complexity grows linearly with α, but only as the
square root of σ. It is common in practice to decompose complex objects into simpler
parts. Our work suggests that for collision detection purposes, reducing aspect ratio
may have higher payoff than reducing scale factor. It would be interesting to verify
empirically how this strategy performs in practice.
The second consequence of our theorems is an output sensitive algorithm for reporting
intersections among polyhedra; the bound is the strongest for convex polyhedra in
dimensions 2 and 3. We are aware of only one non-trivial result for this
problem, which holds in two dimensions. Gupta et al. [14] give an O(n^{4/3} + K_o)-time
algorithm for reporting K_o pairs of intersecting convex polygons in the plane. The
problem is wide open in three and higher dimensions.
Our theorem leads to a significantly better result in two and three dimensions for
small aspect and scale bounds, and nearly optimal result for convex polyhedra. Given
n polyhedra in two or three dimensions, we can report all pairs whose bounding boxes
intersect in time O(n log n + K_b), where K_b is the number of intersecting
bounding box pairs. If the polyhedra are convex, then the narrow phase intersection
test can be performed in O(log^{d-1} m) time [6], assuming that all polyhedra have
been preprocessed in linear time; m is the maximum number of vertices in a polyhe-
dron. If the convex polyhedra have aspect ratio at most α and scale factor at most
σ, then by Theorem 7.1, the total running time of the algorithm is
O(n log n + α√σ log^2 σ (n + K_o) log^{d-1} m), for d = 2, 3. If α and σ are constants,
then the running time is O(n log n + (n + K_o) log^{d-1} m), which is nearly optimal.
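As a concrete illustration of this two-phase scheme, the sketch below first reports every pair of intersecting axis-parallel bounding boxes (a simple sweep that stands in for the asymptotically efficient structures cited above) and only then applies a caller-supplied narrow-phase test; the function names and the quadratic worst-case sweep are our own simplifications.

def boxes_intersect(a, b):
    """Axis-parallel box overlap test; a box is ((xmin, ymin), (xmax, ymax))."""
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def broad_phase(boxes):
    """Report all pairs of intersecting bounding boxes.

    A plain sweep over x-sorted boxes; an interval-tree or segment-tree based
    method would give the O(n log n + K_b) bound mentioned in the text.
    """
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0][0])
    active, pairs = [], []
    for i in order:
        xmin = boxes[i][0][0]
        active = [j for j in active if boxes[j][1][0] >= xmin]  # drop boxes that ended
        pairs.extend((j, i) for j in active if boxes_intersect(boxes[i], boxes[j]))
        active.append(i)
    return pairs

def collision_detection(objects, bounding_box, narrow_phase):
    """Two-phase pipeline: cheap box filter first, exact object test on survivors."""
    boxes = [bounding_box(obj) for obj in objects]
    return [(i, j) for i, j in broad_phase(boxes)
            if narrow_phase(objects[i], objects[j])]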
Finally, an obvious open problem suggested by our work is to close the gap between
the upper and lower bounds on ρ(S). We believe the correct bound is Θ(α√σ). Our
analysis is quite loose and the actual constants of proportionality are likely to be much
smaller than our estimates. It would be interesting to establish better constants both
theoretically and empirically.
Acknowledgement
The authors wish to thank Peter Shirley for his valuable comments on earlier versions
of the proof.
--R
An optimal algorithm for intersecting three-dimensional convex polyhedra
I-COLLIDE: An interactive and exact collision detection system for large-scale environments
Linear size binary space partitions for fat objects.
Realistic input models for geometric algorithms.
Computing the intersection-depth of polyhedra
Determining the separation of preprocessed polyhedra-a unified approach
A complete and efficient algorithm for the intersection of a general and a convex polyhedron.
A new approach to rectangle intersections (Parts I and II).
On the complexity of the union of fat objects in the plane.
Solving the Collision Detection Problem.
OBBTree: A hierarchical structure for rapid interference detection.
Detecting Intersection of a Rectangular Solid and a Convex Polyhe- dron
Efficient algorithms for counting and reporting pairwise intersection between convex polygons.
Cambridge University Press
Collision detection for fly- throughs in virtual environments
Morgan Kaufmann
Collision detection for interactive graphics applications.
Robot Motion Planning.
Fat triangles determine linearly many holes.
An Image-Based Approach to Three-Dimensional Computer Graph- ics
Data Structures and Algorithms 3: Multi-dimensional Searching and Computational Geometry
Collision Detection and Response for Computer Animation.
Computational Geometry: An Introduction.
Efficient collision detection for moving polyhedra.
A Simple and Efficient Method for Accurate Collision Detection among Deformable Objects in Arbitrary Motion.
Efficient algorithms for exact motion planning amidst fat obstacles.
--TR
Data structures and algorithms 3: multi-dimensional searching and computational geometry
Geometric and solid modeling: an introduction
Determining the separation of preprocessed polyhedra: a unified approach
An optimal algorithm for intersecting three-dimensional convex polyhedra
Fat Triangles Determine Linearly Many Holes
Solving the Collision Detection Problem
Spheres, molecules, and hidden surface removal
Detecting intersection of a rectangular solid and a convex polyhedron
Computer graphics (2nd ed. in C)
Efficient collision detection for moving polyhedra
OBBTree
Collision detection for fly-throughs in virtual environments
On the complexity of the union of fat objects in the plane
Realistic input models for geometric algorithms
An image-based approach to three-dimensional computer graphics
Analysis of a bounding box heuristic for object intersection
Robot Motion Planning
Collision Detection for Interactive Graphics Applications
Efficient Collision Detection Using Bounding Volume Hierarchies of k-DOPs
Linear Size Binary Space Partitions for Fat Objects
--CTR
Yunhong Zhou , Subhash Suri, Algorithms for minimum volume enclosing simplex in
R
Pankaj K. Agarwal , Mark de Berg , Sariel Har-Peled , Mark H. Overmars , Micha Sharir , Jan Vahrenhold, Reporting intersecting pairs of convex polytopes in two and three dimensions, Computational Geometry: Theory and Applications, v.23 n.2, p.195-207, September 2002
Orion Sky Lawlor , Laxmikant V. Kale, A voxel-based parallel collision detection algorithm, Proceedings of the 16th international conference on Supercomputing, June 22-26, 2002, New York, New York, USA | collison detection;bounding boxes;aspect ratio |
336662 | Content-based book recommending using learning for text categorization. | Recommender systems improve access to relevant products and information by making personalized suggestions based on previous examples of a user's likes and dislikes. Most existing recommender systems use collaborative filtering methods that base recommendations on other users' preferences. By contrast,content-based methods use information about an item itself to make suggestions.This approach has the advantage of being able to recommend previously unrated items to users with unique interests and to provide explanations for its recommendations. We describe a content-based book recommending system that utilizes information extraction and a machine-learning algorithm for text categorization. Initial experimental results demonstrate that this approach can produce accurate recommendations. | INTRODUCTION
There is a growing interest in recommender systems that suggest
music, films, books, and other products and services to
users based on examples of their likes and dislikes [19, 26,
11]. A number of successful startup companies like Fire-
fly, Net Perceptions, and LikeMinds have formed to provide
recommending technology. On-line book stores like Amazon
and BarnesAndNoble have popular recommendation ser-
vices, and many libraries have a long history of providing
reader's advisory services [2, 21]. Such services are important
since readers' preferences are often complex and not
readily reduced to keywords or standard subject categories,
but rather best illustrated by example. Digital libraries should
be able to build on this tradition of assisting readers by providing
cost-effective, informed, and personalized automated
recommendations for their patrons.
Existing recommender systems almost exclusively utilize a
form of computerized matchmaking called collaborative or
social filtering. The system maintains a database of the preferences
of individual users, finds other users whose known
preferences correlate significantly with a given patron, and
recommends to a person other items enjoyed by their matched
patrons. This approach assumes that a given user's tastes are
generally the same as another user of the system and that a
sufficient number of user ratings are available. Items that
have not been rated by a sufficient number of users cannot
be effectively recommended. Unfortunately, statistics on library
use indicate that most books are utilized by very few
patrons [12]. Therefore, collaborative approaches naturally
tend to recommend popular titles, perpetuating homogeneity
in reading choices. Also, since significant information
about other users is required to make recommendations, this
approach raises concerns about privacy and access to proprietary
customer data.
Learning individualized profiles from descriptions of examples
(content-based recommending [3]), on the other hand,
allows a system to uniquely characterize each patron without
having to match their interests to someone else's. Items
are recommended based on information about the item itself
rather than on the preferences of other users. This also allows
for the possibility of providing explanations that list content
features that caused an item to be recommended; potentially
giving readers confidence in the system's recommendations
and insight into their own preferences. Finally, a content-based
approach can allow users to provide initial subject information
to aid the system.
Machine learning for text-categorization has been applied to
content-based recommending of web pages [25] and newsgroup
messages [15]; however, to our knowledge has not
previously been applied to book recommending. We have
been exploring content-based book recommending by applying
automated text-categorization methods to semi-structured
text extracted from the web. Our current prototype system,
LIBRA (Learning Intelligent Book Recommending Agent),
uses a database of book information extracted from web pages
at Amazon.com. Users provide 1-10 ratings for a selected set
of training books; the system then learns a profile of the user
using a Bayesian learning algorithm and produces a ranked
list of the most recommended additional titles from the sys-
tem's catalog.
As evidence for the promise of this approach, we present initial
experimental results on several data sets of books randomly
selected from particular genres such as mystery, sci-
ence, literary fiction, and science fiction and rated by different
users. We use standard experimental methodology from
machine learning and present results for several evaluation
metrics on independent test data including rank correlation
coefficient and average rating of top-ranked books.
The remainder of the paper is organized as follows. Section
2 provides an overview of the system including the algorithm
used to learn user profiles. Section 3 presents results of our
initial experimental evaluation of the system. Section 4 discusses
topics for further research, and section 5 presents our
conclusions on the advantages and promise of content-based
book recommending.
SYSTEM DESCRIPTION
Extracting Information and Building a Database
First, an Amazon subject search is performed to obtain a
list of book-description URL's of broadly relevant titles. LIBRA
then downloads each of these pages and uses a simple
pattern-based information-extraction system to extract data
about each title. Information extraction (IE) is the task of locating
specific pieces of information from a document, thereby
obtaining useful structured data from unstructured text [16,
9]. Specifically, it involves finding a set of substrings from
the document, called fillers, for each of a set of specified
slots. When applied to web pages instead of natural language
text, such an extractor is sometimes called a wrapper [14].
The current slots utilized by the recommender are: title, au-
thors, synopses, published reviews, customer comments, related
authors, related titles, and subject terms. Amazon produces
the information about related authors and titles using
collaborative methods; however, LIBRA simply treats them
as additional content about the book. Only books that have at
least one synopsis, review or customer comment are retained
as having adequate content information. A number of other
slots are also extracted (e.g. publisher, date, ISBN, price,
etc.) but are currently not used by the recommender. We
have initially assembled databases for literary fiction (3,061
titles), science fiction (3,813 titles), mystery (7,285 titles),
and science (6,177 titles).
Since the layout of Amazon's automatically generated pages
is quite regular, a fairly simple extraction system is suffi-
cient. LIBRA's extractor employs a simple pattern matcher
that uses pre-filler, filler, and post-filler patterns for each slot,
as described by [6]. In other applications, more sophisticated
information extraction methods and inductive learning of extraction
rules might be useful [7].
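The following sketch illustrates what such a pattern-based wrapper can look like; the HTML markers, slot names, and regular expressions are hypothetical placeholders, not the patterns actually used by LIBRA.

import re

# One (pre-filler, post-filler) delimiter pair per slot.  The markers below are
# made-up placeholders standing in for whatever the product pages used.
SLOT_PATTERNS = {
    "title":    (r'<b class="title">', r'</b>'),
    "authors":  (r'<span class="author">', r'</span>'),
    "synopses": (r'<div class="synopsis">', r'</div>'),
}

def extract_slots(page, patterns=SLOT_PATTERNS):
    """Return {slot: [filler strings]} by matching pre/post patterns in the page."""
    record = {}
    for slot, (pre, post) in patterns.items():
        regex = re.compile(pre + r"(.*?)" + post, re.DOTALL)
        record[slot] = [m.strip() for m in regex.findall(page)]
    return record

page = '<b class="title">The Fabric of Reality</b><span class="author">David Deutsch</span>'
print(extract_slots(page))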
The text in each slot is then processed into an unordered bag
of words (tokens) and the examples represented as a vector
of bags of words (one bag for each slot). A book's title and
authors are also added to its own related-title and related-
author slots, since a book is obviously "related" to itself, and
this allows overlap in these slots with books listed as related
to it. Some minor additions include the removal of a small list
of stop-words, the preprocessing of author names into unique
tokens of the form first-initial last-name and the grouping of
the words associated with synopses, published reviews, and
customer comments all into one bag (called "words").
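In code, this representation is essentially a dictionary of word-count bags, one per slot. The sketch below is our own illustration (the stop-word list and field names are placeholders) of the tokenization, the first-initial last-name author tokens, and the pooling of synopses, reviews, and comments into a single "words" bag.

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in"}   # small illustrative list

def tokenize(text):
    return [w for w in re.findall(r"[a-z0-9'-]+", text.lower()) if w not in STOP_WORDS]

def author_token(name):
    """Collapse an author name into a unique first-initial last-name token."""
    parts = name.split()
    return (parts[0][0] + "_" + parts[-1]).lower() if len(parts) > 1 else parts[0].lower()

def book_to_bags(raw):
    """Turn an extracted record into a vector of bags of words (one bag per slot)."""
    return {
        # synopses, published reviews and customer comments are pooled into one bag
        "words": Counter(tokenize(" ".join(raw.get("synopses", []) +
                                           raw.get("reviews", []) +
                                           raw.get("comments", [])))),
        "subjects": Counter(tokenize(" ".join(raw.get("subjects", [])))),
        # a book is also "related" to itself, so its own title and authors are added
        "related-titles": Counter(tokenize(" ".join(raw.get("related_titles", []) +
                                                    [raw.get("title", "")]))),
        "related-authors": Counter(author_token(a) for a in
                                   raw.get("related_authors", []) + raw.get("authors", [])),
    }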
Learning a Profile
Next, the user selects and rates a set of training books. By
searching for particular authors or titles, the user can avoid
scanning the entire database or picking selections at random.
The user is asked to provide a discrete 1-10 rating for each
selected title.
The inductive learner currently employed by LIBRA is a bag-
of-words naive Bayesian text classifier [22] extended to handle
a vector of bags rather than a single bag. Recent experimental
results [10, 20] indicate that this relatively simple approach
to text categorization performs as well or better than
many competing methods. LIBRA does not attempt to predict
the exact numerical rating of a title, but rather just a total
ordering (ranking) of titles in order of preference. This task
is then recast as a probabilistic binary categorization problem
of predicting the probability that a book would be rated
as positive rather than negative, where a user rating of 1-5
is interpreted as negative and 6-10 as positive. As described
below, the exact numerical ratings of the training examples
are used to weight the training examples when estimating the
parameters of the model.
Specifically, we employ a multinomial text model [20], in
which a document is modeled as an ordered sequence of
word events drawn from the same vocabulary, V . The "naive
Bayes" assumption states that the probability of each word
event is dependent on the document class but independent of
the word's context and position. For each class, c_j, and word or token, w_t ∈ V,
the probability P(w_t | c_j) must be estimated from the training data. Then the posterior
probability of each class given a document, D, is computed using Bayes rule:
P(c_j | D) = (P(c_j) / P(D)) · ∏_{i=1}^{|D|} P(a_i | c_j)
where a_i is the i-th word in the document, and |D| is the
length of the document in words. Since for any given docu-
ment, the prior P (D) is a constant, this factor can be ignored
if all that is desired is a ranking rather than a probability es-
timate. A ranking is produced by sorting documents by their
odds ratio, P(c_1 | D) / P(c_0 | D), where c_1 represents the positive
class and c_0 represents the negative class. An example
is classified as positive if the odds are greater than 1, and
negative otherwise.
In our case, since books are represented as a vector of "documents," d_m, one for
each slot (where s_m denotes the m-th slot), the probability of each word given the
category and the slot, P(w_t | c_j, s_m), must be estimated and the posterior category
probabilities for a book, B, computed using:
P(c_j | B) = (P(c_j) / P(B)) · ∏_{m=1}^{S} ∏_{i=1}^{|d_m|} P(a_{mi} | c_j, s_m)
where S is the number of slots and a_{mi} is the i-th word in the m-th slot.
Parameters are estimated from the training examples as follows. Each of the N training
books, B_e (1 ≤ e ≤ N), is given two real weights, 0 ≤ α_ej ≤ 1, based on scaling its user
rating r (1 ≤ r ≤ 10): a positive weight, α_e1 = (r - 1)/9, and a negative weight
α_e0 = 1 - α_e1. If a word appears n times in an example B_e, it is counted as occurring
α_e1·n times in a positive example and α_e0·n times in a negative example. The
model parameters are therefore estimated as follows:
P(c_j) = Σ_e α_ej / N    (3)
P(w_k | c_j, s_m) = Σ_e α_ej · n_kem / L(c_j, s_m)    (4)
where n_kem is the count of the number of times word w_k appears in example B_e in slot
s_m, and L(c_j, s_m) = Σ_e Σ_k α_ej · n_kem denotes the total weighted length of the
documents in category c_j and slot s_m.
These parameters are "smoothed" using Laplace estimates to
avoid zero probability estimates for words that do not appear
in the limited training sample by redistributing some of
the probability mass to these items using the method recommended
in [13]. Finally, calculation with logarithms of probabilities
is used to avoid underflow.
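The estimation just described fits in a few lines of code. The following sketch is a simplified rendering (assuming the (r - 1)/9 rating-to-weight scaling stated above and a caller-supplied per-slot vocabulary size for the Laplace correction), not the actual LIBRA implementation.

import math
from collections import defaultdict

def train(books, ratings, vocab_sizes, slots):
    """Weighted multinomial naive Bayes over a vector of bags.

    books[e][slot] is a word -> count bag, ratings[e] is the 1-10 user rating.
    Returns log class priors and a function giving log P(word | class, slot).
    """
    N = len(books)
    prior = [0.0, 0.0]                                # weighted class mass (0 = neg, 1 = pos)
    count = [defaultdict(float), defaultdict(float)]  # (slot, word) -> weighted count
    length = [defaultdict(float), defaultdict(float)] # slot -> weighted total length

    for bags, r in zip(books, ratings):
        alpha = [1.0 - (r - 1) / 9.0, (r - 1) / 9.0]  # negative / positive example weights
        for j in (0, 1):
            prior[j] += alpha[j]
            for slot in slots:
                for word, n in bags[slot].items():
                    count[j][(slot, word)] += alpha[j] * n
                    length[j][slot] += alpha[j] * n

    log_prior = [math.log((prior[j] + 1.0) / (N + 2.0)) for j in (0, 1)]  # smoothed

    def log_cond(word, slot, j):                      # Laplace-smoothed word probability
        return math.log((count[j][(slot, word)] + 1.0) /
                        (length[j][slot] + vocab_sizes[slot]))

    return log_prior, log_cond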
The computational complexity of the resulting training (test-
ing) algorithm is linear in the size of the training (testing)
data. Empirically, the system is quite efficient. In the experiments
on the LIT1 data described below, the current Lisp
implementation running on a Sun Ultra 1 trained on 20 examples
in an average of 0.4 seconds and on 840 examples in
Slot Word Strength
WORDS ZUBRIN 9.85
WORDS SMOLIN 9.39
WORDS TREFIL 8.77
WORDS DOT 8.67
WORDS ALH 7.97
WORDS MANNED 7.97
RELATED-TITLES SETTLE 7.91
RELATED-TITLES CASE 7.91
RELATED-AUTHORS A RADFORD 7.63
WORDS LEE 7.57
WORDS MORAVEC 7.57
WORDS WAGNER 7.57
RELATED-TITLES CONNECTIONIST 7.51
RELATED-TITLES BELOW 7.51
Table 1: Sample Positive Profile Features
an average of 11.5 seconds, and probabilistically categorized
new test examples at an average rate of about 200 books per
second. An optimized implementation could no doubt significantly
improve performance even further.
A profile can be partially illustrated by listing the features
most indicative of a positive or negative rating. Table 1 presents
the top 20 features for a sample profile learned for recommending
science books. Strength measures how much more
likely a word in a slot is to appear in a positively rated book
than a negatively rated one, computed as strength(w_k, s_m) = log( P(w_k | c_1, s_m) / P(w_k | c_0, s_m) ).
Producing, Explaining, and Revising Recommendations
Once a profile is learned, it is used to predict the preferred
ranking of the remaining books based on posterior probability
of a positive categorization, and the top-scoring recommendations
are presented to the user.
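Ranking then reduces to one pass over the catalog accumulating log odds, for example (continuing the hypothetical helpers of the previous sketch):

def log_odds(bags, log_prior, log_cond, slots):
    """log P(c1 | B) - log P(c0 | B); the common P(B) term cancels in the difference."""
    score = log_prior[1] - log_prior[0]
    for slot in slots:
        for word, n in bags[slot].items():
            score += n * (log_cond(word, slot, 1) - log_cond(word, slot, 0))
    return score

def recommend(catalog, log_prior, log_cond, slots, top=10):
    """Return the top unrated books sorted by decreasing log odds."""
    scored = [(log_odds(b, log_prior, log_cond, slots), i) for i, b in enumerate(catalog)]
    return sorted(scored, reverse=True)[:top]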
The system also has a limited ability to "explain" its recommendations
by listing the features that most contributed
to its high rank. For example, given the profile illustrated
above, LIBRA presented the explanation shown in Table 2.
The strength of a cue in this case is multiplied by the number
of times it appears in the description in order to fully
indicate its influence on the ranking. The positiveness of a
feature can in turn be explained by listing the user's training
examples that most influenced its strength, as illustrated in
Table
3 where "Count" gives the number of times the feature
appeared in the description of the rated book.
After reviewing the recommendations (and perhaps disrec-
ommendations), the user may assign their own rating to examples
they believe to be incorrectly ranked and retrain the
The Fabric of Reality:
The Science of Parallel Universes- And Its Implications
by David Deutsch recommended because:
Slot Word Strength
WORDS MULTIVERSE 75.12
WORDS UNIVERSES 25.08
WORDS REALITY 22.96
WORDS UNIVERSE 15.55
WORDS QUANTUM 14.54
WORDS INTELLECT 13.86
WORDS OKAY 13.75
WORDS RESERVATIONS 11.56
WORDS DENIES 11.56
WORDS EVOLUTION 11.02
WORDS WORLDS 10.10
WORDS SMOLIN 9.39
WORDS ONE 8.50
WORDS IDEAS 8.35
WORDS THEORY 8.28
WORDS IDEA 6.96
WORDS IMPLY 6.47
WORDS GENIUSES 6.47
Table 2: Sample Recommendation Explanation
The word UNIVERSES is positive due to your ratings:
Title Rating Count
The Life of the
Before the Beginning : Our Universe and Others 8 7
Unveiling the Edge of Time
Black Holes : A Traveler's Guide 9 3
The Inflationary Universe 9 2
Table 3: Sample Feature Explanation
system to produce improved recommendations. As with relevance
feedback in information retrieval [27], this cycle can
be repeated several times in order to produce the best results.
Also, as new examples are provided, the system can track any
change in a user's preferences and alter its recommendations
based on the additional information.
Methodology
Data Collection Several data sets were assembled to evaluate
LIBRA. The first two were based on the first 3,061
adequate-information titles (books with at least one abstract,
review, or customer comment) returned for the subject search
"literature fiction." Two separate sets were randomly selected
from this dataset, one with 936 books and one with 935, and
rated by two different users. These sets will be called LIT1
and LIT2, respectively. The remaining sets were based on
all of the adequate-information Amazon titles for "mystery"
(7,285 titles), "science" (6,177 titles), and "science fiction"
(3,813 titles). From each of these sets, 500 titles were chosen
at random and rated by a user (the same user rated both the
science and science fiction books). These sets will be called
Data   Number Exs   Avg. Rating   % Positive (r > 5)
MYST   500          7.00          74.4
SCI    500          4.15          31.2
Table 4: Data Information
Table 5: Data Rating Distributions
MYST, SCI, and SF, respectively.
In order to present a quantitative picture of performance on
a realistic sample, books to be rated were selected at ran-
dom. However, this means that many books may not have
been familiar to the user, in which case, the user was asked
to supply a rating based on reviewing the Amazon page describing
the book. Table 4 presents some statistics about the
data and Table 5 presents the number of books in each rating
category. Note that overall the data sets have quite different
ratings distributions.
Performance Evaluation To test the system, we performed
10-fold cross-validation, in which each data set is randomly
split into 10 equal segments and results are averaged over 10
trials, each time leaving a separate segment out for
independent testing, and training the system on the remaining
data [22]. In order to observe performance given varying
amounts of training data, learning curves were generated by
testing the system after training on increasing subsets of the
overall training data. A number of metrics were used to measure
performance on the novel test data, including:
ffl Classification accuracy (Acc): The percentage of examples
correctly classified as positive or negative.
ffl Recall (Rec): The percentage of positive examples classified
as positive.
ffl Precision (Pr): The percentage of examples classified as
positive which are positive.
ffl Precision at Top 3 (Pr3): The percentage of the 3 top ranked
examples which are positive.
ffl Precision at Top 10 (Pr10): The percentage of the 10 top
ranked examples which are positive.
ffl F-Measure (F): A weighted average of precision and recall
frequently used in information retrieval: F = 2 · Pr · Rec / (Pr + Rec).
Data N Acc Rec Pr Pr3 Pr10 F Rt3 Rt10 r s
MYST 100 86.6 95.2 87.2 93.3 94.0 90.9 8.70 8.69 0.55
MYST 450 85.8 93.2 88.1 96.7 98.0 90.5 8.90 8.97 0.61
SCI 100 81.8 74.4 72.2 93.3 83.0 72.3 8.50 7.29 0.65
SCI 450 85.2 79.1 76.8 93.3 89.0 77.2 8.57 7.71 0.71
SF 100 76.4 65.7 46.2 80.0 56.0 52.4 7.00 5.75 0.40
Table 6: Summary of Results
ffl Rating of Top 3 (Rt3): The average user rating assigned to
the 3 top ranked examples.
ffl Rating of Top 10 (Rt10): The average user rating assigned
to the 10 top ranked examples.
ffl Rank Correlation (r_s): Spearman's rank correlation coefficient
between the system's ranking and that imposed by the
user's ratings; ties are handled using the
method recommended by [1].
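For concreteness, precision-at-k, average rating at k, and a Spearman-style rank correlation can be computed as in the sketch below (our own illustration; the exact tie handling recommended by [1] is only approximated here by average ranks).

def precision_at_k(ranked_ratings, k):
    """Fraction of the k top-ranked test books that the user rated positively (> 5)."""
    top = ranked_ratings[:k]
    return sum(1 for r in top if r > 5) / float(len(top))

def average_rating_at_k(ranked_ratings, k):
    top = ranked_ratings[:k]
    return sum(top) / float(len(top))

def rank_correlation(system_scores, user_ratings):
    """Spearman correlation: Pearson correlation computed on (average) ranks."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r, i = [0.0] * len(values), 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            for t in range(i, j + 1):          # tied items share the average rank
                r[order[t]] = (i + j) / 2.0 + 1.0
            i = j + 1
        return r
    rx, ry = ranks(system_scores), ranks(user_ratings)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0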
The top 3 and top 10 metrics are given since many users will
be primarily interested in getting a few top-ranked recom-
mendations. Rank correlation gives a good overall picture of
how the system's continuous ranking of books agrees with
the user's, without requiring that the system actually predict
the numerical rating score assigned by the user. A correlation
coefficient of 0.3 to 0.6 is generally considered "moderate"
and above 0.6 is considered "strong."
Basic Results
The results are summarized in Table 6, where N represents
the number of training examples utilized and results are shown
for a number of representative points along the learning curve.
Overall, the results are quite encouraging even when the system
is given relatively small training sets. The SF data set is
clearly the most difficult since there are very few highly-rated
books.
The "top n" metrics are perhaps the most relevant to many
users. Consider precision at top 3, which is fairly consistently
in the 90% range after only 20 training examples (the
exceptions are LIT1 until 70 examples 1 and SF until 450
examples). Therefore, LIBRA's top recommendations are
highly likely to be viewed positively by the user. Note that
the "% Positive" column in Table 4 gives the probability that
a randomly chosen example from a given data set will be
positively rated. Therefore, for every data set, the top 3 and
top 10 recommendations are always substantially more likely
than random to be rated positively, even after only 5 training
examples.
1 References to performance at 70 and 300 examples are based on learning
curve data not included in the summary in Table 6.
Figure 1: LIT1 rank correlation (correlation coefficient vs. training examples) for LIBRA and LIBRA-NR
Considering the average rating of the top 3 recommenda-
tions, it is fairly consistently above an 8 after only 20 training
examples (the exceptions again are LIT1 until 100 examples
and SF). For every data set, the top 3 and top 10 recommendations
are always rated substantially higher than a randomly
selected example (cf. the average rating from Table 4).
Looking at the rank correlation, except for SF, there is at
least a moderate correlation (r_s ≥ 0.3) after only 10 exam-
ples, and SF exhibits a moderate correlation after 40 exam-
ples. This becomes a strong correlation (r_s ≥ 0.6) for LIT1
after only 20 examples, for LIT2 after 40 examples, for SCI
after 70 examples, for MYST after 300 examples, and for SF
after 450 examples.
Results on the Role of Collaborative Content
Since collaborative and content-based approaches to recommending
have somewhat complementary strengths and weak-
nesses, an interesting question that has already attracted some
initial attention [3, 4] is whether they can be combined to
produce even better results. Since LIBRA exploits content
about related authors and titles that Amazon produces using
collaborative methods, an interesting question is whether this
collaborative content actually helps its performance. To examine
this issue, we conducted an "ablation" study in which
the slots for related authors and related titles were removed
from LIBRA's representation of book content. The resulting
system, called LIBRA-NR, was compared to the original one
using the same 10-fold training and test sets. The statistical
significance of any differences in performance between
the two systems was evaluated using a 1-tailed paired t-test
requiring a significance level of p < 0.05.
Overall, the results indicate that the use of collaborative content
has a significant positive effect. Figures 1, 2, and
3, show sample learning curves for different important metrics
for a few data sets. For the LIT1 rank-correlation results
shown in Figure 1, there is a consistent, statistically-
significant difference in performance from 20 examples onward.
Figure 2: MYST Precision at Top 10 (LIBRA vs. LIBRA-NR)
Figure 3: SF Average Rating of Top 3 (LIBRA vs. LIBRA-NR)
For the MYST results on precision at top 10 shown in
Figure
2, there is a consistent, statistically-significant difference
in performance from 40 examples onward. For the SF
results on average rating of the top 3, there is a statistically-
significant difference at 10, 100, 150, 200, and 450 examples.
The results shown are some of the most consistent differences
for each of these metrics; however, all of the datasets
demonstrate some significant advantage of using collaborative
content according to one or more metrics. Therefore, information
obtained from collaborative methods can be used
to improve content-based recommending, even when the actual
user data underlying the collaborative method is unavailable
due to privacy or proprietary concerns.
We are currently developing a web-based interface so that
LIBRA can be experimentally evaluated in practical use with
a larger body of users. We plan to conduct a study in which
each user selects their own training examples, obtains recom-
mendations, and provides final informed ratings after reading
one or more selected books.
Another planned experiment is comparing LIBRA's content-based
approach to a standard collaborative method. Given
the constrained interfaces provided by existing on-line rec-
ommenders, and the inaccessibility of the underlying proprietary
user data, conducting a controlled experiment using the
exact same training examples and book databases is difficult.
However, users could be allowed to use both systems and
evaluate and compare their final recommendations. 2
Since many users are reluctant to rate large number of training
examples, various machine-learning techniques for maximizing
the utility of small training sets should be utilized.
One approach is to use unsupervised learning over unrated
book descriptions to improve supervised learning from a smaller
number of rated examples. A successful method for doing
this in text categorization is presented in [23]. Another approach
is active learning, in which examples are acquired
incrementally and the system attempts to use what it has already
learned to limit training by selecting only the most
informative new examples for the user to rate [8]. Specific
techniques for applying this to text categorization have been
developed and shown to significantly reduce the quantity of
labeled examples required [17, 18].
A slightly different approach is to advise users on easy and
productive strategies for selecting good training examples
themselves. We have found that one effective approach is to
first provide a small number of highly rated examples (which
are presumably easy for users to generate), running the system
to generate initial recommendations, reviewing the top
recommendations for obviously bad items, providing low ratings
for these examples, and retraining the system to obtain
new recommendations. We intend to conduct experiments on
the existing data sets evaluating such strategies for selecting
training examples.
Studying additional ways of combining content-based and
collaborative recommending is particularly important. The
use of collaborative content in LIBRA was found to be use-
ful, and if significant data bases of both user ratings and item
content are available, both of these sources of information
could contribute to better recommendations [3, 4]. One additional
approach is to automatically add the related books
of each rated book as additional training examples with the
same (or similar) rating, thereby using collaborative information
to expand the training examples available for content-based
recommending.
A list of additional topics for investigation includes the following:
ffl Allowing a user to initially provide keywords that are of
known interest (or disinterest), and incorporating this information
into learned profiles by biasing the parameter esti-
2 Amazon has already made significantly more income from the first author
based on recommendations provided by LIBRA than those provided by
its own recommender system; however, this is hardly a rigorous, unbiased
comparison.
mates for these words [24].
ffl Comparing different text-categorization algorithms: In addition
to more sophisticated Bayesian methods, neural-network
and case-based methods could be explored.
ffl Combining content extracted from multiple sources: For
example, combining information about a title from Amazon,
BarnesAndNoble, on-line library catalogs, etc.
ffl Using full-text as content: A digital library should be able
to efficiently utilize the complete on-line text, as well as abstracted
summaries and reviews, to recommend items.
CONCLUSIONS
The ability to recommend books and other information sources
to users based on their general interests rather than specific
enquiries will be an important service of digital libraries.
Unlike collaborative filtering, content-based recommending
holds the promise of being able to effectively recommend unrated
items and to provide quality recommendations to users
with unique, individual tastes. LIBRA is an initial content-based
book recommender which uses a simple Bayesian learning
algorithm and information about books extracted from
the web to recommend titles based on training examples supplied
by an individual user. Initial experiments indicate that
this approach can efficiently provide accurate recommendations
in the absence of any information about other users.
In many ways, collaborative and content-based approaches
provide complementary capabilities. Collaborative methods
are best at recommending reasonably well-known items to
users in a communities of similar tastes when sufficient user
data is available but effective content information is not. Content-based
methods are best at recommending unpopular items to
users with unique tastes when sufficient other user data is
unavailable but effective content information is easy to ob-
tain. Consequently, as discussed above, methods for integrating
these approaches will perhaps provide the best of both
worlds.
Finally, we believe that methods and ideas developed in machine
learning research [22] are particularly useful for content-based
recommending, filtering, and categorization, as well as
for integrating with collaborative approaches [5, 4]. Given
the future potential importance of such services to digital li-
braries, we look forward to an increasing application of machine
learning techniques to these challenging problems.
ACKNOWLEDGEMENTS
Thanks to Paul Bennett for contributing ideas, software, and
data, and to Tina Bennett for contributing data. This research
was partially supported by the National Science Foundation
through grant IRI-9704943.
--R
The New Statistical Analysis of Data.
Laying a firm foundation: Administrative support for readers' advisory services.
Recommendation as classification: Using social and content-based information in recommendation
Learning collaborative information filters.
Relational learning of pattern-match rules for information extraction
Empirical methods in information extrac- tion
Improving generalization with active learning.
A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization.
Papers from the AAAI
Improving simple Bayes.
Wrapper induction for information extraction.
Learning to filter netnews.
A performance evaluation of text-analysis technologies
Heterogeneous uncertainty sampling for supervised learning.
Active learning with committees for text categorization.
Agents that reduce work and information overload.
A comparison of event models for naive Bayes text classification.
Developing Readers' Advisory Services: Concepts and Committ- ments
Machine Learning.
Learning to classify text from labeled and unlabeled documents.
The identification of interesting web sites.
Improving retrieval performance by relevance feedback.
--TR
User models: theory, method, and practice
Learning internal representations by error propagation
An evaluation of text analysis technologies
Using collaborative filtering to weave an information tapestry
The Utility of Knowledge in Inductive Learning
C4.5: programs for machine learning
Agents that reduce work and information overload
Theory refinement combining analytical and empirical methods
Improving Generalization with Active Learning
GroupLens
Knowledge-based artificial neural networks
Recommender systems
Fab
Feature selection, perception learning, and a usability case study for text categorization
Learning and Revising User Profiles
Comparing feature-based and clique-based user models for movie selection
Recommendation as classification
Learning to classify text from labeled and unlabeled documents
A re-examination of text categorization methods
Relational learning of pattern-match rules for information extraction
Combining collaborative filtering with personal agents for better recommendations
An Evaluation of Statistical Approaches to Text Categorization
Machine Learning
A Comparative Study on Feature Selection in Text Categorization
A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization
Learning Collaborative Information Filters
--CTR
Gary Geisler , David McArthur , Sarah Giersch, Developing recommendation services for a digital library with uncertain and changing data, Proceedings of the 1st ACM/IEEE-CS joint conference on Digital libraries, p.199-200, January 2001, Roanoke, Virginia, United States
Kai Yu , Anton Schwaighofer , Volker Tresp , Xiaowei Xu , Hans-Peter Kriegel, Probabilistic Memory-Based Collaborative Filtering, IEEE Transactions on Knowledge and Data Engineering, v.16 n.1, p.56-69, January 2004
Fiona Y. Chan , William K. Cheung, Customizing digital storefronts using the knowledge-based approach, Information management: support systems & multimedia technology, Idea Group Publishing, Hershey, PA,
Tomoharu Iwata , Kazumi Saito , Takeshi Yamada, Recommendation method for extending subscription periods, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Junichi Iijima , Sho Ho, Common structure and properties of filtering systems, Electronic Commerce Research and Applications, v.6 n.2, p.139-145, Summer 2007
Ana Gil , Francisco Garca, E-commerce recommenders: powerful tools for E-business, Crossroads, v.10 n.2, p.6-6, 31 August 2004
Kai Yu , Volker Tresp , Shipeng Yu, A nonparametric hierarchical bayesian framework for information filtering, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Hahn-Ming Lee , Chi-Chun Huang , Tzu-Ting Kao, Personalized Course Navigation Based on Grey Relational Analysis, Applied Intelligence, v.22 n.2, p.83-92, March 2005
Wei-Po Lee , Cheng-Che Lu, Customising WAP-based information services on mobile networks, Personal and Ubiquitous Computing, v.7 n.6, p.321-330, December
Justin Basilico , Thomas Hofmann, Unifying collaborative and content-based filtering, Proceedings of the twenty-first international conference on Machine learning, p.9, July 04-08, 2004, Banff, Alberta, Canada
Prem Melville , Raymod J. Mooney , Ramadass Nagarajan, Content-boosted collaborative filtering for improved recommendations, Eighteenth national conference on Artificial intelligence, p.187-192, July 28-August 01, 2002, Edmonton, Alberta, Canada
Hung-Chen Chen , Arbee L. P. Chen, A music recommendation system based on music and user grouping, Journal of Intelligent Information Systems, v.24 n.2, p.113-132, May 2005
Andrew I. Schein , Alexandrin Popescul , Lyle H. Ungar , David M. Pennock, Methods and metrics for cold-start recommendations, Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, August 11-15, 2002, Tampere, Finland
Zan Huang , Wingyan Chung , Thian-Huat Ong , Hsinchun Chen, A graph-based recommender system for digital library, Proceedings of the 2nd ACM/IEEE-CS joint conference on Digital libraries, July 14-18, 2002, Portland, Oregon, USA
Daniel M. Fleder , Kartik Hosanagar, Recommender systems and their impact on sales diversity, Proceedings of the 8th ACM conference on Electronic commerce, June 11-15, 2007, San Diego, California, USA
Andrew I. Schein , Alexandrin Popescul , Lyle H. Ungar , David M. Pennock, CROC: A New Evaluation Criterion for Recommender Systems, Electronic Commerce Research, v.5 n.1, p.51-74, January 2005
Yiyang Zhang , Jianxin (Roger) Jiao, An associative classification-based recommendation system for personalization in B2C e-commerce applications, Expert Systems with Applications: An International Journal, v.33 n.2, p.357-367, August, 2007
Zan Huang , Wingyan Chung , Hsinchun Chen, A graph model for E-commerce recommender systems, Journal of the American Society for Information Science and Technology, v.55 n.3, p.259-274, February 2004
George Lekakos , George M. Giaglis, Improving the prediction accuracy of recommendation algorithms: Approaches anchored on human factors, Interacting with Computers, v.18 n.3, p.410-431, May, 2006
Marco Degemmis , Pasquale Lops , Giovanni Semeraro, A content-collaborative recommender that exploits WordNet-based user profiles for neighborhood formation, User Modeling and User-Adapted Interaction, v.17 n.3, p.217-255, July 2007
Przemysaw Kazienko , Micha Adamski, AdROSA-Adaptive personalization of web advertising, Information Sciences: an International Journal, v.177 n.11, p.2269-2295, June, 2007
Raymond J. Mooney , Razvan Bunescu, Mining knowledge from text using information extraction, ACM SIGKDD Explorations Newsletter, v.7 n.1, p.3-10, June 2005
Michael Bieber , Douglas Engelbart , Richard Furuta , Starr Roxanne Hiltz , John Noll , Jennifer Preece , Edward A. Stohr , Murray Turoff , Bartel Van De Walle, Toward Virtual Community Knowledge Evolution, Journal of Management Information Systems, v.18 n.4, p.11-35, Number 4/Spring 2002
Saverio Perugini , Marcos Andr Gonalves , Edward A. Fox, Recommender Systems Research: A Connection-Centric Survey, Journal of Intelligent Information Systems, v.23 n.2, p.107-143, September 2004
Loren Terveen , David W. McDonald, Social matching: A framework and research agenda, ACM Transactions on Computer-Human Interaction (TOCHI), v.12 n.3, p.401-434, September 2005
Uri Hanani , Bracha Shapira , Peretz Shoval, Information Filtering: Overview of Issues, Research and Systems, User Modeling and User-Adapted Interaction, v.11 n.3, p.203-259, August 2001
Lee Keener, Bibliography, Technology supporting business solutions: Advances in computation: Theory and practice, Nova Science Publishers, Inc., Commack, NY, | recommender systems;text categorization;information filtering;machine learning |
337106 | Hybrid Fault Simulation for Synchronous Sequential Circuits. | We present a fault simulator for synchronous sequential circuits that combines the efficiency of three-valued logic simulation with the exactness of a symbolic approach. The simulator is hybrid in the sense that three different modes of operationthree-valued, symbolic and mixedare supported. We demonstrate how an automatic switching between the modes depending on the computational resources and the properties of the circuit under test can be realized, thus trading off time/space for accuracy of the computation. Furthermore, besides the usual Single Observation Time Test for the evaluation of the fault coverage, the simulator supports evaluation according to the more general Multiple Observation Time Test Strategy (MOT). Numerous experiments are given to demonstrate the feasibility and efficiency of our approach. In particular, it is shown that, at the expense of a reasonable time penalty, the exactness of the fault coverage computation can be improved even for the largest benchmark functions. | Introduction
Simulation is a basic technique applied in many areas of electronic design. As is well known, the
task of simulation at gate level is to determine values (in a given logic) for every lead of the circuit
with respect to a set of primary input assignments. Also in the testing area numerous tools use
simulation as a fundamental underlying algorithm: E.g. the quality of classical Automatic Test
Pattern Generation (ATPG) tools [2] significantly relies on efficient fault simulation, a specific
type of simulation, where the current test patterns are simulated to determine all faults of
a fault model that are also detected by the computed patterns. More recently, a new type
of ATPG tool, the so-called Genetic Algorithm-based tool [30, 14] has emerged. Here, fault
simulation as the core algorithm plays an even more important role. Apart from this, test set
compaction, switching activity computation, signal probability computation, and fault diagnosis
provide further examples of the application of simulation/fault simulation in testing.
Here, we are mainly interested in fault simulation for synchronous sequential circuits. Several
gate level based fault simulation algorithms are known, e.g. [12, 21, 24, 3]. In general, these
simulators focus on performance, and accuracy is not a main concern. On the other hand, if
there is no information about the initial state of the circuit available, the algorithm has to deal
with this unknown initial state.
Very often a three-valued logic (0,1 and X for modeling an unknown value) is used. It is well
known that in general only a lower bound for the fault coverage is determined. Even for some of
the usual benchmarks [4] the gap between this lower bound and the real fault coverage is large.
The reason for this gap is the inherent inaccuracy of the three-valued logic. For instance, there
are circuits whose synchronizing sequence cannot be verified using three-valued logic [23].
(A synchronizing sequence is an input sequence which drives the circuit into a unique state starting
in any initial state.)
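The pessimism of the three-valued logic is easy to see in a few lines of code: once an X enters a computation it tends to persist even when every 0/1 completion of the inputs would give the same result. The encoding below is our own illustration.

X = 'X'   # unknown value

def and3(a, b):
    if a == 0 or b == 0:
        return 0
    return 1 if (a, b) == (1, 1) else X

def or3(a, b):
    if a == 1 or b == 1:
        return 1
    return 0 if (a, b) == (0, 0) else X

def not3(a):
    return X if a == X else 1 - a

def xor3(a, b):
    return or3(and3(a, not3(b)), and3(not3(a), b))

# A signal XORed with itself is 0 for every binary value, but three-valued
# simulation of the same structure only yields X:
assert xor3(X, X) == X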
A lot of work has been done to overcome these problems. On the one hand, if possible,
changes made already during the design phase may help: The problem of synchronization can
be bypassed by implementing a full-reset or a full scan environment and additionally making the
assumption that the added circuitry is fault-free. However, besides this assumption there are
some further disadvantages with such approaches, e.g. the long and sometimes inadequate test
evaluation for full scan circuits, due to the scan-in and scan-out overhead. Also, the area and
delay penalty for full scan circuits might be high and unacceptable. Circuits with partial reset
have been shown to be a good alternative. For example, in [22, 28] partial reset has been used
to improve fault coverage and test length for a given circuit. Thus, assuming a (partially) non-
resetable circuit has its advantages. Furthermore, a sophisticated state assignment procedure
[8] may avoid initializability problems. Nevertheless, the above methods can only be applied
during synthesis and not if an already designed circuit is considered.
In that case, the fault simulation algorithm itself has to handle the unknown initial state.
One possibility, apart from the standard way of using three-valued logic, is complete simulation,
where the unknown values are successively simulated for all possible combinations of 0 and 1
[7, 27]. In general, this approach is only reasonable for circuits with a small number of memory
elements. A more promising approach is based on symbolic traversal techniques, well known in
the area of verification. Ordered Binary Decision Diagrams (OBDDs) [5] may be used for an
efficient representation of the state space and its traversal, i.e. they offer the potential
of calculating the exact values for all signals in the circuit. The computation of minimum (or
almost minimum) reset sequences [16, 31] and test generation with symbolic methods [6, 9, 10]
denote successful applications of this concept.
Nevertheless, the advantage of exact computation is paid for by the complexity of handling
the OBDDs. Indeed, in practical applications it happens quite frequently that circuits leading
to large BDDs have to be treated. Thus, purely symbolic methods are either only applicable to
smaller circuits or they have to be combined, e.g., in the case of test generation, with classical
path-oriented methods to allow the handling of large circuits.
Concerning fault simulation, test generation using symbolic methods should be accompanied
by a fault simulation tool exploiting the potential of the symbolically generated test sequences.
Of course, for the reasons already mentioned before, fully exact symbolic fault simulation in general
cannot be performed for large circuits. Contrary to verification and ordinary test generation
OBDDs for sets of faults have to be constructed and kept in memory. One possibility to handle
this problem is to implement a combination with incomplete, but more efficient strategies.
The hybrid fault simulator H-FS presented in this article follows this approach. Hybrid in
our context means that the algorithm supports different simulation modes, one of them being
the symbolic mode. The simulator assumes a gate level description of the circuit and supports
the stuck-at fault model. It allows a dynamic, fully automatic switching between the modes and
thereby guarantees correctness of the transformation steps between the modes. More precisely,
H-FS uses three kinds of fault simulation procedures:
ffl a fault simulation procedure X-FS based upon three-valued logic,
ffl a symbolic fault simulation procedure B-FS based upon OBDDs, and
ffl a fault simulation procedure BX-FS which is hybrid itself in the sense that a symbolic true-
value simulation and an explicit fault simulation procedure based upon the three-valued
logic are combined.
These procedures differ in their time and space requirements and the accuracy of their fault
simulation. H-FS tries to combine the advantages of these procedures by choosing a convenient
logic before starting the simulation for the next test vector. For instance, if the space
requirements of B-FS is becoming too large, the hybrid fault simulator will select BX-FS or if
necessary X-FS. After the application of a few patterns, that possibly initialize a large number
of memory elements and thereby reduce the space requirements of B-FS, the algorithm will try
to continue with B-FS. For that reason, the hybrid fault simulation strategy works also for the
largest benchmark circuits [4]. Experiments show that H-FS is able to determine the exact fault
coverage for many benchmarks or at least a tighter lower bound than previously known.
Until now, we have considered fault simulation based on the Single Observation Time Test
Strategy (SOT) which is inaccurate in itself. To overcome the limitation of SOT a more general
definition of detectability has to be considered. This led to the Multiple Observation Time Test
Strategy (MOT) which is used e.g. in [26] to increase the efficiency of test generation. Pomeranz
and Reddy realized the necessity to support MOT-based test generation by a MOT-based fault
simulation and proposed three-valued fault simulation based on MOT in [27, 29]. For "complete"
MOT it is necessary to compare the sets of fault-free responses with the set of responses obtained
in the presence of faults (for all possible states of the circuit). This is especially time consuming
if long test sequences and a large number of memory elements exist. To overcome these problems
a restricted version of MOT (rMOT) has been proposed which nevertheless is more accurate
than SOT [26].
In addition to SOT, H-FS supports rMOT and MOT as well. It turns out that rMOT and
MOT can be included in the symbolic parts of H-FS without too much effort. The use of OBDDs
makes it possible to handle a large number of output sequences. Experiments demonstrate that
we succeed in computing the exact MOT fault coverage for many of the considered benchmark
circuits. In case the space requirements of the OBDD-based approach exceed a given limit, which
is determined by the working environment, the hybrid fault simulator may e.g. change to the
SOT strategy based on the three-valued logic for some simulation steps and then again return
to the symbolic evaluation and the MOT strategy. This guarantees that the MOT strategy can
be applied even to large circuits. In contrast to the general MOT strategy, the rMOT strategy
allows a test evaluation by comparing the output sequence of the circuit under test with the
unique output sequence of the fault-free circuit. We show experimentally that the accuracy of
fault simulation based on rMOT is almost identical to that based on MOT for many circuits.
With regard to performance it can be observed that for some circuits a symbolic rMOT fault
simulation works even more efficiently than a symbolic SOT fault simulation. Thus, the results
generated according to rMOT have all attributes important for a fault simulation algorithm:
reasonably fast simulation time, high fault coverage, and normal test evaluation.
The paper is structured as follows: Section 2 presents some definitions and important properties
of synchronous sequential circuits. In Section 3, the different fault simulation components
and their properties are described. The resulting hybrid fault simulator H-FS based on SOT
and the symbolic extensions for MOT are explained in Section 4. Section 5 gives experiments
that demonstrate the efficiency of the presented hybrid fault simulator. We finish with
a summary of the results in Section 6.
2 Preliminaries
In this section we repeat some basic definitions and notation necessary for the understanding of
the paper. SOT, MOT and rMOT are introduced. Finally, the complexity of fault simulation
for sequential circuits is briefly analyzed from a theoretical point of view.
2.1 Basic Definitions and Notation
As is well known, the input/output behavior of a synchronous sequential circuit can be described
by a Finite State Machine (FSM) [18]; an illustration is given in Figure 1.
More formally, a finite state machine M is defined as a 5-tuple M = (I, O, S, δ, λ), where I
is the input set, O the output set, and S the state set. δ : S × I → S is the next state function,
λ : S × I → O is the output function.
Figure 1: Model of a finite state machine
Since we consider a gate level realization of a FSM, we have I = B^k, O = B^l, and S = B^m,
where k denotes the number of primary inputs (PIs), l the number of primary outputs (POs),
and m the number of memory elements. δ and λ are computed by a combinational circuit. The
inputs of the combinational circuit which are connected to the outputs of the memory elements
are called secondary inputs (SIs) or present-state variables. Analogously, the outputs of the
combinational circuit connected to the inputs of the memory elements are called secondary
outputs (SOs) or next-state variables.
For the description of our algorithms we use the following notation: Z = (z(1), ..., z(n))
denotes an input sequence of length n with z(t) = (z_1(t), ..., z_k(t)); z_i(t), 1 ≤ i ≤ k,
denotes the value that is assigned to the i-th PI before starting simulation at time
step t. (s(p, 0), ..., s(p, n)) denotes the state sequence defined by Z, the initial state
s(p, 0) = p and the next state function δ, i.e. s(p, t) = δ(s(p, t-1), z(t)) for 1 ≤ t ≤ n.
(o(p, 1), ..., o(p, n)) is the output sequence defined by the initial state p, input sequence
Z and output function λ, i.e. o(p, t) = λ(s(p, t-1), z(t)) for 1 ≤ t ≤ n. The notion
o_i(p, t), 1 ≤ i ≤ l, is used to denote the value at the i-th PO after simulation in time step t.
As usual, the behavior of a circuit affected by a stuck-at fault f is described by a faulty FSM
M^f = (I, O, S, δ^f, λ^f); the states s^f(p, t) and the outputs o^f(p, t) are defined analogously
to the fault-free case.
Notice that for the case of an unknown initial state, we have to assume that the initial state
of the fault-free machine and of the faulty machine may be any element of B^m.
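For illustration only, the computation of the state and output sequences defined above can be sketched in C++ as follows; the type names and the way δ and λ are passed in are assumptions made for this sketch and not part of H-FS.

#include <vector>
#include <functional>

using State  = std::vector<bool>;   // one Boolean value per memory element
using Vector = std::vector<bool>;   // primary input or primary output vector

// Compute the output sequence (o(p,1), ..., o(p,n)) for a fixed initial state p.
std::vector<Vector> outputSequence(
    const State& p,
    const std::vector<Vector>& Z,                                     // input sequence
    const std::function<State(const State&, const Vector&)>& delta,   // next state function
    const std::function<Vector(const State&, const Vector&)>& lambda) // output function
{
  std::vector<Vector> o;
  State s = p;                       // s(p,0) = p
  for (const Vector& z : Z) {
    o.push_back(lambda(s, z));       // o(p,t) = lambda(s(p,t-1), z(t))
    s = delta(s, z);                 // s(p,t) = delta(s(p,t-1), z(t))
  }
  return o;
}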
2.2 Fault Detection in Sequential Circuits
Detectability of faults in synchronous sequential circuits in general depends on the (possibly)
unknown initial state [1, 15, 26]. Here, we consider the Single Observation Time Test
(SOT) [1], the Multiple Observation Time Test Strategy (MOT), and a restricted version of
MOT [26] together with the single stuck-at fault model.
Definition 2.1
A fault f is SOT-detectable by an input sequence Z iff there exist t ∈ {1, ..., n} and i ∈ {1, ..., l}
such that for all states p, q ∈ B^m: o_i(p, t) = ¬ o_i^f(q, t).
According to the above definition a fault is SOT-detectable if there is a unique point in time
such that independent of the initial states of both machines the Boolean output values on a
particular PO are to each other's inverse.
Figure 2: Example of SOT (MOT) fault detection.
It turns out that there are some intuitively detectable stuck-at faults which are not detectable
according to this definition. For illustration consider Figure 2 [27]. The figure shows two fault
simulation steps for the test sequence shown there. There are four possible initial state
pairs: {(0,0), (0,1), (1,0), (1,1)}. These four pairs can be separated into two cases:
For (p, q) ∈ {(0,0), (1,1)} there is a difference at the PO in time frame one,
but there is no difference at the PO in time frame two. For (p, q) ∈ {(0,1), (1,0)} there is a
difference at the PO in time frame two, but there is no difference in time frame one. Hence,
this fault is undetectable according to SOT since there is no single time frame for which all initial
state pairs cause the faulty and fault-free outputs to differ. On the other hand, the fault is
detectable according to MOT:
Definition 2.2
A fault f is MOT-detectable by an input sequence Z iff for every state pair (p, q) ∈ B^m × B^m
there exist t ∈ {1, ..., n} and i ∈ {1, ..., l} such that o_i(p, t) = ¬ o_i^f(q, t).
According to MOT, there is an individual point in time for each possible initial state pair
(p; q), such that the Boolean output values on a particular PO are complementary. In other
words, there is no state pair (p; q), such that the corresponding output sequences resulting from
application of input sequence Z (to the fault-free and faulty circuit) are identical. Hence, it is
clear that MOT is more general than SOT. Furthermore, MOT-detectability is equivalent to the
fact that the set of output sequences (obtainable for a fixed sequence Z and all possible initial
states) for the fault free and faulty circuit are disjoint.
Restricted MOT requires a unique output value for the fault-free circuit, thus being less
general than MOT, but more general than SOT:
Definition 2.3
A fault f is rMOT-detectable by an input sequence Z iff for every state q ∈ B^m there exist
t ∈ {1, ..., n} and i ∈ {1, ..., l} such that o_i(p, t) has the same defined value for all p ∈ B^m and
o_i^f(q, t) is the complement of this value.
(Notice that the fault given in the example of Figure 2 is MOT-detectable, but not rMOT-
detectable.) As mentioned before, the advantage of rMOT compared to MOT results from the
fact that rMOT allows a test evaluation by comparing the output sequence of the circuit under
test with a (partially defined) unique output sequence of the fault-free circuit.
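To make the three notions concrete, the following brute-force C++ sketch restates them by explicit enumeration; it assumes that the output sequences of the fault-free and the faulty machine have already been computed for every possible initial state (good and bad), which is feasible only for very small m and is meant purely as an executable paraphrase of the definitions.

#include <vector>
#include <cstddef>

// out[t][i]: value of PO i at time t for one fixed initial state.
using OutSeq = std::vector<std::vector<bool>>;

bool sotDetectable(const std::vector<OutSeq>& good, const std::vector<OutSeq>& bad) {
  std::size_t n = good[0].size(), l = good[0][0].size();
  for (std::size_t t = 0; t < n; ++t)
    for (std::size_t i = 0; i < l; ++i) {
      bool ok = true;                            // one (t,i) must work for all (p,q)
      for (const OutSeq& g : good)
        for (const OutSeq& b : bad)
          ok = ok && (g[t][i] != b[t][i]);
      if (ok) return true;
    }
  return false;
}

bool motDetectable(const std::vector<OutSeq>& good, const std::vector<OutSeq>& bad) {
  for (const OutSeq& g : good)                   // every pair (p,q) must differ somewhere
    for (const OutSeq& b : bad)
      if (g == b) return false;
  return true;
}

bool rmotDetectable(const std::vector<OutSeq>& good, const std::vector<OutSeq>& bad) {
  std::size_t n = good[0].size(), l = good[0][0].size();
  for (const OutSeq& b : bad) {                  // every faulty initial state q ...
    bool detected = false;
    for (std::size_t t = 0; t < n && !detected; ++t)
      for (std::size_t i = 0; i < l && !detected; ++i) {
        bool constant = true;                    // fault-free output unique for all p?
        for (const OutSeq& g : good) constant = constant && (g[t][i] == good[0][t][i]);
        if (constant && b[t][i] != good[0][t][i]) detected = true;
      }
    if (!detected) return false;
  }
  return true;
}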
We close this section by analyzing the complexity of fault simulation for synchronous sequential
circuits. While it is well known [2] that fault simulation in combinational circuits can be
solved in time O(n · |C|) (|C| being the size of the circuit and n the number of patterns), the situation
is more difficult in the sequential case: The run-time of sequential fault simulation based on
three-valued logic is identical to this bound, but a three-valued simulation in general computes
only a lower bound for the fault coverage. (An example circuit will be shown later.) However,
the exact solution of the problem of sequential fault simulation, i.e. the computation of the
exact fault coverage for a given test sequence, is much more complex, even with respect to SOT.
For an analysis of the complexity of exact sequential fault simulation we consider the following
decision problem which has to be solved during fault simulation.
Instance: Synchronous sequential circuit, input sequence Z and stuck-at fault f
Question: Is f SOT-undetectable by Z?
We construct a (polynomial time) reduction of the non-tautology problem to SOT-FSIM-UNDE-
TECT. (The non-tautology problem corresponds to the question whether a given combinational
circuit C evaluates to 0 for at least one assignment of the variables.) Since non-tautology is
well-known to be NP-complete we obtain:
Theorem 2.1
SOT-FSIM-UNDETECT is NP-hard.
Sketch of the Proof: For the reduction we consider any combinational circuit C. Then
the PIs of C are replaced by memory elements, so there are no longer any PIs. The resulting
sequential circuit together with the empty input sequence and the sa-0 fault at the PO forms
an instance of SOT-FSIM-UNDETECT. Due to the unknown initial state the fault is SOT-
undetectable (for the empty input sequence) iff C is a non-tautology. □
Note that the above result is valid also for rMOT and MOT, because a test sequence of
length 1 is considered.
From Theorem 2.1 it follows that there is no hope of finding an efficient polynomial time
algorithm for obtaining an exact solution. Thus, from the point of view of complexity theory,
using OBDDs with an exponential worst case behavior is justified, if we want to attack the
problem of finding an exact solution. Furthermore, we will see in the following how the exact
algorithm can be modified to trade off runtime of the method for exactness of the result.
3 Components of Hybrid Fault Simulation
In this section we introduce the three main components of H-FS: the three-valued fault simulation
procedure X-FS, the symbolic simulation procedure B-FS and a mixed simulation procedure
BX-FS. Since all of them follow the same basic simulation method we introduce this method in
advance to simplify the presentation.
3.1 Basic Fault Simulation Scheme
In our approach the overall fault simulation scheme corresponds to an event-driven single-fault
propagation (SFP) [2]. Since procedures with different logics are going to be combined in the
final algorithm we do not make use of the machine word length for parallel evaluation of input
patterns.
At the beginning the fault simulation procedure receives a set of faults F and a test sequence
Z of length n. Also, an encoding of the unknown initial state is defined depending on the logic
used. Then the simulation of the sequential circuit for each time frame is performed similar to
the simulation of a combinational circuit by evaluating the gates in a topological order, with
the extension that the value of a secondary input at time t, 1 - t - n, is defined by the value
of the corresponding secondary output at time t \Gamma 1.
At first, a true-value simulation is carried out. Subsequently, the faults are injected one by
one and an event-driven SFP is performed. Thereby the effects of each fault are propagated
towards the POs and SOs. If any fault reaches a PO it will be marked as detected. Of course
the fault is dropped and it will not be considered during following simulation steps. If an SO
changes, the next state of the faulty circuit is different from the next state of the fault-free
circuit and it has to be stored for the next simulation step. This means, only those memory
elements are stored whose value differ from that of the fault-free case. This helps to reduce the
memory requirements of the simulator.
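The storing of only the differing memory elements can be organized, for instance, as in the following C++ sketch; the concrete data layout (FaultRecord, the value encoding) is an assumption for illustration, not the implementation of H-FS.

#include <vector>
#include <unordered_map>

// Per fault: only the memory elements whose value differs from the fault-free circuit.
struct FaultRecord {
  bool detected = false;
  std::unordered_map<int, char> diff;   // memory element index -> '0', '1' or 'X'
};

// Build the complete present state of a faulty circuit on demand.
std::vector<char> faultyPresentState(const std::vector<char>& faultFree,
                                     const FaultRecord& f) {
  std::vector<char> s = faultFree;      // start from the fault-free state
  for (const auto& [idx, val] : f.diff) // overwrite only the differing elements
    s[idx] = val;
  return s;
}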
3.2 Three-Valued Fault Simulation
As already mentioned, fault simulation using the three-valued logic B_X = {0, 1, X}, in this paper denoted
as X-FS, is the usual way to handle circuits with an unknown initial state. (0 and 1 are called
defined values. X is used to denote the unknown or undefined value.) The unknown initial state
is encoded by (X, ..., X), thus representing the set of all possible binary initial states. A
fault f is marked as detectable by the input sequence Z, if during explicit fault simulation based
on logic BX a PO is reached where the fault-free and the faulty circuit compute different but
defined values. Since this difference is obtained without using any information about the state
of the (fault-free and faulty) circuit it follows immediately that the fault is SOT-detectable by
Z.
Figure 3: B_X-uninitializable circuit.
On the other hand, the undefined value X is not able to capture dependencies between
two undefined signals and thus leads to inaccurate computations. A simple example is given
in Figure 3, where a simulation based upon the three-valued logic is not able to verify the
synchronizing sequence given by [a=1,b=0,c=0]. (As shown in [23] the choice of the binary
encoding may be a further reason for inaccuracy preventing the verification of synchronizing
sequences with the three-valued logic.) The advantage of three-valued fault simulation is its
time and space behavior. Based upon the three-valued logic the fault simulation of a circuit C
for a test sequence of length n can be performed in time O(n · |C|^2).
In summary, X-FS efficiently determines a lower bound for the exact fault coverage with
respect to SOT.
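A minimal C++ sketch of gate evaluation over this logic is given below; the encoding of the three values is, of course, an assumption made for illustration.

enum class Val3 : char { Zero, One, X };

Val3 and3(Val3 a, Val3 b) {
  if (a == Val3::Zero || b == Val3::Zero) return Val3::Zero;  // controlling value
  if (a == Val3::One  && b == Val3::One)  return Val3::One;
  return Val3::X;                                             // at least one X, no 0
}

Val3 or3(Val3 a, Val3 b) {
  if (a == Val3::One  || b == Val3::One)  return Val3::One;   // controlling value
  if (a == Val3::Zero && b == Val3::Zero) return Val3::Zero;
  return Val3::X;
}

Val3 not3(Val3 a) {
  if (a == Val3::X) return Val3::X;
  return a == Val3::Zero ? Val3::One : Val3::Zero;
}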
3.3 Symbolic Fault Simulation
Unlike X-FS, the symbolic fault simulation procedure called B-FS aims at representing the exact
values of all signals in the circuit under the assumption that the initial state is not known and
a fixed input sequence Z is given. To do so, the signal value in each time step is completely
defined by a Boolean function depending on the m memory elements and the fixed Boolean
values of the sequence Z.
Thus, B-FS is based upon the logic B_s = {f | f : B^m → B}, i.e. the elements of the logic are
single-output Boolean functions with m Boolean variables, where to each memory element a
Boolean variable p_i is assigned, representing the unknown value at the beginning of the simulation
of both the fault-free and the faulty circuit. The two constant functions contained in B_s
will be abbreviated by 0 and 1. (At the beginning of the simulation of each time step they
are assigned to the PIs according to the values of the sequence Z.) For the representation of
the elements in B s we use OBDDs. Using OBDD manipulation algorithms a symbolic fault
simulation along the general scheme presented in Section 3.1 can now be performed. According
to SOT, a fault f is marked as detectable by Z, iff there exists an output for which M and M f
lead to different but constant functions in a certain time frame. (We want to mention at this
point, that, in contrast to X-FS, the symbolic simulation scheme also offers the possibility to
determine detectability with respect to rMOT and MOT. The details are more complicated and
will be discussed separately in Section 4.2.)
Concerning time and space behavior the following should be noted: In each time step B-FS
assigns an OBDD to each lead of the circuit during true-value simulation. This OBDD must be
stored for the event-driven explicit fault simulation. Moreover, the symbolic representations for
the state vector of the fault-free circuit and all state vectors of the faulty circuits, not detected
in the previous time steps, have to be stored for the next simulation step.
In short, B-FS determines the exact fault coverage with respect to SOT. But, even when using
heuristics to find a "good" variable order, the space and time requirements may be prohibitively
high (in the worst case exponential) and prevent an application of a purely OBDD-based fault
simulation algorithm to large circuits.
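To give an impression of such a symbolic gate evaluation, the following C++ fragment uses the C interface of the CUDD OBDD package; the circuit representation (Gate, lead indices) is an assumption made for this sketch, and reference counting of released nodes is omitted for brevity.

#include <vector>
#include "cudd.h"

struct Gate { enum Kind { AND, OR, NOT } kind; int in0, in1, out; };

// lead[i] holds the OBDD of lead i, defined over the state variables p_1, ..., p_m.
void evaluateGateSymbolically(DdManager* mgr, const Gate& g, std::vector<DdNode*>& lead) {
  DdNode* f = nullptr;
  switch (g.kind) {
    case Gate::AND: f = Cudd_bddAnd(mgr, lead[g.in0], lead[g.in1]); break;
    case Gate::OR:  f = Cudd_bddOr(mgr, lead[g.in0], lead[g.in1]);  break;
    case Gate::NOT: f = Cudd_Not(lead[g.in0]);                      break;
  }
  Cudd_Ref(f);
  lead[g.out] = f;
}

// The unknown initial state is encoded by assigning the variable p_i to the i-th
// secondary input: lead[SI_i] = Cudd_bddIthVar(mgr, i).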
3.4 Mixed-Logic Fault Simulation
We take a first step towards a hybrid fault simulation procedure by combining X-FS and B-FS.
The resulting mixed-logic fault simulation procedure BX-FS basically works as follows:
• The true-value simulation, which is carried out only once per input vector, uses the accurate
logic B_s, i.e. the values are represented by OBDDs and the initial state is given
by the variable vector (p_1, ..., p_m).
• The expensive explicit fault simulation uses the three-valued logic B_X; thus the unknown
initial state is modeled by (X, ..., X).
In the following we discuss problems emerging from the use of differing logics in the simulation
of the fault-free and of the faulty machine in combination with the SFP method.
Algorithm BX sim (Figure 4) gives a more detailed description of the mixed-logic true-value
simulation: At first the PIs and SIs are initialized (Line 1). After an OBDD-based evaluation of
the gate g (Line 2) the output-OBDD T_B[output(g)] at signal output(g) is transformed according
to τ (Line 3): T_X[output(g)] = τ(T_B[output(g)]).
Here, τ : B_s → B_X is a mapping realizing a transformation of symbolic values to the values
0, 1, X as follows: τ(f) = 0 if f ≡ 0, τ(f) = 1 if f ≡ 1, and τ(f) = X otherwise.
Note that τ is surjective but not injective, since all non-constant values in B_s are mapped to
the value X.
Line 4 of Algorithm BX sim ensures that an OBDD is freed (and finally deleted) as soon as
the OBDD is no longer necessary. This is different from the pure symbolic simulation. There, all
OBDDs of the true-value simulation have to be stored for initializing the faulty circuits. Here,
this storing is only done for the OBDDs of the next state. The storing for the remaining signals
is done more efficiently by logic elements of BX . Note that only elements of BX are required
for the explicit fault simulation part.
procedure BX sim( z(t), s(t) )
   /* T_B : vector of OBDDs, T_X : vector of values in B_X ; both of length No. of leads */
   (1) initialize the primary and secondary inputs with the OBDDs z(t) and s(t) ;
   for each gate g ∈ G in topological order do
   (2)   T_B[output(g)] := obdd_evaluate( g, T_B[input(g,1)], ..., T_B[input(g,k)] ) ;
   (3)   T_X[output(g)] := τ( T_B[output(g)] ) ;
   (4)   free T_B[input(g,i)] as soon as all successors of input(g,i) are evaluated ;
Figure 4: Algorithm BX sim for mixed-logic true-value simulation
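In terms of OBDDs the transformation applied in line (3) only has to test whether the function at a lead is constant; a possible C++ formulation (again on top of CUDD, with a three-valued encoding assumed for illustration) is:

#include "cudd.h"

enum class Val3 : char { Zero, One, X };

// Maps a symbolic value (an OBDD over p_1, ..., p_m) to the three-valued logic.
Val3 tau(DdManager* mgr, DdNode* f) {
  if (f == Cudd_ReadLogicZero(mgr)) return Val3::Zero;  // constant 0 function
  if (f == Cudd_ReadOne(mgr))       return Val3::One;   // constant 1 function
  return Val3::X;                                       // any non-constant function
}

Since OBDDs are canonical, the constancy test reduces to two pointer comparisons.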
Using the transformed true-value assignment instead of using a three-valued true-value simulation
increases the accuracy of the explicit fault simulation. A simple example illustrates the
improvement. Consider the circuit shown in Figure 3 again. This circuit cannot be initialized by
the input assignment using the three-valued logic as shown in Section 3.2. However, algorithm
BX sim initializes the fault-free circuit and assigns a 1 to the primary output of the circuit,
whereas a three-valued simulation would assign the value X. Thus, a stuck-at 0 fault at the
output becomes detectable now.
On the other hand, combining different logics for fault simulation together with the efficient
method of event-driven simulation leads to a new problem. For illustration consider Figure 5,
which shows a symbolic assignment determined by the true-value simulation on the left-hand
side and the assignment after the logic transformation on the right-hand side. f , g, h, 0 and 1
are elements of B_s with f, g, h ∉ {0, 1} and f · h ∉ {0, 1}.
Now assume that a stuck-at 0 fault at u is injected and the resulting event is propagated
towards w by SFP as usually done in three-valued fault simulation; see the right-hand side of
Figure 5: How to combine different logics in BX-FS event-driven single-fault propagation?
Figure 5. At first, the multiplexers are evaluated. Since no event is produced at the outputs,
SFP stops and the value on lead v remains 0 for the faulty circuit. If we now evaluate the OR
gate (which has an event at the right input), we obtain 1=0 at the output. Thus, the fault would
be judged to be detectable at output w. This is not correct, however, as a look at the left-hand
side of Figure 5 easily reveals.
Therefore, the event-driven SFP has to be modified to guarantee its correctness: Whenever
a gate is evaluated during the event-driven single-fault propagation and the value at the output of
the gate is equal to X in the faulty case, we in general do not know anything about the symbolic
value represented by X. In particular we do not know whether X stands for a value that is
identical to the value of the signal in the fault-free case. Thus, to be correct, SFP at this point
has to be continued i.e. X has to be considered as a (potential) event. Consider again Figure 5.
Evaluating both multiplexers leads to X on the multiplexer outputs in the fault-free and faulty
cases. Consequently, the following AND gate has to be evaluated and we get a 0=X event at v.
Evaluating the OR gate leads to 0=X at w and the fault is not observable, which is the correct
answer. Algorithm BX fa sim in Figure 6 describes this event propagation more precisely.
procedure BX fa sim( f , L f ,
list of faulty state values
computed by BX sim
(2) for all (lead, value) 2 L f
while exists gates g with marked input leads
then
if (output(g) is PO with
then f is SOT-detectable; exit;
s is a marked SOg;
Figure
Algorithm BX fa sim for mixed-logic explicit fault simulation
F denotes the three-valued assignment of the faulty circuit and T_X the three-valued assignment
determined by Algorithm BX sim (Figure 6): After the initialization of the value vector (Line 1)
the present state of the faulty circuit is loaded from L f , the list storing the state values different
from those of the fault-free circuit. L f will be updated in Line 8 at the end of the algorithm. In
Line 3 the fault is injected and the corresponding lead is marked. As long as there is a gate in
the queue (sorted by the level of the gate), the body of the loop is executed (Line 4). After the
computation of the output value of g (Line 5), Line 6 checks for an event. If an event reaches
a PO, a fault is detected with respect to SOT (Line 7). If after the while-loop the fault is not
detected, the next state of the faulty circuit is stored (Line 8).
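The essential difference to a purely three-valued single-fault propagation lies in the event check of line (6) and the detection check of line (7); a C++ sketch of these two checks (with an assumed three-valued encoding) could read:

enum class Val3 : char { Zero, One, X };

// An X in the faulty circuit is always a potential event, because it may hide a
// symbolic value that differs from the one in the fault-free circuit.
bool isEvent(Val3 faulty, Val3 faultFree) {
  if (faulty == Val3::X) return true;       // potential event
  return faulty != faultFree;               // ordinary 0/1 event
}

// A detection at a primary output is only reported if both values are defined.
bool sotDetected(Val3 faulty, Val3 faultFree) {
  return faulty != Val3::X && faultFree != Val3::X && faulty != faultFree;
}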
Based on the description of BX-FS and the considerations given above, we are now ready to
conclude the correctness of the algorithm: The true-value simulation performed by Algorithm
BX sim is similar to that of B-FS. In particular, it guarantees that all constant values at the POs
are computed. The modified event propagation as given by Algorithm BX fa sim guarantees
that all potential events are propagated. Thus, a 0=1 or 1=0 difference at a PO guarantees the
detectability of the fault according to SOT. Furthermore, a slight modification of the example
above shows that B-FS in general detects more faults than BX-FS.
Theorem 3.1
The procedure BX-FS determines (only) a lower bound for the fault coverage with respect
to SOT. The lower bound in general is tighter than the bound determined by X-FS.
As already mentioned, the space and time requirements of BX-FS are considerably smaller than
those of B-FS, since fewer OBDDs have to be stored and fewer OBDDs have to be constructed.
Consequently, with the same parameter setting for the OBDD-manager, larger OBDDs can
be built. Thus, in terms of accuracy and complexity, BX-FS is positioned between X-FS and
B-FS. We will see in the experiments that from the practical point of view BX-FS also offers a
reasonable compromise between efficiency and accuracy.
4 The Hybrid Fault Simulator H-FS
In this section the integration of the three simulation procedures in H-FS is discussed. To
repeat our motivation, pure symbolic fault simulation with B-FS and even a mixed simulation
with BX-FS may be infeasible for specific large circuits. On the other hand, fault coverage
should be determined as precisely as possible, thus symbolic methods should be applied as often
as possible. For that reason, we developed a hybrid scheme which is able to automatically
select a suitable simulator and switch back and forth between the simulators depending on the
resources available and the properties of the circuit under test.
We firstly describe how hybrid fault simulation with respect to SOT is performed. Then, we
extend the concept to also work with rMOT and MOT.
4.1 Hybrid Fault Simulation with respect to SOT
To perform hybrid fault simulation for synchronous sequential circuits with an input sequence
Z, H-FS receives as additional inputs a space limit S max and an initial mode. S max bounds
the memory which can be used by the OBDD package. In its basic form H-FS works in three
modes based upon X-FS, B-FS, BX-FS. The modes differ in their accuracy and space/time
requirements.
• Mode MX performs a fault simulation using the three-valued logic. In this mode, H-FS
works like X-FS.
• Mode MBX tries to perform a fault simulation using BX-FS. If the space required by
the OBDD-based true-value simulation exceeds the space limit S_max, H-FS selects X-FS
and simulates the next Δ_1 time steps in mode MX, starting with the resimulation of the
current fault-free circuit. Subsequently, H-FS works in mode MBX again if necessary. A
change back to MBX is unnecessary if all memory elements of the fault-free circuit are
initialized.
• Mode MB tries to perform fault simulation using B-FS. If the space required by B-FS
exceeds the space limit S_max, H-FS selects BX-FS and works in mode MBX for the rest
of the current and the next Δ_2 time steps, starting with the resimulation of the current
faulty (or fault-free) circuit. Subsequently, H-FS changes to mode MB again if necessary. A
change to MB is unnecessary if all memory elements of the faulty circuits are initialized.
Clearly, if the space limit is never exceeded by B-FS, H-FS determines the exact fault
coverage achievable with the test sequence Z.
Figure 7: Automatic switching between different simulation modes.
Figure 7 illustrates the mode selection. In this example H-FS starts in mode MB. If the memory
limit is exceeded it changes to mode MBX or MX, respectively. After Δ_i steps it returns to mode
MB. If all memory elements are initialized in mode MB (in Figure 7 at time t_I) H-FS switches
to mode MX. The values of Δ_1 and Δ_2 and the space limit S_max have a strong influence on the
run time and the accuracy of H-FS. For instance, if no space limit is given, H-FS started in mode
MB determines the exact fault coverage and has the same run time behavior as B-FS. Using a
small space limit and large values for Δ_i, the accuracy and the efficiency of H-FS "converges" to
that of X-FS. In the current version of H-FS the pair (Δ_1, Δ_2) is defined by the user. Based on
numerous tests, we used the pair (5, 7) for our experiments in Section 5.
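A strongly simplified controller for this mode selection might look as follows in C++; the space limit, the Δ-counters and the two predicates passed in merely mirror the textual rules above and are assumptions of this sketch.

#include <cstddef>

enum class Mode { MX, MBX, MB };

struct ModeController {
  Mode        mode = Mode::MB;
  std::size_t sMax;            // space limit for the OBDD package
  int         delta1, delta2;
  int         downCount = 0;   // remaining time frames in the less precise mode

  // Called once per time frame before the simulation of the next test vector.
  Mode select(std::size_t obddNodes, bool allInitialized) {
    if (allInitialized)     { mode = Mode::MX; return mode; }  // no accuracy is lost
    if (downCount > 0)      { --downCount;     return mode; }  // stay switched down
    if (obddNodes > sMax) {                                    // switch-down
      if      (mode == Mode::MB)  { mode = Mode::MBX; downCount = delta2; }
      else if (mode == Mode::MBX) { mode = Mode::MX;  downCount = delta1; }
    } else if (mode == Mode::MX) {                             // try to switch up again
      mode = Mode::MBX;
    } else if (mode == Mode::MBX) {
      mode = Mode::MB;
    }
    return mode;
  }
};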
We now want to point out details of the switches between the three simulation modes.
According to the definition of the modes a switch from a mode to a less precise mode - a switch-
may occur during a time frame, while a switch to a more accurate mode - a switch-up -
only occurs at the beginning of a time frame. In both cases signal values have to be transformed
between the corresponding logics B_s and B_X.
We first consider the switch-down from MB to MBX . Here, stored symbolic state vectors
of the faulty circuits are transformed to B_X by using the transformation τ already introduced in
Section 3.4. In particular this means that any correlations between signal values before time
step t are lost, unless the values are constant.
Transformations for a switch-down from MBX to MX are performed analogously for the
fault-free circuit. For an example see Table 1, where the transformation of symbolic state
vectors into three-valued state vectors is illustrated. s(t) is the state vector computed by the
fault-free circuit at time t, and s f 1 (t) and s f 2 (t) are the state vectors computed by the faulty
circuits having fault f 1 and f 2 , respectively.
We now consider the transformations necessary for a switch-up. In this case values from
B_X have to be replaced by symbolic values. Since τ is not injective, an inverse mapping cannot
be defined.
Table 1: Transformation of symbolic state vectors into three-valued state vectors.
We make use of the fact that a switch-up is performed only at the beginning of a
time step and therefore an "inverse" transformation has to be applied only to state vectors. Let
a = (a_1, ..., a_m) be a three-valued state vector. Then the i-th component of the transformed
symbolic state vector is defined as 1 if a_i = 1, 0 if a_i = 0, and p_i if a_i = X.
An example for the transformation of three-valued state vectors into the corresponding symbolic
state vectors after a time step t is shown in Table 2. If the i-th component of a state
vector is undefined, it is replaced by p i , otherwise, the corresponding defined value is used. In
the following time steps, the OBDDs are defined only over the remaining p i variables, resulting
in a (much) smaller memory demand.
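With CUDD, the "inverse" transformation applied to a three-valued state vector at a switch-up could be sketched as follows (the three-valued encoding and the vector layout are assumptions of this sketch):

#include <cstddef>
#include <vector>
#include "cudd.h"

enum class Val3 : char { Zero, One, X };

// Replace every undefined component a_i = X by the state variable p_i and keep the
// defined components as constant functions.
std::vector<DdNode*> liftStateVector(DdManager* mgr, const std::vector<Val3>& a) {
  std::vector<DdNode*> s(a.size());
  for (std::size_t i = 0; i < a.size(); ++i) {
    if      (a[i] == Val3::Zero) s[i] = Cudd_ReadLogicZero(mgr);
    else if (a[i] == Val3::One)  s[i] = Cudd_ReadOne(mgr);
    else                         s[i] = Cudd_bddIthVar(mgr, static_cast<int>(i));
    Cudd_Ref(s[i]);
  }
  return s;
}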
Table 2: Transformation of three-valued state vectors into symbolic state vectors.
Of course, the efficiency of H-FS depends on the selected mode. To improve the efficiency,
H-FS checks after each simulation step whether there is any memory element left that is not
initialized either in the fault-free circuit or in any faulty circuit. H-FS automatically changes
from mode MB to mode MX if all memory elements of the fault-free circuit and of the faulty
circuits are initialized because gate evaluations based upon the three-valued logic are much more
efficient than gate evaluations based upon OBDDs. See Figure 7 for an example. In simulation
step t I the correct and all faulty circuits are initialized and the simulator switches from mode MB
to mode MX without losing accuracy. Notice that only the fault-free circuit must be initialized
to allow a switching from MBX to MX without a disadvantage.
The hybrid fault simulation procedure H-FS uses X-FS for reducing the space requirements of
B-FS and BX-FS. It profits from the fact that after some three-valued simulation steps, in most
cases the number of memory elements which are not initialized is greatly reduced. Consequently,
the space required by B-FS or BX-FS is reduced because the number of variables introduced to
encode the current state of the circuits is smaller. Noting that the space requirements may be
exponential in the number of variables introduced, the importance of the three-valued simulation
steps is obvious. Consequently, H-FS partially allows a symbolic fault simulation even for very
large circuits. This will be shown by experiments in Section 5.
4.2 Hybrid Fault Simulation with respect to rMOT and MOT
As mentioned in Section 2 there are stuck-at faults which cannot be detected by any fault
simulation based on SOT. However, as the example in Figure 2 shows, a fault may be detectable
by watching the output sequence for several time frames and applying the MOT strategy.
According to the definition of MOT in Section 2.2 it is necessary to compare sets of fault-free
responses with sets of responses obtained in the presence of faults. This is especially costly if
long test sequences and a large number of memory elements exist.
An elegant way to solve this task by using symbolic methods is presented in the sequel. We
define the MOT-detection function
D^MOT_{f,Z}(p, q) := ∏_{t=1}^{n} ∏_{i=1}^{l} ( o_i(p, t) ≡ o_i^f(q, t) )
for each fault f and test sequence Z, where ≡ denotes Boolean equivalence (XNOR) and
p = (p_1, ..., p_m) and q = (q_1, ..., q_m) denote the state
variables for the initial state of the fault-free and faulty circuit, respectively.
D^MOT_{f,Z} compares all output sequences of the fault-free and faulty circuits simultaneously. As
long as there is an initial state p of the fault-free circuit that causes the same output sequence
(with respect to Z) as a faulty circuit with initial state q, the two circuits cannot be distinguished,
and D^MOT_{f,Z} ≠ 0. We conclude:
Lemma 4.1 A fault f is MOT-detectable by the input sequence Z iff D^MOT_{f,Z} is the constant function 0.
To illustrate the computation of D^MOT_{f,Z} consider the circuit shown in Figure 2 again. For
the test sequence given there and the stuck-at 1 fault f indicated in the figure we obtain
D^MOT_{f,Z} = 0.
Consequently, the fault is MOT-detectable.
According to the definition of rMOT a "restricted" version of the MOT-detection function
D^MOT_{f,Z} is sufficient: The product has to be taken only over terms for which the fault-free output
o_j(p, t) is a constant function, i.e. we obtain the rMOT-detection function D^rMOT_{f,Z} defined by
D^rMOT_{f,Z}(q) := ∏_{1≤t≤n, 1≤j≤l, o_j(p,t) constant} ( o_j(p, t) ≡ o_j^f(q, t) )
for each fault f and test sequence Z. We obtain a lemma analogous to that for MOT:
Lemma 4.2 A fault f is rMOT-detectable by the input sequence Z iff D^rMOT_{f,Z} is the constant function 0.
Fault simulation with respect to rMOT and MOT is now realized during symbolic fault
simulation with B-FS by iteratively computing the detection functions D^rMOT_{f,Z} and D^MOT_{f,Z},
respectively. To do so, we consider the function Detect_{f,Z}, which initially is set to the constant
function 1 and then incrementally "enlarged" to finally represent D^rMOT_{f,Z} or D^MOT_{f,Z}. If in the course
of the fault simulation process the i-th PO is reached during time frame t, the observability
of the activated fault f is checked and, depending on the current test strategy, Detect_{f,Z} is
modified as follows. For rMOT, provided the fault-free output o_i(p, t) is a constant function in {0, 1},
Detect_{f,Z} ← Detect_{f,Z} · [ o_i(p, t) ≡ o_i^f(q, t) ].
If Detect_{f,Z} is evaluated to 0 the fault is marked as rMOT-detectable.
Note that the OBDD-representations needed here are already provided by the event-driven
SFP of B-FS.
For MOT, besides the correct output function o_i(p, t) we have to compute the faulty output
function o_i^f(q, t). Thus a second set of Boolean variables q = (q_1, ..., q_m) for the memory elements of
the faulty circuit is required. Again we profit from the fact that the OBDD-representation
for o_i^f(p, t) with the variable set p is already computed by the event-driven SFP
of B-FS: We obtain o_i^f(q, t) from o_i^f(p, t) by a compose operation on the corresponding
OBDDs, which basically replaces p_i by q_i for all i. This is much more efficient than computing
o_i^f(q, t) separately and thereby unnecessarily increasing the memory demand of the
OBDD-manager, since equivalent functions would have to be stored twice, once for each
variable set; moreover, the SFP method could not be applied directly in that case. Finally, we compute
Detect_{f,Z}(p, q) ← Detect_{f,Z}(p, q) · [ o_i(p, t) ≡ o_i^f(q, t) ].
If Detect_{f,Z} is evaluated to 0 the fault is marked as MOT-detectable.
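One possible realization of this update step on top of CUDD is sketched below; the variable numbering (p_i at indices 0, ..., m-1 and q_i at indices m, ..., 2m-1) and the surrounding bookkeeping are assumptions of this sketch, not the implementation used in H-FS.

#include "cudd.h"

// detect := detect * [ o_i(p,t) XNOR o_i^f(q,t) ] for the MOT strategy.
DdNode* updateDetectMOT(DdManager* mgr, DdNode* detect,
                        DdNode* o_good_p,   // o_i(p,t), over the variables p_1, ..., p_m
                        DdNode* o_fault_p,  // o_i^f(p,t), as delivered by the SFP of B-FS
                        int m)
{
  DdNode* o_fault_q = o_fault_p;            // rename p_i -> q_i by composition
  Cudd_Ref(o_fault_q);
  for (int i = 0; i < m; ++i) {
    DdNode* tmp = Cudd_bddCompose(mgr, o_fault_q, Cudd_bddIthVar(mgr, m + i), i);
    Cudd_Ref(tmp);
    Cudd_RecursiveDeref(mgr, o_fault_q);
    o_fault_q = tmp;
  }
  DdNode* equal = Cudd_bddXnor(mgr, o_good_p, o_fault_q);    // outputs agree
  Cudd_Ref(equal);
  DdNode* result = Cudd_bddAnd(mgr, detect, equal);
  Cudd_Ref(result);
  Cudd_RecursiveDeref(mgr, equal);
  Cudd_RecursiveDeref(mgr, o_fault_q);
  return result;   // MOT-detectable as soon as result == Cudd_ReadLogicZero(mgr)
}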
Besides the usual advantages of a symbolic approach compared to explicit enumeration tech-
niques, the integration of the approach in the hybrid fault simulator H-FS allows the application
of the MOT strategy even to large circuits. If the space requirements of the symbolic fault simulation
exceed a given limit the hybrid fault simulator changes to the SOT strategy and works
as described in Section 4.1. After a few simulation steps using e.g. the three-valued logic, which
usually reduces the space requirements for the subsequent symbolic simulation, H-FS returns to
the MOT strategy again. In doing so, the detection function Detect f;Z has to be re-initialized
with the constant function 1.
Concerning efficiency and accuracy we want to make the following points: MOT works more
accurately than rMOT. On the other hand, the space requirements of MOT are larger, because
MOT requires different variables for encoding the initial state of the fault-free and the faulty
circuit. rMOT uses the output value of the fault-free circuit only if it represents a constant
function. Therefore, no additional set of variables is used, and Detect f;Z usually is smaller. A
further important advantage of rMOT is that it allows the same test evaluation method as SOT
but achieves higher fault coverages. This means there is an advantage in fault coverage without
any drawback for test evaluation, as explained next.
Let (c(1), ..., c(n)) be the output sequence which is obtained by applying Z to the circuit
under test. Then test evaluation requires the decision whether or not the circuit under test is
faulty.
In case of a test sequence which is determined with respect to SOT or rMOT the test
evaluation can easily be done: Only a single (partially defined) output sequence of the
fault-free circuit has to be compared with the output sequence of the circuit under test, i.e. the
circuit under test is declared faulty if there are t ≤ n and i ≤ l with o_i(t) ∈ {0, 1} and
c_i(t) ≠ o_i(t).
In case of a test sequence which is determined with respect to MOT the test evaluation is
more complicated. The implementation proposed in [27] requires checking whether the output
sequence (c(1), ..., c(n)) is contained in the set of output sequences caused by the different initial
states of the fault-free circuit. Since the number of output sequences may be exponential in
the number of memory elements, test evaluation may be very time-consuming. To reduce the
time requirements we propose the comparison of the sequence (c(1), ..., c(n)) with the symbolic
representation of the fault-free output sequence. The comparison can be done by evaluating
step by step the product
∏_{t=1}^{n} ∏_{i=1}^{l} ( o_i(p, t) ≡ c_i(t) ).
If the result of this computation is 0 the circuit under test is faulty.
in the next section assure that this symbolic test evaluation in many cases requires very small
resources in both time and space.
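This step-by-step evaluation can be sketched in C++ on top of CUDD as follows; obs[t][i] denotes the observed response c_i(t) of the circuit under test and good[t][i] the symbolic fault-free output o_i(p,t), both assumed to be available, and reference counting is simplified.

#include <cstddef>
#include <vector>
#include "cudd.h"

// Returns true iff the observed output sequence cannot be produced by the fault-free
// circuit for any initial state, i.e. the circuit under test is faulty.
bool motTestEvaluation(DdManager* mgr,
                       const std::vector<std::vector<DdNode*>>& good,  // o_i(p,t)
                       const std::vector<std::vector<bool>>& obs)      // c_i(t)
{
  DdNode* product = Cudd_ReadOne(mgr);
  Cudd_Ref(product);
  for (std::size_t t = 0; t < obs.size(); ++t)
    for (std::size_t i = 0; i < obs[t].size(); ++i) {
      DdNode* lit = obs[t][i] ? good[t][i] : Cudd_Not(good[t][i]);     // [o_i(p,t) = c_i(t)]
      DdNode* tmp = Cudd_bddAnd(mgr, product, lit);
      Cudd_Ref(tmp);
      Cudd_RecursiveDeref(mgr, product);
      product = tmp;
      if (product == Cudd_ReadLogicZero(mgr)) return true;             // early exit: faulty
    }
  return false;   // some initial state explains the observed sequence
}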
5 Experimental Results
To investigate the performance of our approach we implemented hybrid fault simulation with
respect to SOT, rMOT and MOT in the programming language C++. The measurements were
performed on a SUN Ultra 1 Creator with 256 Mbytes of memory. For our experiments, we
considered the ISCAS-89 benchmark suite [4]. A space limit S_max of 500,000 OBDD-nodes
(300,000 for the circuits s9234.1 and larger) was used to ensure that the procedures of H-FS
based upon OBDDs work efficiently. This number of nodes guarantees that small and mid-size
circuits can be simulated fully symbolically. For larger circuits we noticed that increasing the
node limit does not help to increase efficiency or accuracy. Therefore, a reduced number of
nodes guarantees that execution time otherwise wasted is saved.
We give a short overview of the sets of experiments performed. At first, SOT fault coverage
and execution times for a number of benchmark circuits with respect to the deterministically
computed patterns of HITEC [25] (Table 3) are analyzed. Surprisingly, for some benchmarks,
mode MB is faster than both other modes. To explain these execution times, a closer look at
the number of gate evaluations during the simulation is taken in Table 4. The test patterns of
HITEC are determined based on BX . Thus, the special abilities of a symbolic simulation could
not be considered. To do so, test patterns of a symbolic ATPG tool [17] have been used for
Table
5. Table 6 summarizes the results for random patterns applied to circuits hard to initialize
with three-valued random pattern simulation. The graph shown in Figure 8 depicts the possible
large gap of fault coverage between the three simulation modes for one of these circuits.
Finally, Tables 7 and 8 collect the advantages of (r)MOT. The first table considers the
deterministic patterns of [17]. In the second table 500 random patterns are used. The MOT
test evaluation is considered in Table 9.
5.1 H-FS and SOT
Fault coverage [%] | CPU time [sec]
Cct. |Z| MX MBX MB | MX MBX MB
s838.1 26 5.16 5.16 5.16 0.48 4.74 97.11
Table 3: SOT results for test sequences generated by HITEC.
Gate evaluations [in units of 10,000]
Cct. MX MBX MB
s386 4.81 8.04 6.42
s400 449.44 449.61 239.85
s1494 193.24 194.82 41.15
Table 4: Number of gate evaluations for test sequences generated by HITEC.
Fault coverage [%] | CPU time [sec]
Cct. |Z| MX MBX MB | MX MBX MB
s400 691 90.09 90.09 92.22 1.98 2.00 [1] 16.23
s953 196 8.34 25.58 99.07 5.72 25.58 [18] 2.96 [57]
Table 5: SOT results for deterministic patterns generated by Sym-ATPG.
The fault coverages and execution times for several benchmark circuits for deterministic test
sequences computed by HITEC [25] are shown in Table 3. jZj denotes the length of the test
sequence. Comparing the fault coverages determined by H-FS working in different modes we
observed that, as expected, the fault coverage determined in mode MB is higher than the lower
bounds determined in modes MX or MBX . The fault coverages determined by H-FS in modes
MX and MBX are equal for all circuits except two. This is not surprising, because using a
deterministic test sequence after a few input vectors the fault-free circuit is initialized and H-FS
automatically switches to mode MX . For MBX and MB the simulation step after which H-FS
works in mode MX due to the initialized state vectors is given in square brackets. Note that we
are now able to classify the accuracy of a fault simulation procedure based upon the three-valued
Fault coverage [%] | CPU time [sec]
Cct. |Z| MX MBX MB | MX MBX MB
1000 0.00 21.45 100.00 6.40 172.99 79.44
s953 100 8.34 33.55 48.84 2.90 20.63 10.57
500 8.34 59.59 86.28 14.37 91.57 12.66
1000 8.34 62.56 90.92 30.65 172.88 14.14
500 59.67 59.77 59.90 13.32 27.73 70.91
s9234.1 100 5.28 5.47 5.47 30.00 415.87 1411.60
500 5.28 5.47 5.47 148.45 1583.88 4161.27
1000 5.37 5.56 5.64 300.07 3205.65 8453.75
500 8.74 12.23 12.55 386.17 6149.53 5855.76
1000 8.98 13.65 13.94 743.01 9161.44 8655.97
500 19.67 20.08 20.11 293.19 473.42 5055.48
1000 22.99 23.36 23.41 547.70 957.79 9665.74
500 3.53 4.99 4.99 835.36 9527.66 10137.90
s38584.1 100 26.86 28.66 28.82 317.16 484.93 1158.75
1000 52.08 52.37 52.49 1771.85 1976.83 5158.50
Table 6: Results for random patterns and hard-to-initialize circuits.
logic by comparing the fault coverage determined in mode MX with the exact fault coverage
determined in mode MB . Such exact results which were obtained without a temporary change
to the three-valued logic during hybrid fault simulation are indicated by an asterisk. For the first
time it is possible to show that for half of the benchmark circuits considered in Table 3 the exact
fault coverage with respect to the patterns from [25] is already computed by mode MX . For the
other half of the circuits, the gap between the exact fault coverage and the three-valued one is
very small.
Comparing the execution times, we observed that only one third of the simulations in mode
MB are considerably slower than those in modes MX or MBX. At first sight, one would expect
MB to generally be slower than MX and MBX. But for circuits s820, s832, s1488, and s1494,
mode MB is even faster than both other modes! This can be explained by Table 4, which shows
the number of gate evaluations performed during the simulation of the HITEC test sequences
given in units of 10 000 gate evaluations. For almost all circuits H-FS performs far fewer gate
evaluations in mode MB than working in the other modes, because besides the fault-free circuit
most of the faulty circuits are initialized during simulation. In mode MBX , H-FS performs the
Cct. | |F| |F_u| | Faults detected: SOT rMOT MOT | CPU time [sec]: SOT rMOT MOT
s953 1079 989 979 979 979 3.47 5.03 9.73
Table 7: Comparison of SOT with rMOT and MOT for patterns generated by Sym-ATPG.
largest number of gate evaluations due to the modified single-fault propagation. Of course, a
gate evaluation performed during an OBDD-based simulation is much more expensive than a
gate evaluation performed by a simulation based on the three-valued logic. But a smaller number
of gate evaluations can neutralize these more expensive OBDD evaluation costs. From this it
follows that mode MB can accelerate the fault simulation. Note that H-FS working in mode MX
is very fast for deterministic test sequences. In many examples its efficiency is roughly
comparable with that of the fault simulators published in [3, 24, 12].
Since HITEC is based on BX , it is not surprising that the symbolic simulation modes of H-FS
do not provide an essential advantage. Therefore, in Table 5 test sequences generated by using
symbolic methods during ATPG ("Sym-ATPG") are considered [17]. The underlying (symbolic)
simulator used there is H-FS, as proposed here. Table 5 shows that for nearly all circuits mode
MB improves the fault coverage (up to 2.2%). Moreover, for some hard-to-initialize circuits
(s953, s510) the gap in fault coverage between modes MX, MBX, and MB is significantly higher.
Another important observation is that circuit s510 does not contain any fault that is redundant
with respect to SOT. Consequently, an application of the expensive multiple observation
time test strategy as proposed in [27] is not necessary. Likewise, a full-scan approach as proposed
in [11, 13] is also not necessary for reasons of fault coverage. However, in [13] the test length
for the full-scan version of s510 is 90 patterns, whereas here 245 patterns are necessary to also
achieve 100% fault coverage. In [11] 968 patterns have been computed for 100% fault coverage.
Further known hard-to-initialize circuits are considered in Table 6. Moreover, in [16] it was
shown that some of these circuits cannot be initialized, not even symbolically. Random test
sequences have been used for the different modes of H-FS. For all circuits, H-FS increases the
fault coverage working in mode MB . For instance, consider the fault coverages obtained for
circuit s510. After simulating a random sequence of length 1000 we get a fault coverage of
100% again. This fault coverage for random patterns is much better than the fault coverage
determined by the ATPG procedure VERITAS [10], which only achieves a fault coverage of 93.3%
with a test length of 3027, possibly due to the non-symbolic fault simulation used there. The
Cct. | |F| |F_u| | Faults detected: SOT rMOT MOT | CPU time [sec]: SOT rMOT MOT
26 26 95.40 3656.37 76.59
s9234.1 6927 6561 13 19 19 4448.82 4487.38 3877.41
Table 8: Comparison of SOT with rMOT and MOT for 500 random patterns.
small difference in the execution times for circuit s510 for fault simulation with 500 and 1000
patterns is explained by the fact that during the simulation, after 553 input vectors the state
vector of the fault-free circuit and the state vectors of all faulty circuits were initialized and
H-FS continues to work in mode MX .
In contrast to other procedures using symbolic methods, we are also able to perform a more
accurate fault simulation for the largest benchmark circuits. For instance, using H-FS in mode
MB the fault coverage achieved for circuit s13207 is approximately 5% higher than using MX .
Furthermore, the table also shows a more general behavior of modes MBX and MB: they detect
faults earlier in the test sequence, see e.g. s510, s953, s38584.1. Thus, a given level of fault
coverage is obtained with far fewer random patterns.
For a direct comparison of the accuracy of H-FS working in different modes consider Figure 8.
It shows the fault coverage as a function of the test sequence length for circuit s953 depending
on the different modes. The resulting graph illustrates the gap between the exact fault coverage
determined in mode MB and the lower bounds computed in modes MX or MBX .
Cct. Max. product size CPU time [sec]
s13207.1 16926 9.36
s38584.1 173466 85.17
Table 9: Results for MOT test evaluation for 500 random patterns.
Figure 8: Dependence of fault coverage on the working mode for circuit s953 (fault coverage over the test sequence length for modes MX, MBX, MB).
5.2 H-FS and rMOT and MOT
To compare the performance and the accuracy of the different observation strategies we performed
two sets of experiments. Firstly, we took the deterministic patterns (Table 7), already
used for Table 5. Secondly, randomly determined test sequences of length 500 were used (Ta-
ble 8). The experiments are separated into four parts: First, all three-valued SOT-detectable
faults are eliminated. Then, a symbolic random fault simulation based on the SOT, rMOT
and MOT strategies is performed. Note that all three symbolic simulations and the initial
three-valued simulation use the same (randomly determined) test sequence. |F| denotes the
number of faults. |F_u| denotes the number of faults that were not detected by the three-valued
fault simulation. For each strategy the results are given with respect to this set of remaining
faults. Again, exact computations are denoted by an asterisk. Due to the OBDD-based simulation all
strategies permit a further classification of detectability of faults.
Obviously, MOT has no advantage over rMOT for the deterministic patterns (Table 7). This
becomes clear when looking at the time steps at which the fault-free circuit is initialized (see Table 5,
Column 7). With an initialized fault-free circuit, there is no difference between MOT and
rMOT. However, this table shows that even for patterns computed for a symbolic evaluation
the more advanced test strategies rMOT and MOT will increase the fault coverage with almost
no overhead for simulation time.
For the randomly determined patterns (Table 8), in general, fault simulation based on MOT
detects more faults than fault simulation based on rMOT, and rMOT detects more faults than
a SOT-based fault simulation. In all but eight examples we even succeeded in computing the
exact MOT fault coverage of the test sequences. On the other hand, in most cases where MOT
was not exact we succeeded at least in improving the accuracy compared to the three-valued
fault simulation and the SOT approach as well. For all but six circuits the rMOT strategy
computed the same fault coverage as the MOT strategy. However, for these six circuits MOT
does detect considerably more faults than rMOT.
Using rMOT instead of SOT also led to an improvement in execution time for a number of
circuits. For all other circuits, with the exception of s526n, s3384, and s5378, the simulation time
of rMOT is about the same as that of SOT. Although many OBDD-operations must be
performed for MOT, this strategy is faster than rMOT (SOT) for seven (eleven) circuits. In
general, this happens for circuits for which MOT computes a higher fault coverage than rMOT
(SOT) and due to earlier detection of the faults.
In order to investigate the space and time needed for the test evaluation of MOT, we measured
the maximal size of the symbolic output-sequence evaluation product (see Section 4.2) and the
necessary execution time. The same 500 random patterns as used for the results of Table 8 have
been used. We considered the circuits for which the MOT strategy detects faults which cannot
be detected either by the SOT or the rMOT strategy. Additionally, to show the feasibility of
the MOT test evaluation the largest benchmark circuits are considered.
In order to estimate the maximum time needed for the test evaluation we computed a possible
test response of the fault-free circuit as follows: (1) Initialize the memory elements of the fault-free
circuit at the beginning of the simulation with random values. Then (2) simulate the test
sequence. Since the output sequence of the circuit under test is correct the test evaluation
does not terminate until the test sequence is fully evaluated. Also note that a test response of
a fault-free circuit under test requires the computation of the product of all symbolic output
values. The maximal size of this product is given in the table together with the execution time.
The experiments show that MOT-test evaluation can efficiently be performed in both time and
space.
6 Conclusions
In this paper we presented the hybrid fault simulator H-FS for synchronous sequential circuits.
It is able to automatically select between three different logics during simulation: the well-known
three-valued logic, a Boolean function logic, and a mixed logic. Consequently, H-FS can profit
from the advantages which are offered by the different logics. On the one hand, it may use the
efficiency of the three-valued fault simulator, on the other hand, it may use the accuracy of the
simulator. Furthermore, the advantages of both strategies are combined in the
mixed-logic simulator.
Experiments have shown that H-FS is able to increase the fault coverage even for the largest
benchmark circuits. Of course, in some cases H-FS requires more time and space than a fault
simulator merely based upon the three-valued logic. On the other hand, the accuracy can be
considerably increased. Moreover, for many benchmark circuits it computes the exact fault
coverage, which was not known before.
The symbolic parts of H-FS can be enhanced by the more advanced Multiple Observation
Time Test Method (MOT) with only a few extensions to the fault detection definition. This
results in a further improvement of fault coverage. Additionally, we showed that test evaluation
can also be performed efficiently for MOT. Moreover, using restricted MOT, which achieves the
same fault coverage as MOT for many circuits, the usual SOT-test evaluation method need not
be modified. Thus, evaluating a test sequence according to rMOT one obtains an advantage
without any drawbacks. Additionally, the fault simulation time with respect to rMOT is often
shorter than that with respect to SOT.
References
On redundancy and fault detection in sequential circuits.
Digital Systems Testing and Testable Design.
FAST-SC: Fast fault simulation in synchronous sequential circuits.
Combinational profiles of sequential benchmark circuits.
Full symbolic ATPG for large circuits.
Accurate logic simulation in the presence of unknowns.
State assignment for initializable synthesis.
Redundancy identification/removal and test generation for sequential circuits using implicit state enumeration.
Synchronizing sequences and symbolic traversal techniques in test generation.
Advanced techniques for GA-based sequential ATPG
PARIS: a parallel pattern fault simulator for synchronous sequential circuits.
New techniques for deterministic test pattern generation.
Sequential circuit test generation using dynamic state traversal.
Sequentially untestable faults identified without search.
On the (non-) resetability of synchronous sequential circuits
Combining GAs and symbolic methods for high quality tests of sequential circuits.
Switching and Finite Automata Theory.
A hybrid fault simulator for synchronous sequential circuits.
Symbolic fault simulation for sequential circuits and the multiple observation time test strategy.
HOPE: An efficient parallel fault simulator for synchronous sequential circuits.
Partial reset: An inexpensive design for testability approach.
The sequential ATPG: A theoretical limit.
HITEC: A test generation package for sequential circuits.
The multiple observation time test strategy.
Fault simulation for synchronous sequential circuits under the multiple observation time testing approach.
On the role of hardware reset in synchronous sequential circuit test generation.
Fault simulation under the multiple observation time approach using backward implication.
A test cultivation program for sequential VLSI circuits.
On the initialization of sequential circuits.
Keywords: symbolic simulation; fault simulation; MOT; BDD; SOT
337198 | Software evolution in componentware using requirements/assurances contracts. | In practice, pure top-down and refinement-based development processes are not sufficient. Usually, an iterative and incremental approach is applied instead. Existing methodologies, however, do not support such evolutionary development processes very well. In this paper, we present the basic concepts of an overall methodology based on component ware and software evolution. The foundation of our methodology is a novel, well-founded model for component-based systems. This model is sufficiently powerful to handle the fundamental structural and behavioral aspects of component ware and object-orientation. Based on the model, we are able to provide a clear definition of a software evolution step.During development, each evolution step implies changes of an appropriate set of development documents. In order to model and track the dependencies between these documents, we introduce the concept of Requirements/Assurances Contracts. These contracts can be rechecked whenever the specification of a component evolves, enabling us to determine the impacts of the respective evolution step. Based on the proposed approach, developers are able to track and manage the software evolution process and to recognize and avoid failures due to software evolution. A short example shows the usefulness of the presented concepts and introduces a practical description technique for Requirements/Assurances Contracts. | INTRODUCTION
Most of today's software engineering methodologies are
This paper originates from the research in the project A1
"Methods for Component-Based Software Engineering" at the
chair of Prof. Dr. Manfred Broy, Institut für Informatik,
Technische Universität München. A1 is part of the "Bayerischer
Forschungsverbund Software-Engineering" (FORSOFT) and supported
by Siemens AG, Department ZT.
based on a top-down development process, e.g., Object
Modeling Technique (OMT) [27], Objectory Process
[15], or Rational Unified Process (RUP) [14]. All
these methodologies share a common basic idea: During
system development a model of the system is built
and stepwise refined. A refinement step adds additional
properties of the desired system to the model. Finally,
the model is a sufficiently fine, consistent, and correct
representation of the system under consideration. It
may be implemented by programmers or even partly
generated. Admittedly, all of these processes support local
iterations; for instance, the RUP allows iterations during
analysis, design, or implementation. However, the
overall process is still based on refinement steps that improve
the specification model and finally end with the
desired system. In formal approaches, like ROOM [3] or
Focus [4], the concept of refinement is even stricter.
These kinds of process models involve some severe draw-
backs: Initially, the customer often does not know all
relevant requirements, cannot state them adequately,
or even states inconsistent requirements. Consequently,
many delivered systems do not meet the customer's ex-
pectations. In addition, top-down development leads
to systems that are very brittle with respect to changing
requirements, because the system architecture and
the involved components are specifically adjusted to the
initial set of requirements. This is in sharp contrast to
the idea of building a system from truly reusable com-
ponents, as the process does not take already existing
components into account. Beyond this, software maintenance and the software life-cycle are not supported. This is extremely critical since, for instance, maintenance nowadays takes on average about 80 percent of the IT budget of Europe's companies, and 20 percent of the user requirements become obsolete within one year [21].
However, software evolution as a basic concept is currently
not well supported. In our opinion, this is partly
due to the lack of a suitable overall componentware
methodology with respect to software evolution. Such
a methodology should at least incorporate the following
parts [26]:
A common system model that provides a well-defined conceptual framework for componentware and software evolution is required as a reliable foundation.
Based on the system model a set of description
techniques for componentware are needed. Developers
need to model and document the evolution of
a single component or a whole system.
Development should be organized according to
a software evolution process. This includes
guidelines for the usage of the description techniques
as well as reasonable evolution steps.
To minimize the costs of software evolution, systems
should be based on evolution-resistant ar-
chitectures. Such architectures contain a common
basic infrastructure for components, like
DCOM [2], CORBA [22], or Java Enterprise
Beans [16]. But even more important are business-oriented
standard architectures that are evolution-resistant.
Finally, all of the former aspects should be supported by tools.
The contribution of this work can be seen from two different
perspectives. From the viewpoint of specification methods, it constitutes a sophisticated basic system
model as solid foundation for new techniques in
the areas of software architectures, componentware, and
object-orientation. From a software engineering per-
spective, it provides a clear understanding of software
evolution steps in an evolutionary development process.
Moreover, it offers a new description technique, called
Requirements/Assurances Contracts. These contracts
can be rechecked whenever the specification of a component evolves. This allows us to determine the impacts
of the respective evolutionary step.
The paper is structured as follows. Section 2 provides
the basic definitions to model dynamics in a component-based
system. In the next section, Section 3, we specify
the observable behavior of an entire component-based
system based on former denitions. In Section 4, we
provide a composition technique that enables us to determine
the behavior of the system from the behavior
of its components. Section 5 will complete the formal
model with a simple concept of types. These types are
described by development documents. Section 6 introduces
our view of development documents and evolution
steps on those documents. In Section 7 we present
the concept of Requirements/Assurances Contracts to
model explicitly the dependencies between development
documents. Section 8 provides a small example to show
the usefulness of the proposed concepts in case of software
evolution. A short conclusion ends the paper.
This section elaborates the basic concepts and notions
of our formal model for component-based systems. The
system model incorporates two levels: The instance-level
represents the individual operational units of a
component-based system that determine its overall be-
havior. We distinguish between component, interface,
connection, and variable instances. We define a number
of relations and conditions that model properties of
those instances. The type-level contains a normalized
abstract description of a subset of common instances
with similar properties.
Although some models for component-based and object-oriented
systems exist, we need to improve them for an
evolutionary approach. Formal models, like for instance
Focus [4] or temporal logic [17], are strongly connected
with refinement concepts (cf. Section 1). Furthermore, these methods do not contain well-elaborated type concepts
or sophisticated description techniques, that are
needed to discuss the issues of software evolution (as in
case of evolution the types and the descriptions are usually
evolved). Moreover, in practice formal methods are
not applicable, since formal models are too abstract and
do not provide a realistic view on today's component-based
systems.
Architectural description languages, like MILs, Rapide,
Aesop, UniCon, are other, less formal approaches. As
summarized in [5] they introduce the concepts of components
and communication between them via connec-
tors, but do not consider all behavior-related aspects
of a component system. In a component-based system
behavior is not limited to the communication between
pairs of components, but also includes changes to the
overall connection structure, the creation and destruction
of instances, and even the introduction of new types
at runtime. In the context of componentware and software
evolution, these aspects are essential because dynamic
changes of a system may happen both during its
construction at design-time as well as during its execution
at runtime, either under control of the system itself
or initiated by human developers.
Other approaches, like pre/post specifications, cannot
specify mandatory external calls that components must
make. This restriction also applies to Meyer's design
by contract [20] and the Java Modeling Language
(JML) [18], although they are especially targeted at
component-based development.
Figure 1: A Component System: Behavioral Aspects
For that reason, we elaborated a novel, more realistic
model. We claim that the presented formal model is powerful enough to handle the most difficult aspects of
component-based systems (cf. Figure 1): dynamically
changing structures, a shared global state, and at last
mandatory call-backs. Thus, we separate the behavior
of component-based systems into these three essential
parts:
Structural behavior captures the changes in the
system structure, including the creation or deletion
of instances and changes in the connection as well
as aggregation structure.
Variable valuations represent the local and
global data space of the system. This enables us
to model a shared global state.
Component communication describes message-based
asynchronous interaction between compo-
nents. Thus, we can specify mandatory call-backs
without problems.
In the following sections we first come up with definitions for these three separate aspects of behavior in
component-based systems.
Components are the basic building blocks of a
component-based system. Each component possesses
a set of local attributes, a set of sub-components, and a
set of interfaces. Interfaces may be connected to other
interface via connections. During runtime some of these
basic building blocks are created and deleted.
In order to uniquely address the basic elements of a component-based system, we introduce the disjoint sets COMPONENT ID, INTERFACE ID, CONNECTION ID, and VARIABLES ID.
As Figure 1 shows, a component-based system may
change its structure dynamically. Some of these basic
elements may be created or deleted (ALIVE). New interfaces
may be assigned to components (ASSIGNED).
Interfaces may be connected to or disconnected from other interfaces (CONNECTED). New subcomponents may be aggregated by existing parent components
(PARENT). The following definitions cover the structural
behavior of component-based systems:
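A possible formalization of these structural relations, in the notation used throughout the paper (the exact signatures are a sketch under assumed naming, not a quotation of the lost definitions), is:
ALIVE := P(COMPONENT ID ∪ INTERFACE ID ∪ CONNECTION ID ∪ VARIABLES ID)
ASSIGNED := P(INTERFACE ID × COMPONENT ID)
CONNECTED := P(INTERFACE ID × INTERFACE ID)
PARENT := P(COMPONENT ID × COMPONENT ID)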
Note that this approach is strong enough to handle not only dynamically changing connection structures in systems but also mobile systems; for instance, it covers
mobile components that migrate from one parent component
to another (PARENT).
Usually, the state space of a component-based system
is not only determined by its current structure but also
by the values of the component's attributes (cf. Figure
1). VALUES denotes the set of all possible values for attributes and parameters. Valuations are in essence mappings of variables (attributes, parameters, etc.) to values of appropriate type (VALUATION). These variables belong to components, characterizing the state of the component (ALLOCATION). The following definitions cover the variable valuations of
component-based systems:
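A sketch of the corresponding definitions, again with assumed signatures rather than the original formulation, could read:
VALUATION := VARIABLES ID → VALUES
ALLOCATION := P(VARIABLES ID × COMPONENT ID)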
Later on we will allow components to change the values
of other components' variables (cf. Section 4). Thus, we can model shared global states as well known from object-oriented systems. Note that we do not elaborate on
the underlying type system of the variables and values
here, but assume an appropriate one to be given.
COMPONENT COMMUNICATION
Based on existing formal system models, e.g. Focus [4],
sequences of messages represent the fundamental units
of communication. In order to model message-based
communication, we denote the set of all possible messages
with M, and the set of arbitrary nite message
sequences with M . Within each time interval components
resp. interfaces receive message sequences arriving
at their interfaces resp. connections and send message
sequences to their respective environment, as given by
the following definition (cf. Figure 1):
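A minimal sketch of such a definition, assuming the sets introduced above, is:
EVALUATION := INTERFACE ID → M*
i.e., an evaluation assigns to each interface the finite message sequence it receives within the current time interval.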
The used message-based communication is asyn-
chronous, like CORBA one-way calls. Hence, call-backs
based on those asynchronous one-way calls can be explicitly specified within our model. However, one cannot model "normal" blocking call-backs as usual in object-oriented programming languages. Our observation shows that call-backs need not be blocking calls. Often call-backs are used to make systems extensible. In layered system architectures they occur as calls from lower into higher layers, in which case they are known as up-calls. These up-calls are usually realized by asynchronous
events (cf. the Layers Pattern in [9]). Another
representative application of call-backs as asynchronous
events is the Observer Pattern [11]. There
the observer may be notified via asynchronous events
if the observed object has changed. To sum up, we
believe call-backs as supported in our model are powerful
enough to model real component-based systems under
the assumption that a middleware supporting asynchronous
message exchange is available.
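To make this asynchronous call-back style concrete, the following minimal Java sketch (hypothetical class and method names, not tied to any particular middleware) notifies observers through message queues instead of blocking calls:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// An observable component that notifies observers via asynchronous one-way messages.
class AsyncObservable {
    private final List<BlockingQueue<String>> observerQueues = new ArrayList<>();

    void register(BlockingQueue<String> queue) { observerQueues.add(queue); }

    // The call-back is a one-way message: the caller never blocks on the observer.
    void changeState() {
        for (BlockingQueue<String> q : observerQueues) {
            q.offer("update");
        }
    }
}

// An observer that consumes notification messages in its own thread.
class AsyncObserver implements Runnable {
    final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

    public void run() {
        try {
            while (true) {
                String msg = inbox.take();   // receive the asynchronous up-call
                System.out.println("observer received: " + msg);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}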
SYSTEM SNAPSHOT
Based on all former definitions we are now able to characterize
a snapshot of a component-based system. Such
a snapshot captures the current structure, variable val-
uation, and actual received messages. Let SNAPSHOT
denote the type of all possible system snapshots:
SNAPSHOT := ALIVE × ASSIGNED × CONNECTED × PARENT × ALLOCATION × VALUATION × EVALUATION
Let SYSTEM denote the infinite set of all possible systems. A given snapshot snapshot_s ∈ SNAPSHOT of a system s ∈ SYSTEM 1 is a tuple that captures the current active sets of components, interfaces, connections,
1 In the remainder of this paper we will use this shortcut.
Whenever we want to assign a relation X to a system s ∈ SYSTEM (or a component c ∈ COMPONENT) we write X_s (X_c).
and variables, the current assignment of interfaces to
components, the current connection structure between
interfaces, the current super-/sub-component relation-
ship, the current assignment of variables to components,
the current values of components, and nally the current
messages for the components.
Similar to related approaches [4], we regard time as an
infinite chain of time intervals of equal length. We use
N as an abstract time axis, and denote it by T for clar-
ity. Furthermore, we assume a time synchronous model
because of the resulting simplicity and generality. This
means that there is a global time scale that is valid for all
parts of the modeled system. We use timed streams, i.e.
finite or infinite sequences of elements from a given domain, to represent histories of conceptual entities that change over time. A timed stream (more precisely, a stream with discrete time) of elements from the set X is an element of the type X^T, with T = N\{0}. Thus, a timed stream maps each time interval to an element of X. The notation x^t is used to denote the element of the valuation x ∈ X^T at time interval t ∈ T.
Streams may be used to model the behavior of sys-
tems. Accordingly, SNAPSHOT T is the type of all system
snapshot histories or simply the type of the behavior
relation of all possible systems:
SNAPSHOT^T := ALIVE^T × ASSIGNED^T × CONNECTED^T × PARENT^T × ALLOCATION^T × VALUATION^T × EVALUATION^T
Let Snapshot^T_s ⊆ SNAPSHOT^T be the behavior of a system. A given snapshot history snapshot_s ∈ Snapshot^T_s is a timed stream of tuples that captures the changing snapshots snapshot^t_s.
Obviously, a couple of consistency conditions can be defined on such a formal behavior specification Snapshot^T_s.
For instance, we may require that all assigned interfaces
are assigned to an active component:
∀ (i, c) ∈ assigned^t_s : c ∈ alive^t_s
Furthermore, components may only be connected via
their interfaces if one component is the parent of the
other component or if they both have the same parent
component. Connections between interfaces of the same
component are also valid:
∀ (i1, i2) ∈ connected^t_s with (i1, c1), (i2, c2) ∈ assigned^t_s :
c1 = c2 ∨ (c1, c2) ∈ parent^t_s ∨ (c2, c1) ∈ parent^t_s ∨ ∃ p : (p, c1), (p, c2) ∈ parent^t_s
We can imagine an almost infinite set of those consistency
conditions. A full treatment is beyond the
scope of this paper, as the resulting formulae are rather
lengthy. A deeper discussion of this issue can be found
in [1].
In the previous sections we have presented the observable
behavior of a component-based system. This behavior
is a result of the composition of all component
behaviors. To show this coherence we first have to provide behavior descriptions of a single component. In practice, transition relations are an adequate behavior description technique. In our formal model we use a novel kind of transition relation: in contrast to "normal" transition relations, which relate a predecessor state to a successor state, the presented transition relation relates a certain part of the system-wide predecessor state to a certain part of the wished-for system-wide successor state:
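A plausible formal counterpart, assuming the SNAPSHOT type introduced in the previous section (the exact formulation is a sketch, not the original definition), is:
BEHAVIOR := P(SNAPSHOT × SNAPSHOT), with behavior_c ⊆ SNAPSHOT × SNAPSHOT for every component c.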
Let behavior_c ∈ BEHAVIOR be the behavior of a component c ∈ COMPONENT. The informal meaning of each
tuple in behavior_c is: if the specified part of the system-wide predecessor state fits (given by the first snapshot), the component wants the system to be in the system-wide successor state in the next step (given by the second snapshot). Consequently, we need some specialized runtime system that collects, at each time step, all wished successor states from all components and composes a new well-defined successor state for the whole system.
The main goal of such a runtime system is to determine the system snapshot snapshot^{t+1}_s from the snapshot snapshot^t_s and the set of behavior relations behavior_c of all components. In essence, we can provide a formula to calculate the system behavior from the initial configuration snapshot^0_s, the behavior relations behavior_c, and external stimulation via messages at free interfaces. Free interfaces are interfaces that are not connected with other interfaces and thus can be stimulated from the environment.
First we have to calculate all transition-tuples of all active
components:
behavior^t_s := ∪ { behavior_c | c ∈ alive^t_s }
Now, we can calculate all transition tuples of the active components that fit the actual system state. Let transition^t_s be the set of all those transition tuples that could fire:
transition^t_s := { (pre, post) ∈ behavior^t_s | pre fits snapshot^t_s }
Before we can come up with the final formula for the calculation of the system snapshot snapshot^{t+1}_s, we need a new operator on relations. This operator takes a relation X and replaces all tuples of X with tuples of Y if the first element of both tuples is equal.
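One way to write such an override operator (the symbol ⊕ and this exact formulation are assumptions, not taken from the original formula) is:
X ⊕ Y := { (a, b) ∈ X | there is no b' with (a, b') ∈ Y } ∪ Y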
At last, we are now able to provide the complete formula to determine the system snapshot snapshot^{t+1}_s:
snapshot^{t+1}_s := (alive^{t+1}_s, assigned^{t+1}_s, connected^{t+1}_s, parent^{t+1}_s, allocation^{t+1}_s, valuation^{t+1}_s, evaluation^{t+1}_s) with
alive^{t+1}_s := alive^t_s ⊕ π_alive(transition^t_s)
assigned^{t+1}_s := assigned^t_s ⊕ π_assigned(transition^t_s)
connected^{t+1}_s := connected^t_s ⊕ π_connected(transition^t_s)
parent^{t+1}_s := parent^t_s ⊕ π_parent(transition^t_s)
allocation^{t+1}_s := allocation^t_s ⊕ π_allocation(transition^t_s)
valuation^{t+1}_s := valuation^t_s ⊕ π_valuation(transition^t_s)
evaluation^{t+1}_s := evaluation^t_s ⊕ π_evaluation(transition^t_s)
Intuitively spoken, the next system snapshot (snapshot^{t+1}_s) is a tuple. Each element of this tuple, for instance alive^{t+1}_s, is determined simply by merging the former function (alive^t_s) and the "delta-function" derived from transition^t_s. This "delta-function" includes all "wishes" of all transition relations that fire.
The basic concepts and their relations as covered in the previous sections provide mathematical definitions for the constituents of a component-based system at runtime. However, in order to present an adequate model useful for practical development, we introduce the concept of a type. Let TYPE := COMPONENT TYPE ∪ INTERFACE TYPE ∪ CONNECTION TYPE ∪ VARIABLES TYPE be the infinite set of all types. A type models all common properties of a set of instances in an abstract way. TYPE OF assigns to each instance (component, interface, connection, and variable) its corresponding type:
Let PREDICATE be the infinite set of all predicates that might ever exist. Predicates (boolean expressions) on a type are functions from instances of this type to BOOLEAN. For instance, for a component c ∈ COMPONENT, membership of a given transition tuple in behavior_c is a predicate on the type of c. This is one of the simplest predicates we can imagine. It provides a direct mapping from the type level to the instance level: the predicate is true if the given transition is part of the component's behavior. Now, we can define functions that provide an abstract description for all existing types 3:
2 The "standard" projection notation denotes the set of m-tuples obtained by projecting the relation R of arity r onto the selected components.
3 P(A) denotes the powerset of the set A.
6 SOFTWARE EVOLUTION
Usually, during the development of a system, various development
documents are created. These development
documents are concrete descriptions, in contrast to the
abstract descriptions linked to types as discussed in the
last sections. Such a development document is a separate unit that describes a certain aspect of, or "view" on, the system under development. In componentware we
typically have the following kinds of documents:
Structural Documents describe the internal
structure of a system or component. The structure
of a component consists of its subcomponents
and the connections between the subcomponents
and with the supercomponent, e.g. aggregation or
inheritance in UML Class Diagrams [23] or architecture
description languages [5].
Interface Documents describe the interfaces of
components. Currently most interface descriptions
(e.g. CORBA IDL [24]) only allow one to specify
the syntax of component interfaces. Enhanced descriptions
that also capture behavioral aspects use
pre- and post-conditions, e.g. Eiffel [20] or the Java
Modeling Language [18].
Protocol Documents describe the interaction between
a set of components. Typical interactions
are messages exchange, call hierarchies, or dynamic
changes in the connection structure. Examples of
protocol descriptions are: Sequence Diagrams in
UML [23], Extended Event Traces [6], or Interaction
Interfaces [7].
Implementation Documents describe the implementation
of a component. Program code is the
most popular kind of those descriptions, but we can
also use automata, as in [28, 12], or some kind of greybox specifications [8]. Especially in componentware
the implementation of a component can
be (recursively) described by a set of structural, in-
terface, protocol, and implementation documents.
During development we describe a system (or, more exactly, the types of the system) by sets of those documents. Let DOC be the infinite set of all possible documents. Each type of a component-based system is described by a set of those development documents:
The semantics of a given set of development documents
is simply a mapping from this set of documents to a
set of predicates. Thus, we can dene a semantic function
which assigns to a given set of documents a set of
properties characterizing the system:
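A plausible signature for this semantic function, assuming the sets introduced above, is:
SEM := P(DOC) → P(PREDICATE), with sem_s ∈ SEM for a system s.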
The semantic mapping from the concrete descriptions of a system (doc_s ⊆ DOC) into a set of predicates is correct if these predicates are equal to the predicates of the abstract description of each t ∈ TYPE. More formally, the semantic mapping is correct if the following
Figure 2: Software Evolution during System Development
condition holds:
∀ t ∈ TYPE : sem_s(described_by_s(t)) = description_s(t)
As already discussed in Section 1, the ability for software
to evolve in a controlled manner is one of the most
critical areas of software engineering. Developers need
support for an evolutionary approach. Based on the semantic
function SEM, we are able to formulate the concept
of an evolution step. Figure 2 shows three typical
evolution steps during system development. An evolution
step in our sense causes changes in the set of development
documents within a certain time step as given
by the functions of type EVOLVE:
We call an evolution step of a set of documents doc_s
refinement, if the condition sem_s(doc_s) ⊆ sem_s(evolve_s(doc_s)) holds,
abstraction, if the condition sem_s(doc_s) ⊇ sem_s(evolve_s(doc_s)) holds,
strict evolution, if the condition sem_s(doc_s) ⊄ sem_s(evolve_s(doc_s)) ∧ sem_s(evolve_s(doc_s)) ⊄ sem_s(doc_s) ∧ sem_s(doc_s) ∩ sem_s(evolve_s(doc_s)) ≠ ∅ holds,
total change, if the condition sem_s(doc_s) ∩ sem_s(evolve_s(doc_s)) = ∅ holds.
Obviously, we should pay the most attention to the
strict evolution. In the remainder of the paper we use evolution and strict evolution as synonyms, unless we explicitly distinguish the various kinds of evolution steps. A more detailed discussion of the differences between evolution and refinement steps can be found
in [26].
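To make the four cases concrete, the following Java sketch (hypothetical names; the predicate sets are simply modeled as sets of strings, which is an assumption made only for illustration) classifies an evolution step from the two semantics sets:

import java.util.HashSet;
import java.util.Set;

final class EvolutionStep {
    // Classifies an evolution step from sem(doc) and sem(evolve(doc)),
    // following the four conditions above. Equal sets are reported as
    // "refinement" (they satisfy both the refinement and the abstraction condition).
    static String classify(Set<String> semOld, Set<String> semNew) {
        if (semNew.containsAll(semOld)) return "refinement";
        if (semOld.containsAll(semNew)) return "abstraction";
        Set<String> common = new HashSet<>(semOld);
        common.retainAll(semNew);
        return common.isEmpty() ? "total change" : "strict evolution";
    }
}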
7 REQUIREMENTS/ASSURANCES CONTRACTS
If a document changes via an evolution step, the consequences
for documents that rely on the evolved document
are not clear at all. Normally, the developer
who causes the evolution step has to check whether the
other documents are still correct or not. As the concrete
dependencies between the documents are not explicitly formulated, the developer usually has to go into
the details of all concerned documents. For that reason
we claim that an evolution-based methodology must be
able to model and track the dependencies between the
various development documents.
To reach this goal we have to make the dependencies between
the development documents more explicit. Cur-
rently, in description techniques or programming languages
dependencies between different documents can
only be modeled in an extremely rudimentary fashion.
For instance, in UML [23] designers can only specify the
relation uses between documents or in Java [10] programmers
have to use the import statement to specify
that one document relies on another.
Surely, more sophisticated specification techniques exist, e.g. Evolving Interoperation Graphs [25], Reuse
Contracts [29, 19], or Interaction Contracts [13]. Evolving
Interoperation Graphs provide a framework for
change propagation if a single class changes. These
graphs take only into account the syntactical interface
of classes and the static structure (class hierarchy) of
the system, but not the behavioral dependencies.
Reuse Contracts address the problem of changing implementations
of a stable abstract specification. There, evolution conflicts in the scope of inheritance are discussed, but not conflicts in component collaborations. This might be helpful to predict the consequences of evolving a single component, but the effects for other
components or the entire system are not clear at all.
Finally, Interaction Contracts are used to specify the
collaborations between objects. Although the basic idea
of interaction contracts (to specify the behavioral dependencies between objects) seems to be a quite good suggestion, this approach takes neither evolution nor componentware sufficiently into account. Interaction contracts strongly couple the behavior specification of the component seen as an island and the behavioral dependencies to other components. Hence, the impacts of an evolutionary step cannot be determined.
Figure 3: Requirements/Assurances Contracts between Development Documents of Component Types
To avoid these drawbacks and to support an evolution-based development process as well as possible, we propose to decouple the component island specification from the behavioral dependencies specification. The following two types of functions allow us to determine the behavioral specification of a single component seen as an island:
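A plausible signature for these two function types, consistent with how they are applied below but assumed rather than quoted, is:
REQUIRES := COMPONENT TYPE → (P(DOC) → P(PREDICATE))
ASSURES := COMPONENT TYPE → (P(DOC) → P(PREDICATE))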
Intuitively, a function requires_s ∈ REQUIRES calculates, for a given set of documents doc_s ∈ P(DOC), the set of predicates the component type ct ∈ COMPONENT TYPE expects from its environment. The function assures_s ∈ ASSURES calculates the set of predicates the component type provides to its environment.
We need specialized description techniques to model the
required and assured properties of a certain component
explicitly within this development document. Such description
techniques must be strongly structured. They
should have at least two additional parts capturing the
set of required and assured properties (cf. Figure 3):
Requirements: In the requirements part the designer
has to specify the properties the component
needs from its environment.
Assurances: In the assurances part the designer
describes the properties the component assures to
its environment, assuming its own requirements are
fulfilled.
Once these additional aspects are specied (formally
given by the functions requires s and assures s ), the designer
can explicitly state the behavioral dependencies
between the components by specifying for each component
the assurances that guarantee the requirements.
We call such explicit formulated dependencies Require-
ments/Assurances Contracts (r/a-contracts). Figure 3
illustrates the usage of those contracts. The three development
documents include the additional requirements
(white bubble) and assurances (black bubble) parts. Developers
can explicitly model the dependencies between
the components by r/a-contracts shown as double arrowed
lines. Formally, an r/a-contract is a mapping between the required properties of a component and the
assured properties of other components:
For a given contract contract_s ∈ CONTRACT the predicate fulfilled_s ∈ FULFILLED holds if all required properties of a component are assured by properties of other components:
fulfilled_s(ct)(requires_s(ct)(described_by_s(ct))) ⟺
requires_s(ct)(described_by_s(ct)) ⊆ ∪ { assures_s(x)(described_by_s(x)) | x ∈ COMPONENT TYPE, x ≠ ct }
In the case of software evolution the designer or a tool has to re-check whether the requirements of components that rely on the assurances of the evolved component are still guaranteed. Formally, the tool has to re-check whether the predicate
fulfilled_s(ct)(requires_s(ct)(evolve_s(described_by_s(ct))))
still holds.
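A minimal sketch of such a tool-supported re-check in Java, with predicates again modeled as plain strings and all names hypothetical:

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

final class ContractChecker {
    // Returns true if every predicate required by the component type 'ct'
    // is assured by at least one other component type in the system.
    static boolean fulfilled(String ct,
                             Map<String, Set<String>> requires,
                             Map<String, Set<String>> assures) {
        Set<String> available = new HashSet<>();
        for (Map.Entry<String, Set<String>> e : assures.entrySet()) {
            if (!e.getKey().equals(ct)) {
                available.addAll(e.getValue());
            }
        }
        return available.containsAll(requires.getOrDefault(ct, Set.of()));
    }
}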
For instance, in Figure 3 component C has changed over
time. The designer has to validate whether Contract B
still holds. More exactly, he or she has to check whether
the requirements of component C are still satisfied by the
assurances of component A or not.
The advantages of r/a-contracts can only be fully realized if we have adequate description techniques to
specify the requirements and assurances of components
within development documents. In the next section we
provide a small sample including some simple description
techniques to prove the usefulness of r/a-contracts.
To illustrate the practical relevance of the proposed r/a-
contracts we want to discuss a short example. Consider
a windows help screen as shown in Figure 4. It contains
two components: a text box and a list box control
element. The content of the text box restricts the presented
help topics in the list box. Whenever the user
changes the content of the text box|simply by adding
a single character|the new selection of help topics is
immediately presented in the list box.
Figure 4: A Short Sample: Windows Help Screen
A simple implementation of such a help screen may contain
the two components HelpText and HelpList. The
collaboration between these two components usually follows
the Observer Pattern [11]. In the case of an \ob-
servable" component (HelpText) changing parts of its
state, all \observing" components (HelpList) are notied
Components in a system often evolve. To make the windows
help screen more evolution resistant, one should
specify the help screen in a modular fashion. Thus, we
use two different kinds of descriptions as proposed in
Section 7:
Descriptions of the behavior of a single component
seen as an island start with COMPONENT and
descriptions of the behavioral dependencies between
components start with
RA-CONTRACT.
In the example description technique we use, keywords
are written with capital letters. Each component island
specification consists of two parts: The first part is the REQUIRES part containing all interfaces the component needs. For each interface the required predicates (syntax and behavior) are explicitly specified. The second part is the ASSURES part capturing
all interfaces the component provides to its environ-
ment. For each interface the assured predicates (again
syntax and behavior) are explicitly described.
The notation and semantics within these parts are the same as those used for interaction contracts [13]. The language only supports the actions of sending a message M to a component C, denoted by C!M, and the change of a value v. The ordering of actions can be explicitly given by the operator ";", by an IF-THEN-ELSE construct, or be left unspecified by the operator ||. The language also provides a repetition construct that repeats an expression e, separated by a given operator, for all variables v which satisfy a condition c.
Now, we can start out with a textual specification of
the requirements and assurances of the two components
HelpText and HelpList|the components island spec-
ication:
COMPONENT HelpText
REQUIRES INTERFACE Observer
WITH METHODS
ASSURES INTERFACE TextBox
WITH LOCALS
WITH METHODS
The component HelpText requires an interface supporting
the method update():void. Note that, in the context
of this specification the required interface is named
Observer. This represents neither a global name nor a
type of the required interface. Later, we can explicitly
model the mapping between the various required and
assured interface and method names via the proposed
r/a-contracts. Additionally, the component HelpText
assures an interface TextBox with the two methods
getText():String and addText(t:String):void.
When addText(t) is called the method update() is invoked
for all observers.
Correspondingly, the component HelpList requires an
interface named Observable that includes the method
getText():String. Moreover, whenever the return
value of getText() changes, the update() method of
the component HelpList has to be called via the interface
ListBox. This is the basic behavior requirement
the component HelpList needs to be assured by its environment
COMPONENT HelpList
REQUIRES INTERFACE Observable
WITH METHODS
WITH INVARIANTS
ASSURES INTERFACE ListBox
WITH LOCALS
WITH METHODS
Now, we can specify two r/a-contracts: One to satisfy
the requirements of the component HelpList and
the other for the requirements of component HelpText.
Such a contract contains two sections: The first section, the INSTANTIATION, declares the participants of the contract and their initial configuration. For in-
stance, in the contract HelpListContract two participants hl:HelpList and ht:HelpText are instantiated and the initial connection between both is established.
Note that the variables declared in the instantiation section are global identifiers, as one must be able to refer to them in the current contract as well as in other contracts.
The second section, the PREDICATE MAPPING, maps the
required interfaces to assured interfaces of the partic-
ipants. Additionally, it contains the most important
part of the contract: the "proof". There, the designer has to validate the correctness of the contract, i.e., he or she has to prove whether the syntax and behavior of the requirements/assurances pair fit together. The contract HelpListContract includes a proof. It simply starts with the conjunction of all assured predicates of the
interface ht.TextBox and has to end with all required
predicates of the interface hl.Observable:
RA-CONTRACT HelpListContract
INSTANTIATION
HelpList
MAPPING: REQUIRED hl.Observable
ASSURED BY ht.TextBox
RA-CONTRACT HelpTextContract
INSTANTIATION
MAPPING: REQUIRED ht.Observer
ASSURED BY hl.ListBox
proof is omitted
Once the windows help screen is completely specified
and implemented, it usually takes a couple of months
until one of the components appears in a new, improved
version. In our example, the new version of the component
HelpText has been evolved. The new version
assures an additional method addChar(c:Char):void.
For performance reasons, this method does not guarantee
that the observers are notied if the method is
invoked:
COMPONENT HelpText
REQUIRES INTERFACE Observer
WITH METHODS
ASSURES INTERFACE TextBox
WITH LOCALS
WITH METHODS
The assurances part in the specification of the component HelpText has changed. Therefore, the designer or a tool should search for all r/a-contracts where HelpText is used to fulfill the requirements of other components. Once all of these contracts are identified, the corresponding proofs have to be re-done. In our
example the contract HelpListContract is concerned.
The designer has to re-check whether the goal of the proof can still be reached. But the premises have changed. Obviously the goal cannot be derived, as a call of addChar(c) changes the return value of the method getText() but does not result in an update() call. The requirements of HelpList (whenever the text in HelpText changes, update() is called) are no longer satisfied by the new component HelpText. The current design of the system may no longer meet the expectations or the re-
quirements. Now, the designer can decide to keep the
former component in use or to realize a workaround in
the HelpList component. However, this is outside the
scope of the discussed concepts.
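The failure can also be seen directly in a minimal Java sketch of the evolved HelpText (hypothetical code, not the running prototype): addText notifies the registered observers, while the newly added addChar silently bypasses the notification, which is exactly what violates HelpList's requirement.

import java.util.ArrayList;
import java.util.List;

interface Observer { void update(); }

class HelpText {
    private final StringBuilder text = new StringBuilder();
    private final List<Observer> observers = new ArrayList<>();

    void register(Observer o) { observers.add(o); }
    String getText() { return text.toString(); }

    // Original behavior: every text change triggers update() on all observers.
    void addText(String t) {
        text.append(t);
        for (Observer o : observers) o.update();
    }

    // Evolved behavior: changes the text but, for performance reasons,
    // does not notify the observers -- this breaks HelpListContract.
    void addChar(char c) {
        text.append(c);
    }
}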
9 CONCLUSION AND FUTURE WORK
The ability for software to evolve in a controlled manner
is one of the most critical areas of software engineer-
ing. Therefore, an overall evolution-based development methodology for componentware is needed. In this paper we have outlined a well-founded common system model for componentware that copes with the most difficult behavioral aspects in object-orientation and componentware: dynamically changing structures, a shared global state, and finally mandatory call-backs. The
model presented includes the concepts of a type and abstract
as well as concrete descriptions for types. During
system development a set of those descriptions are cre-
ated. Software evolution means that these descriptions
are changed over time. Thus, we need techniques to determine
the impacts of the respective evolution steps.
With the presented requirements/assurances-contracts
developers can explicitly model the dependencies between
the dierent components. Whenever a component
or the entire system changes the contracts show
the consequences for other components. Contracts help
the developer to manage the evolution of the complete
system.
A number of additional issues remain items of future
work: We are currently working on a first prototype
runtime environment for the presented system model.
We still have to elaborate on the underlying type sys-
tem. Additionally, we have to provide more sophisticated
graphical description techniques based on UML and
OCL (structural documents, interface documents, protocol
documents, and implementation documents). A
complete development example will show these description
techniques in practice. For each of those description
techniques a clear semantical mapping into the system
model has to be defined. Additionally, syntax compatibility checkers, theorem provers, and model checkers could be included to run the correctness proofs for evolution steps semi-automatically or even fully automatically. Fi-
nally, we have to develop tool support and provide a set
of evolution-resistant architectures based on technical
componentware infrastructures like CORBA, DCOM, or
Java Enterprise Beans.
ACKNOWLEDGEMENTS
I am grateful to Klaus Bergner, Manfred Broy, Ingolf
Krüger, Jan Philipps, Bernhard Rumpe, Bernhard Schätz, Marc Sihling, Oskar Slotosch, Katharina Spies,
and Alexander Vilbig for interesting discussions and
comments on earlier versions of this paper.
--R
A formal model for componentware.
The design of distributed systems - an introduction to FOCUS
Interaction Interfaces - Towards a scientific foundation of a methodological usage of Message Sequence Charts
Java in a Nutshell.
Design Patterns: Elements of Reusable Object-Oriented Soft- ware
On Visual Formalisms.
Specifying Behavioral Compositions in Object-Oriented Sys- tems
The Uni
The temporal logic of actions.
Preliminary design of JML: A behavioral interface specification
Managing software evolution through reuse contracts.
500 Europa: Der Club der Innovatoren.
Client/Server Programming with Java and CORBA.
Modeling Software Evolution by Evolving Inter-operation Graphs
Executive Summary: Software Evolution in Componentware - A Practical Approach
Formale Methodik des Entwurfs verteilter ob- jektorientierter Systeme
Reuse Contracts: Managing the Evolution of Reusable Assets.
--TR
On visual formalisms
Contracts: specifying behavioral compositions in object-oriented systems
Object-oriented software engineering
Object-oriented modeling and design
Real-time object-oriented modeling
The temporal logic of actions
Design patterns
Reuse contracts
Pattern-oriented software architecture
Client/server programming with Java and CORBA
Object-oriented software construction (2nd ed.)
The unified software development process
Software Change and Evolution
Using Extended Event Traces to Describe Communication in Software Architectures
Interaction Interfaces - Towards a Scientific Foundation of a Methodological usage of Message Sequence Charts
A Plea for Grey-Box Components | formal methods;contracts;componentware;software evolution;software architecture;description techniques;object-orientation |
337205 | Deriving test plans from architectural descriptions. | INTRODUCTION
In recent years the focus of software engineering has been continuously
moving towards systems of larger dimensions
and complexity. Software production is becoming more
and more involved with distributed applications running
on heterogeneous networks, while emerging technologies
such as commercial off-the-shelf (COTS) products are
becoming a market reality [22]. As a result, applications
are increasingly being designed as sets of autonomous,
decoupled components, promoting faster and cheaper
system development based on COTS integration, and
facilitating architectural changes required to cope with
the dynamics of the underlying environment.
The development of these systems poses new challenges
and exacerbates old ones. A critical problem is understanding
if system components integrate correctly. In this respect the most relevant issue concerns dynamic
integration. Indeed, component integration can result in
architectural mismatches when trying to assemble components
with incompatible interaction behavior [12, 7],
leading to system deadlocks, livelocks or in general failure
to satisfy desired functional and non-functional system
properties.
In this context Software Architecture (SA) can play a
significant role. SAs have in recent years been con-
sidered, both by academia and software industries, as a
means to improve the dependability of large complex
software products, while reducing development times
and costs [21, 3]. SA represents the most promising approach
to tackle the problem of scaling up in software
engineering, because, through suitable abstractions, it
provides the way to make large applications manage-
able. The originality of the SA approach is to focus on
the overall organization of a large software system (the software architecture) using abstractions of individual components. This
approach makes it possible to design and apply tractable
methods for the development, analysis, validation, and
maintenance of large software systems.
A crucial part of the development process is testing.
While new models and methods have been proposed
with respect to requirements analysis and design, notably
UML [18], how to approach the testing of these
kinds of systems remains a neglected aspect. The paradox
is that these new approaches specically address the
design of large scale software systems. However, for such
systems, the testing problems not only do not diminish,
but are intensied. This is especially true for integration
testing. In fact, due to the new paradigms centered
on component-based assembly of systems, we can easily
suppose a software process in which unit testing plays
a minor role, and testers have to focus more and more
on how components work when plugged together.
A lot of work has been devoted to the analysis of formal
descriptions of SAs. Our concern is not in the analysis
of the consistency and correctness of the SA, but rather
on exploiting the information described at the SA level
to drive the testing of the implementation. In other
words we assume the SA description is correct and we
are investigating approaches to specification-based integration
testing, whereby the reference model used to
generate the test cases is the SA description.
In general, deriving a functional test plan means to identify
those classes of behavior that are relevant for
testing purposes. A functional equivalence class collects
all those system executions that, although different in detail, carry the same informative content for functional verification. The tester's expectation/hope is that any test execution among those belonging to a class would be equally likely to expose possible nonconformities to the specification.
We identify interesting test classes for SA-based testing
as sequences of interactions between SA components.
More precisely, starting from an architectural description, carrying both static and dynamic information, we first derive a Labelled Transition System (LTS) that
graphically describes the SA dynamics. The problem is
that the LTS provides a global, monolithic description
of the set of all possible behaviors of the system. It
is a tremendous amount of information flattened into a graph. It is quite hard for the software architect to single
out from this global model relevant observations of
system behavior that would be useful during validation.
We provide the software architect with a key to decipher
the LTS dynamic model: the key is to use abstract views
of the LTS, called ALTSs, on which he/she can easily
visualize relevant behavioral patterns and identify those
ones that are more meaningful for validation purposes.
Test classes in our approach correspond to ALTS paths.
However, once test class selection has been made, it is
necessary to return to the LTS and retrieve the information
that was hidden in the abstraction step, in order
to identify LTS paths that are appropriate renements
of the selected ALTS paths. This is also supported by
our approach.
In the following we describe in detail the various steps
of the proposed approach in the scope of a case study.
In Section 2 we provide the background information:
we recall the Cham formalism, that is used here for SA
specication, and outline the case study used as a working
example. In Section 3 we provide a general overview
of the approach. In Section 4, we give examples of using
the approach, and address more specic issues. In
Section 5, we clarify better the relation between ALTS
paths and test specifications. Finally, in the Conclu-
sions, we summarize the paper contribution and address
related work.
ADL DYNAMICS
Software architectures represent the overall system
structure by modelling individual components and their
interactions. An SA description provides a complete
system model by focussing on architecturally relevant
abstractions. A key feature of SA descriptions is
their ability to specify the dynamics. Finite State Machines, Petri Nets, or Labelled Transition Systems
(LTSs) can be used to model the set of all possible
SA behaviors as a whole.
In the following subsection we briefly recall the Cham
description of SA. From this description we derive an
LTS which represents the (global) system behavior of a
concurrent, multi-user software system.
The Cham Model
The Cham formalism was developed by Berry and
Boudol in the field of theoretical computer science for the principal purpose of defining a generic computational framework [4].
Molecules constitute the basic elements of a Cham, while solutions are multisets of molecules interpreted as defining the states of a Cham. A Cham specification contains transformation rules dictating the way solutions can evolve (i.e., states can change) in the Cham. Following the chemical
can change) in the Cham. Following the chemical
metaphor, the term reaction rule is used interchangeably
with the term transformation rule. In the follow-
ing, with abuse of notation, we will identify with R both
the set of rules and the set of corresponding labels.
The way Cham descriptions can model SAs has already
been introduced elsewhere [14]. Here we only summarize
the relevant notions. We structure Cham specifications of a system into four parts:
1. a description of the syntax by which components of
the system (i.e., the molecules) can be represented;
2. a solution representing the initial state of the system
3. a set of reaction rules describing how the components
interact to achieve the dynamic behavior of
the system.
4. a set of solutions representing the intended final states of the system.
The syntactic description of the components is given
by a syntax by which molecules can be built. Following
Perry and Wolf [17], we distinguish three classes of
components: data elements, processing elements, and
connecting elements. The data elements contain the information
that is used and transformed by the processing
elements. The connecting elements are the "glue" that holds the different pieces of the architecture together. For example, the elements involved in effecting communication among components are considered connecting elements. This classification is reflected in the
syntax, as appropriate.
The initial solution corresponds to the initial, static configuration
of the system. We require the initial solution
to contain molecules modeling the initial state of each
component. Transformation rules applied to the initial
solution define how the system dynamically evolves from its initial configuration. One can take advantage of this operational flavor to derive an LTS out of a Cham description. In this paper we will not describe how an LTS can be derived (see [11]). We only recall the LTS definition we will rely on.
Definition 2.1 A Labelled Transition System is a quintuple (S, L, S_0, S_F, →), where S is the set of states, L is the set of labels, S_0 ∈ S is the initial state, S_F ⊆ S is the set of final states, and → ⊆ S × L × S is the transition relation.
Each state in the LTS corresponds to a solution, therefore
it is made of a set of molecules describing the states
of components. Labels on LTS arcs denote the transformation
rule that lets the system move from the tail
node state to the head node state.
We also need the definition of a complete path:
Definition 2.2 Let p = S_0 --l_1--> S_1 --l_2--> S_2 --l_3--> ... --l_n--> S_n be a path in an LTS. p is complete if S_0 is the initial solution and S_n is a final one.
Although our approach builds on the Cham description
of a SA it is worthwhile stressing that it is not committed
to it. Our choice of the Cham formalism is dictated
by our background and by its use in previous case stud-
ies. We are perfectly aware that other choices could be
made and want to make clear that the use of a specific
formalism is not central to our approach.
In more general terms what we have done so far can
be summarized as follows: we have assumed the existence
of an SA description in some ADL and that from
such description an LTS can be derived, whose node and
arc labels represent respectively states and transitions
relevant in the context of the SA dynamics. We also
Figure 1: Processes and Channels
assume that states contain information about the single
state of components and that labels on arcs denote
relevant system state transitions.
The TRMCS Case-Study
The Teleservice and Remote Medical Care System
(TRMCS) [2] provides monitoring and assistance to
users with specic needs, like disabled or elderly people.
The TRMCS is being developed at Parco Scientico e
Tecnologico d'Abruzzo, and currently a Java prototype
is running and undergoes SA based integration testing.
A typical TRMCS service is to send relevant information
to a local phone-center so that the family and medical
or technical assistance can be timely notied of critical
circumstances. We dene four dierent processes (User,
Router, Server and Timer), where:
User: sends either an \alarm" or a \check" message
to the Router process. After sending an alarm, it
waits for an acknowledgement from the Router.
Router: waits signals (check or alarm) from User.
It forwards alarm messages to the Server and checks
the state of the User through the control messages.
Server: dispatches help requests.
Timer: sends a clock signal for each time unit.
Figure
1 shows the static TRMCS Software Architectural
description, in terms of Components and Connectors. Boxes represent Components, i.e., processing elements, arrows identify Connectors, i.e., connecting elements (in this case channels), and arrow labels refer to the data elements exchanged through the channels.
Figure 2 shows only the reaction rules of the TRMCS Cham Specification.
A portion of the LTS of the TRMCS SA is given in Fig.
3. The whole LTS is around 500 states. Note that arc
labels 0, 1, ..., 21 correspond, respectively, to T0, T1, ..., T21, i.e., the labels of the TRMCS reaction rules, "0" denotes the initial state, and box states denote pointers
to states elsewhere shown in the picture (to make the
Reaction Rules
T1: User1= User1.o(check1), User1.o(alarmUR1).i(ackRU1)
T2: User2= User2.o(check2), User2.o(alarmUR2).i(ackRU2)
T3: User1.o(check1)= o(check1).User1
T4: User2.o(check2)= o(check2).User2
T11: o(alarmUR1).i(ackRU1).User1, i(alarmUR).o(alarmRS).i(ackSR).o(ackRU).Router
T12: o(alarmUR2).i(ackRU2).User2, i(alarmUR).o(alarmRS).i(ackSR).o(ackRU).Router
T13: o(alarmRS1).i(ackSR1).o(ackRU1).Router , i(alarmRS).o(ackSR).Server
T14: o(alarmRS2).i(ackSR2).o(ackRU2).Router , i(alarmRS).o(ackSR).Server
T15: o(ackSR1).Server,
T17: o(ackRU1).Router ,
T18: o(ackRU2).Router ,
T19: m1.Router, Timer, Sent = o(nofunc).Router, m1.Router, NoSent
Figure 2: TRMCS Cham Reaction Rules
graph more readable). Double arrows denote the points
in which the gure cuts LTS paths.
In this section we introduce our approach to SA-based
testing. Our goal is to use the SA specification as a reference
model to test the implemented system. Needless
to say, there exists no such thing as an ideal test plan
to accomplish this goal. It is clear, on the contrary,
that from the high-level, architectural description of a
system, several dierent SA-based test plans could be
derived, each one addressing the validation of a specic
functional aspect of the system, and dierent interaction
schemes between components.
Therefore, what we assume as the starting point for our
approach is that the software architect, by looking at
the SA from dierent viewpoints, chooses a set of important
patterns of behavior to be submitted to testing.
This choice will be obviously driven by several factors,
including specicity of the application eld, criticality,
schedule constraints and cost, and is likely the most crucial
step to a good test plan (we give examples of some
possible choices in Section 4).
With some abuse of terminology, we will refer to each
of the selected patterns of behavior as an SA testing criterion. With this term we want to stress that our approach will then derive a different, specific set of tests so as to fulfill the functional requirements that each "criterion" (choice) implies. In general, more SA testing
criteria can be adopted to test an SA.
Two remarks are worth noting here. One is that as
the derived tests are specically aimed at validating the
high-level interactions between SA components, the test
plans we develop apply to the integration test stage.
The second remark is that since we are concerned with
testing, i.e., with verifying the software in execution, we
will greatly base our approach on the SA dynamics. In
particular, starting from a selected SA testing criterion,
we will primarily work on the SA LTS and on other
graphs derived from the latter by means of abstraction
(as described in the following).
Introducing obs-functions over SA dynamics
An SA testing criterion is initially derived by the software
architect in informal terms. We want to translate
it into a form that is interpretable within the context of the SA specification, in order to allow for automatic processing.
Intuitively, an SA testing criterion abstracts away uninteresting interactions. Referring to the Cham formalism, an SA testing criterion naturally partitions the Cham reaction rules into two groups: relevant interactions (i.e., those we want to observe by testing) and non-relevant ones (i.e., those we are not interested in). This suggests considering an interpretation domain D, to which the significant transformation rules (i.e., the arc labels of the LTS) are mapped, and a distinct element, to which any other rule is mapped.
Figure 3: A portion of the TRMCS LTS
We therefore associate to an SA testing criterion an obs-
function. This is a function that maps the relevant reaction
rules of the Cham SA description to a particular
domain of interest D. More precisely, we have:
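A natural signature for such a function, writing the distinct non-observable element as τ (an assumed notation), is:
obs : R → D ∪ {τ}, with obs(T_i) = τ for every rule T_i that is not relevant for the chosen criterion.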
The idea underlying the set D is that it expresses a
semantic view of the effect of the transition rules on the
system global state.
From LTS to ALTSs
We use the obs-function just defined as a means to derive from the LTS an automaton still expressing all high-level behaviors we want to test according to the selected SA testing criterion, but hiding any other irrelevant behaviors. The automaton is called an ALTS (for Abstract
LTS).
This is the LTS that is obtained by relabelling, according
to the function obs, each transition in R(S_0), and by minimizing the resulting automaton with respect to a selected equivalence (trace- or bisimulation-based equivalence), preserving desired system properties (as discussed
in [5]).
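As an illustration of the relabelling-and-hiding step, the following simplified Java sketch (hypothetical names, not the tool used by the authors) relabels arcs through obs and collapses unobservable moves by a tau-closure, leaving proper trace- or bisimulation-based minimization to standard algorithms:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

final class AltsBuilder {
    record Arc(int from, String label, int to) {}

    static final String TAU = "tau"; // stands for the non-observable element

    // Relabels every LTS arc through obs and, for each state reachable from
    // 'initial', emits the observable steps obtained by skipping tau moves.
    static List<Arc> abstractLts(List<Arc> lts, int initial, Function<String, String> obs) {
        Set<Arc> result = new LinkedHashSet<>();
        Set<Integer> visited = new HashSet<>();
        Deque<Integer> work = new ArrayDeque<>();
        work.push(initial);
        while (!work.isEmpty()) {
            int s = work.pop();
            if (!visited.add(s)) continue;
            // Explore the tau-closure of s and collect its observable outgoing arcs.
            Set<Integer> closure = new HashSet<>();
            Deque<Integer> inner = new ArrayDeque<>();
            inner.push(s);
            while (!inner.isEmpty()) {
                int c = inner.pop();
                if (!closure.add(c)) continue;
                for (Arc a : lts) {
                    if (a.from() != c) continue;
                    String label = obs.apply(a.label());
                    if (TAU.equals(label)) {
                        inner.push(a.to());
                    } else {
                        result.add(new Arc(s, label, a.to()));
                        work.push(a.to());
                    }
                }
            }
        }
        return new ArrayList<>(result);
    }
}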
If we derive a complete path over an ALTS (see Definition 2.2), this quite naturally corresponds to the high-level specification of a test class of the SA (we give examples
in the next section). Therefore, the task of deriving
an adequate set of tests according to a selected
SA testing criterion is converted to the task of deriving
a set of complete paths appropriately covering the
ALTS associated with the criterion via an obs-function.
In an attempt to depict a general overview of the approach, we have so far deliberately left unresolved some
concrete issues. Most importantly, what does it mean
concretely to look at the SA from a selected observation
point, i.e., which are meaningful obs-functions? And,
also, once an ALTS has been derived, how are paths
on it selected? Which coverage criterion could be ap-
plied? We will devote the next section to answer these
questions, with the help of some examples regarding the
TRMCS case study.
Considering the informal description of the TRMCS in
Section 2, because of obvious safety-critical concerns,
we may want to test the way an Alarm message flows in the system, from the moment a User sends it
to the moment the User receives an acknowledgement.
Casting this in the terms used in the previous section,
Figure 4: Alarm flow: Obs-function (the rules by which a User issues an Alarm message or receives an Ack are mapped into D; any other T_i is mapped to the non-observable element)
Figure 5: Alarm flow ALTS
the software architect may decide that an important
SA testing criterion is "all those behaviors involving the flow of an Alarm message through the system". From this quite informal specification, a corresponding obs-function could be formally defined as in Figure 4. As shown, we have included in the interpretation domain D all and only the Cham transition rules that specifically involve the sending of an Alarm message by a User, or the User's reception of an acknowledgement of the Alarm message from the Router. Note that this information is encoded, at the LTS level, in the arc labels.
With reference to this obs-function, and applying reduction
and minimization algorithms (in this case we
have minimized with respect to trace equivalence, since
it preserves paths), we have derived the ALTS depicted
in
Figure
5 (the shaded circle represents the initial
state, that in this example also coincides with the only
nal one). This ALTS represents in a concise, graphical
way how the Alarm
ow is handled: after an Alarm is
issued (e.g., SendAlarm1), the system can nondeterministically
react with one of two possible actions (elaborat-
ing this Alarm and sending back an Acknowledgement
(ReceiveAck1) or receiving another Alarm message from
another User (SendAlarm2)).
Note the rather intuitive appeal of such a small graph with regard to the (much more complex) complete LTS (for the TRMCS it is one hundred times bigger). One could be tempted to consider some rather thorough coverage criterion of the ALTS, such as taking all complete paths derivable by fixing a maximum number of cycle iterations. However, as we will see better in the next section, each ALTS path will actually correspond to many concrete test cases. Therefore, less thorough coverage criteria seem more practical. In particular, we found that McCabe's technique of selecting all basic paths [16] offers here a good compromise between arc and path coverage. A list of ALTS test paths derived according to McCabe's technique is the following:
Figure 6: Check flow: Obs-function (the rules by which a User sends the first Check msg or a further Check msg are mapped to D, together with the rules recording that every User has sent a Check msg or that some User has not; for any other T i, obs(T i) is the dummy element)
Let us consider, for example, Paths No. 2, 3 and 4. These three paths are all devoted to verifying that the system correctly handles the consecutive reception of two Alarm messages issued by two distinct Users. By putting these three ALTS paths in the list, we explicitly want to distinguish in the test plan the cases that: i) each Ack message is sent right after the reception of the respective Alarm message (Path3); or, the acknowledgements are sent after both Alarms are received and ii) in the same order as the Alarm receptions (Path4); or finally iii) in the opposite order (Path2). So the three test classes are aimed at validating that no Alarm message in a series of two is lost, whichever order they are processed in.
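For reference, the size of a basis set of paths is bounded by McCabe's cyclomatic number; a small sketch on a stand-in graph (not the actual TRMCS ALTS) follows.

    # McCabe's cyclomatic complexity v = E - N + 2P gives the number of basis paths
    # to be selected.  The graph below is a hypothetical stand-in for an ALTS.
    def cyclomatic(edges, nodes, components=1):
        return len(edges) - len(nodes) + 2 * components

    nodes = {"A", "B", "C", "D"}
    edges = [("A", "B"), ("B", "D"), ("D", "B"), ("B", "A"),
             ("A", "C"), ("C", "A"), ("C", "D"), ("D", "C")]
    print(cyclomatic(edges, nodes))   # -> 6 independent paths for this toy graph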
Still considering the TRMCS, the software architect could decide that the Check flow is also worth testing. Thus, in an analogous way to what we have done for the Alarm flow, the Check flow obs-function is derived in Figure 6 and the corresponding ALTS is depicted in Figure 7. It represents a different "observation" of the TRMCS behavior.
We have reasoned so far in the hypothetical scenario of the TRMCS system being developed and of a software architect who is deriving interesting architectural behaviors to be tested.
Figure 7: Check flow: ALTS (arcs labelled CheckOK and CheckERR)
An alternative scenario could be that the TRMCS is already functioning, and that one of the components is being modified, but we do not want this change to affect the SA specification. We then want to test whether the modified component still interacts with the rest of the system in conformance to the original SA description. In this case, the observation point of the software architect will be "all the interactions that involve this component". If, specifically, the component being modified is the Server, then the corresponding obs-function is given in Figure 8. McCabe's coverage criterion yields the following set of test classes:
This example shows that even in deriving the basic ALTS paths we do not blindly apply a coverage criterion, but somehow exploit the semantics behind the elements in D. For instance, consider Path5 above. If we interpret it in light of McCabe's coverage criterion, it is aimed at covering transition FRa2 from State A to State C. The shorter path A C A would be equally good for this purpose. But for functional testing this shorter path is useless, because it would be perfectly equivalent to the already taken Path2 A B A: both paths in fact test the forwarding of one Alarm message to the Server. Therefore, to cover the transition from A to C we have instead selected the longer path A B A C A, which serves the purpose of testing the consecutive forwarding of two Alarms.
Figure 8: Component Based: Obs-function (the rules labelled From Router and To Router, i.e., the Server's interactions with the Router, are mapped to D; for any other T, obs(T) is the dummy element)
Figure 9: Component Based: ALTS (arcs include FRa2 and FRno)
5 TEST CLASS SPECIFICATION
ALTS paths specify functional test classes at a high abstraction
level. One ALTS path will generally correspond
to many concrete test cases (i.e., test executions
at the level of the implemented system).
It is well known that several problems make the testing of concurrent systems much more difficult and expensive than that of sequential systems (for reasons of space we do not discuss these problems in depth here; see, e.g., [9]). Said simply, a trade-off can in general be imagined between how tightly the test specification of an event sequence is given, and how much effort will be needed by the tester to force the execution of that sequence. The point is that the tester, on receiving the high level test specifications corresponding to ALTS paths, could choose among many concrete test executions that conform to them. For example, considering Path2 A B D B A in Fig. 5, a possible test execution can include the sending of an Alarm message from User1 immediately followed by the sending of an Alarm message from User2; another test execution could as well include, between the two Alarm messages, the Router's reception of other messages, e.g., a Check or a Clock, and still conform to the given high level test specification.
This flexibility in refining test specifications descends from the fact that to derive the ALTS from the complete LTS we have deliberately abstracted away transitions not involving the Alarm flow from and to a User. In other words, more concrete executions, however different, all belong to the same test class according to the selected SA-based test criterion (obs-function), as long as they conform to Path2.
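Membership of a concrete execution in such a test class can be phrased operationally: project the trace through obs, drop the hidden labels, and compare with the ALTS path. Below is a sketch with assumed rule names and an assumed label sequence for Path2.

    # Sketch: a concrete execution belongs to the test class of an ALTS path when its
    # labels, projected through obs and stripped of hidden ones, equal the path.
    DUMMY = "hidden"
    obs = lambda rule: {"T11": "SendAlarm1", "T12": "SendAlarm2",
                        "T21": "ReceiveAck1", "T22": "ReceiveAck2"}.get(rule, DUMMY)

    def conforms(concrete_labels, alts_path):
        visible = [obs(l) for l in concrete_labels if obs(l) != DUMMY]
        return visible == list(alts_path)

    # Assumed label sequence for Path2 (acks sent after both alarms, opposite order).
    path2 = ["SendAlarm1", "SendAlarm2", "ReceiveAck2", "ReceiveAck1"]
    print(conforms(["T11", "T40", "T12", "T22", "T21"], path2))  # True: T40 is hidden
    print(conforms(["T11", "T21"], path2))                        # False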
But after an ALTS-based list of paths has been chosen, we can go back to the complete LTS and observe what the selected abstraction is hiding, i.e., we can precisely see on the SA LTS which equivalence assumptions 1 lie behind the selection of ALTS paths (test classes).
This is a quite attractive feature of our approach for SA-based test class selection. When functional test classes are derived ad hoc (manually), as is often the case for the high level test stages, the equivalence assumptions those test classes rely upon remain implicit, and are hardly recoverable from the system specification. In our approach, first an explicit abstraction step is required (ALTS derivation). Second, going back from the ALTS to the complete LTS, we can easily identify which and how many LTS paths fulfill a given ALTS path.
We can better explain this by means of an example. Considering the ALTS for the Alarm flow (Fig. 5), State B is equivalent (under the test assumptions made) to roughly forty states in the complete LTS (of course, we can automatically identify all of them). Moreover, there are several valid LTS subpaths that we could traverse to reach each of these forty states. The valid subpaths for this example are all those going from the initial state S 0 of the LTS to any of the forty states equivalent to state B of the ALTS, without including any of the transformation rules in the domain D defined for the Alarm flow obs-function, except for the last arc, which must correspond to the transformation rule T 11 . All of these (many) subpaths would constitute a valid refinement of the abstract SendAlarm1 transition in Path2.
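Operationally, enumerating these refinements is a bounded reachability search on the LTS that avoids all rules in D except as the final arc; a sketch with hypothetical states and rules follows.

    # Sketch: enumerate LTS subpaths from `start` that avoid every rule mapped into D,
    # except for the final arc, which must be the given rule (T11 for SendAlarm1 here).
    def refinements(lts, start, d_rules, last_rule, max_len=6):
        """lts: list of (source, label, target) triples."""
        found, stack = [], [(start, [])]
        while stack:
            state, path = stack.pop()
            if len(path) >= max_len:
                continue
            for (src, lbl, dst) in lts:
                if src != state:
                    continue
                if lbl == last_rule:
                    found.append(path + [(src, lbl, dst)])          # valid refinement
                elif lbl not in d_rules:
                    stack.append((dst, path + [(src, lbl, dst)]))   # keep exploring
        return found

    # Hypothetical fragment of the TRMCS LTS, for illustration only.
    lts = [("S0", "T05", "S1"), ("S1", "T07", "S0"),
           ("S0", "T11", "S15"), ("S1", "T11", "S159")]
    for subpath in refinements(lts, "S0", {"T11", "T12", "T21", "T22"}, "T11"):
        print(subpath)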
As such a refinement should be applied to each state and each arc of the ALTS paths, it is evident that the number of potential LTS paths for one ALTS path soon becomes huge. We cannot realistically plan test cases for all of them; so the pragmatic question is: how do we select meaningful LTS paths (among all those refining the same ALTS path)? We do not believe that a completely automatic tool, i.e., a smart graph processing algorithm, could make a good choice. There are not only semantic aspects of the functional behavior that a tool could not capture, but also several non-functional
1 The term "equivalence" here refers to its usual meaning in the testing literature, i.e., it denotes test executions that are interchangeable with respect to a given functional or structural test criterion, and not to the more specific trace/bisimulation equivalence used so far for graph minimization.
Figure 10: An LTS test path
factors to take into account. What we envisage, therefore, is that the software architect, with the indispensable support of appropriate graphical tools and processing aids, can exploit on one side his/her semantic knowledge of the SA dynamics to discern between LTS paths that are equivalent with respect to an ALTS abstraction. On the other side, he/she will also take into account other relevant factors, not captured in the SA description, such as safety-critical requirements, or time and cost constraints. Thus we finally expect that the software architect produces from the list of ALTS paths a refined list of LTS paths, and gives this list to the tester as a test specification for validating system conformance to the SA.
In Fig. 10, for instance, we show an LTS path that is a valid refinement of Path2 for the Alarm flow ALTS. This example is the shortest path we could take to instantiate the ALTS path, in that it only includes the TRMCS transformation rules indispensable to fulfill the path. Note that S 15 (filled in light gray) in particular is the state equivalent to State B of the Alarm flow ALTS (note in fact that the entering arc is labelled T 11 ). Another of the forty LTS states equivalent to B is S 159 (see Fig. 3). There is a semantic difference between S 15 and S 159 that could be relevant for integration testing purposes. Before reaching S 159 , i.e., before User1 sends an Alarm, User2 can send a Check message, while this is never possible for any of the LTS subpaths reaching S 15 . We could see this by analyzing the state information that is associated with LTS nodes. In the refinement of Path2, the software architect could then decide to pick one LTS path that includes S 159 in order to test that a Check from another user does not interfere with an Alarm from a certain user.
6 CONCLUSIONS
The contribution of this paper is an approach to using the architectural description of a system to define test plans for the integration testing phase of the system implementation. The approach starts from a correct architectural description and relies on a labelled transition system representation of the architecture dynamics.
By means of an observation notion the software architect can extract from the dynamic model suitable abstractions that reflect his/her intuition of what is interesting or relevant in the system architectural description with respect to a validation step. To be effective, this step relies on the software architect's judgement and semantic knowledge of the SA functional and non-functional characteristics.
In summary, the proposed approach consists of the following steps:
1. the software architect selects some interesting SA testing criteria;
2. each SA testing criterion is translated into an obs-function; in some cases, a criterion could also identify several related obs-functions;
3. for each obs-function, an ALTS is (automatically) derived from the global LTS corresponding to the SA specification;
4. on each derived ALTS, a set of coverage paths is generated according to a selected coverage criterion. Each path over the ALTS corresponds to the high-level specification of a test class;
5. for each ALTS path, the software architect, by tool-supported inspection of the LTS, derives one or more appropriate LTS paths, which specify more refined transition sequences at the architectural level.
Our approach allows the software architect to move across abstractions in order to gain confidence in his/her choices and to select better and better refined test plans. It is worth noticing that in our approach a test plan is a path, that is, not only a sequence of events (the labels on a path) but also a set of states which describe the state of the system in terms of the single state components. This is a much more informative test plan than one that could be derived from, e.g., the requirements specifications. In fact, using the SA LTS, we also provide the tester with information about state components that can be used to constrain the system to exercise that given path.
Related Work
A lot of work has been devoted to testing concurrent and real-time systems, both specification driven and implementation based [9, 15, 8, 1]. We do not have room here to carry out a comprehensive survey; we will just outline the main differences with our approach. These works address different aspects, from modelling time to internal nondeterminism, but all focus on unit testing; that is, they either view the concurrent system as a whole or specifically look at the problem of testing a single component when inserted in a given environment. Our aim is different: we actually want to be able to derive test plans for integration testing. Thus, although the technical tools some of these approaches use are obviously the same as ours (e.g., LTS, abstractions, event sequences), their use in our context is different. This difference in goal emerges from the very beginning of our approach: we work on an architectural description that drives our selection of the abstraction, i.e., the testing criterion, and of the paths, i.e., the actual test classes.
Our approach of defining ALTS paths for specifying high level test classes has a lot in common with Carver and Tai's use of sequencing constraints for specification-based testing of concurrent programs [9]. Indeed, sequencing constraints specify restrictions to apply to the possible event sequences of a concurrent program when selecting tests, very similarly to what ALTS paths do for an SA. In fact, we are currently working towards incorporating within our framework Carver and Tai's technique of deterministic testing for forcing the execution of the event sequences (refined LTS paths) produced with our approach.
As far as architectural testing is concerned, the topic has raised interest and received a good deal of attention in recent years [19, 6, 20]. Our approach indeed stems from this ground. Though, to our knowledge, besides our project no other attempt at concretely attacking the problem has been pursued so far.
The Future
Our aim is to achieve a usable set of tools that provides the necessary support for our approach. So far we have experimented with our approach on the described case study, of which a running Java prototype exists. The way we did it was not completely automatically supported with respect to the ALTS definitions, the criterion identification and the path selection. As tool support we could rely on an LTS generator starting from the Cham description, which also allows for keeping track of the state and arc labels. Work is ongoing to generalize it to ALTS generation and to implement a graphical front-end for Cham descriptions. We definitely believe that the success of such an approach heavily depends on the availability of simple and appealing supporting tools. Our effort goes in two directions: on one side we are investing in automating our approach and we would also like to take advantage of other existing environments and possibly integrate with them, e.g. [13, 10]; on the other we are involved in more experimentation. The latter is not an easy job. Experimenting with our approach requires the existence of a correct architectural description and a running implementation. The case study presented here could be carried out since the project was entirely managed under our control, from the requirements specification to the coding. This is obviously not often the case. The results we have obtained so far are quite satisfactory and there are other real world case studies we are working on at the moment. For them we already have a running implementation and we have been asked to give a model of their architectural structure. We are confident these will provide other interesting insights to validate our approach.
--R
Design of a Toolset for Dynamic Analysis of Concurrent Java Programs.
Software Architecture in Practice.
The Chemical Abstract Machine.
An Approach to Integration Testing Based on Architectural Descriptions.
Cots integration: Plug and pray?
A Practical and Complete Algorithm for Testing Real-Time Systems
Use of Sequencing Constraints for Specification-Based Testing of Concurrent Programs.
Uncovering Architectural Mismatch in Component Behavior.
Architectural Mismatch: Why Reuse Is So Hard.
The Project.
Formal Specification and Analysis of Software Architectures Using the Chemical Abstract Machine Model.
Generating Test Cases for Real-Time Systems from Logic Specifications.
A Complexity Measure.
Foundations for the Study of Software Architecture.
"http://www.rational.com/uml/index.jtmpl"
Software testing at the architectural level.
Software Architecture: Perspectives on an Emerging Discipline.
Component Software.
--TR
The chemical abstract machine
Foundations for the study of software architecture
The concurrency workbench
Formal Specification and Analysis of Software Architectures Using the Chemical Abstract Machine Model
Generating test cases for real-time systems from logic specifications
Software architecture
Software testing at the architectural level
Component software
Software architecture in practice
Use of Sequencing Constraints for Specification-Based Testing of Concurrent Programs
Uncovering architectural mismatch in component behavior
COTS Integration
Architectural Mismatch
A Practical and Complete Algorithm for Testing Real-Time Systems
An approach to integration testing based on architectural descriptions
--CTR
Mauro Cioffi , Flavio Corradini, Specification and Analysis of Timed and Functional TRMCS Behaviours, Proceedings of the 10th International Workshop on Software Specification and Design, p.31, November 05-07, 2000
Myra B. Cohen , Matthew B. Dwyer , Jiangfan Shi, Coverage and adequacy in software product line testing, Proceedings of the 2006 workshop on Role of software architecture for testing and analysis, p.53-63, July 17-20, 2006, Portland, Maine
Holger Giese , Stefan Henkler, Architecture-driven platform independent deterministic replay for distributed hard real-time systems, Proceedings of the 2006 workshop on Role of software architecture for testing and analysis, p.28-38, July 17-20, 2006, Portland, Maine
Antonia Bertolino , Paola Inverardi , Henry Muccini, An explorative journey from architectural tests definition down to code tests execution, Proceedings of the 23rd International Conference on Software Engineering, p.211-220, May 12-19, 2001, Toronto, Ontario, Canada
Luciano Baresi , Reiko Heckel , Sebastian Thne , Dniel Varr, Modeling and validation of service-oriented architectures: application vs. style, ACM SIGSOFT Software Engineering Notes, v.28 n.5, September
Henry Muccini , Antonia Bertolino , Paola Inverardi, Using Software Architecture for Code Testing, IEEE Transactions on Software Engineering, v.30 n.3, p.160-171, March 2004
Hong Zhu , Lingzi Jin , Dan Diaper , Ganghong Bai, Software requirements validation via task analysis, Journal of Systems and Software, v.61 n.2, p.145-169, March 2002
Paola Inverardi, The SALADIN project: summary report, ACM SIGSOFT Software Engineering Notes, v.27 n.3, May 2002 | labelled transition systems;functional test plans;software achitectures;integration testing |
|
337220 | Three approximation techniques for ASTRAL symbolic model checking of infinite state real-time systems. | ASTRAL is a high-level formal specification language for real-time systems. It has structuring mechanisms that allow one to build modularized specifications of complex real-time systems with layering. Based upon the ASTRAL symbolic model checler reported in [13], three approximation techniques to speed-up the model checking process for use in debugging a specification are presented. The techniques are random walk, partial image and dynamic environment generation. Ten mutation tests on a railroad crossing benchmark are used to compare the performance of the techniques applied separately and in combination. The test results are presented and analyzed. | Introduction
ASTRAL is a high-level formal specification language
for real-time systems. It includes structuring mechanisms
that allow one to build modularized specifications
of complex systems with layering [9]. It has been successfully
used to specify a number of interesting real-time
systems [1, 2, 9, 10, 11, 12]. The ASTRAL Software
Development Environment (SDE) [20, 22] is an integrated
set of design and analysis tools , which includes,
among others, an explicit-state model checker, a symbolic
model checker and a mechanical theorem prover.
The explicit-state model checker [12, 22] generates customized
C++ code for each specification and enumerates
all the branches of execution of this implementation
up to a system time bound set by the user. The symbolic
model checker [13] tests specifications at the process
level and requires only limited input to set up constant
values. Its model checking procedure uses the Omega
library [23] to perform image computations on the execution
tree of an ASTRAL process that is trimmed by
the execution graph of the process. In [13], the symbolic
model checker was used to test a railroad crossing
benchmark. In those experiments the model checker
aborted before completion for two of the test cases due
to the extremely large size of the specification instances.
Because the model checkers in the ASTRAL SDE are
only intended to be used for debugging purposes, it is
reasonable to use lower approximation techniques that
allow the search procedure to complete, while still remaining
effective in finding violations. Although the
lower approximation techniques calculate only a subset
of the reachable states, these techniques will not cause
false negatives. This is because the properties specified
using ASTRAL are essentially safety properties.
In this paper, three techniques to meet this need are
introduced. They are random walk, partial image and
dynamic environment generation. The idea of random
walk techniques and partial image techniques is not new,
although we are not aware of their use in symbolic model
checking. The name "random walk" is borrowed from
the theory of stochastic processes. This technique is
used to allow the model checker to randomly skip a number
of branches when traversing the execution tree. The
partial image technique is inspired by sampling and random
testing methods [16, 15] in software testing. How-
ever, instead of picking a single sample from the domain,
the partial image technique selects a subset of the image
and uses this subset to calculate the postimage at
each node. The dynamic environment generation technique
[14] generates a different sequence of imported
variable values for different execution paths. It is similar
to the idea of Colby, Godefroid and Jagadeesan [8]
in that both address the problem of automatically closing
an open system, in which some of the components
are not present. Their approach targets concurrent programs
(written in C) and is based upon static analysis
of a program to translate it into a self-executable closed
form. By considering real-time specifications, the approach
presented in this paper dynamically selects a reasonable
environment according to the imported variable
clause and the previous environment. The reason for
considering the previous environment is that, as stated
later, ASTRAL is a history-dependent specification lan-
guage. As a case study, ten mutation tests [21] of a
railroad crossing benchmark are used to show their effectiveness
in finding bugs, and the performance of the
model checker is compared when the three approximation
techniques are used separately and in combination.
In [4, 5] Bultan used the Omega library as a tool to symbolically
represent a set of states that is characterized
by a Presburger formula. He also investigated partitions
and approximations in order to calculate fixed points.
As in the work reported in this paper, Bultan worked
with infinite state systems. However, the systems Bultan
considered are "simple" in the following sense: (1)
quantifications are only limited to a very small number,
(2) the transition system is a straightforward history-
independent transition system; i.e., the current state
only depends upon the last state, and other history references
are not allowed, (3) the transition system itself
is not a real-time system in the sense that no duration is
attached to a transition and the start and end times are
not allowed to be referenced. Unfortunately, a typical
ASTRAL specification, such as the benchmark considered
in this paper, is not "simple". For these complex
systems, a fixed point may not even exist. However, because
the ASTRAL symbolic model checker is primarily
intended to be used as a debugger instead of a verifier,
calculating the fixed point of a transition system is not
an important issue. Therefore, Bultan's approaches can
be considered to be orthogonal to the approaches presented
in this paper.
The model checker considered in this paper is modular-
ized; one need only check one process instance for each
process type declared, without looking at the transition
behaviors of other process instances. The STeP system
also uses a modularized approach [7, 6]. However, STeP
primarily uses a theorem prover to validate a property
while the approach presented here uses a fully automatic
model checker.
The remainder of this paper is organized as follows. In
section 2, a brief overview of the ASTRAL specification
language is presented, along with an introduction
to the ASTRAL modularized proof theory. In section
3, the ASTRAL symbolic model checker and three approximation
techniques are presented. Section 4 gives
the results of using the techniques separately and in
combination on ten mutation tests, and it analyzes the
results. Finally, in section 5, conclusions are drawn from
this work, and future areas of research are proposed.
2 Overview of ASTRAL
A railroad crossing specification is used as a benchmark
example throughout the remainder of this paper. The
system description, which is taken from [19], consists of
a set of railroad tracks that intersect a street where cars
may cross the tracks. A gate is located at the crossing
to prevent cars from crossing the tracks when a train is
near. A sensor on each track detects the arrival of trains
on that track. The critical requirements of the system
are that whenever a train is in the crossing the gate
must be down and when no train has been in between
the sensors and the crossing for a reasonable amount of
time the gate must be up. The complete ASTRAL specification
of the railroad crossing system can be found at
http://www.cs.ucsb.edu/dang.
An ASTRAL system specification includes a global
specification and process specifications. The global
specification contains declarations of process instances,
global constants, nonprimitive types that may be shared
by process types, and system level critical requirements.
There is a process specification for each process type
declared in the global specification. Each process specification
consists of a sequence of levels, with the highest
level being an abstract view of the process being specified
Processes, Constants, Variables, and Types
The global specification begins with a process type declaration
PROCESSES
the gate: Gate,
the sensors: array [1..n tracks] of Sensor.
This declaration indicates that there is one process instance
of type Gate and n tracks process instances of
type Sensor in the system, where n tracks is a global
constant of type pos integer. In ASTRAL, primitive
types include Integer, Real, Boolean, ID and
Time. Additional types can be declared by using the
TYPEDEF construct. For instance, pos integer is defined
as follows:
pos integer: TYPEDEF i: integer (i > 0).
Each process instance has a unique identifier with type
ID. The ASTRAL specification function IDTYPE(i) represents
the type of the process with the identifier i. For
instance, the global declaration
sensor id: TYPEDEF i: id (IDTYPE(i)=Sensor).
represents all identifiers of process instances of type
Sensor. In the railroad crossing specification there are
two process specifications Gate and Sensor, which correspond
to the two process types declared in the global
specification. A process specification includes an interface
section, which specifies the imported variables,
types, transitions and constants (from either the global
specification or exported by other processes) used by the
process, and the variables and transitions exported by
the process. ASTRAL does not have global variables.
Therefore, variables, as well as local constants, must be
declared in each process specification. ASTRAL supports
a modularized design principle: every variable is
associated with a unique process instance, and changes
to the variable can only be caused by the transitions
specified in that process instance. This is discussed further
in the next subsection.
Transitions
The ASTRAL computation model is defined by the execution
of state transitions, which are specified inside
process specifications. Each transition in a process instance
can only change the variables specified in that
instance. The body of an ASTRAL transition includes
pairs of entry and exit assertions with a nonzero duration
associated with each pair. The entry assertion must
be satisfied at the time the transition starts, whereas the
exit assertion will hold after the time indicated by the
duration from when the transition fires. For example,
in process Gate, the transition,
TRANSITION up
position = raising
& now - End(raise) >= raise_time
specifies the gate being fully raised, after it has been
rising for a reasonable amount of time (raise time).
Start(T) and End(T) specify the last start and end
time of a transition T. Start(T,t) and End(T,t) are
predicates used to indicate that the last start and end of
transition T occurred at time t. A transition instance is
fired if its entry assertion is satisfied and no other transition
in the same process instance is executing. The
execution of this transition instance is completed after
the duration indicated in the transition specification, for
instance up dur above. An exported transition must be
called from the external environment in order to fire.
Call(T) is used to indicate the time when a call to
the exported transition T is made. ASTRAL broadcasts
variable values instantaneously at the time the execution
finishes. Other process instances may refer to these
variables as well as to the start and end times of transitions
under the assumption that these variables and
transitions are explicitly exported and the process instances
properly import them.
If it is the case that there is more than one transition instance
that is enabled inside the same process instance
and no other transition is executing, then one of the enabled
transitions is nondeterministically chosen to fire.
Inside a process instance, executions of transitions are
non-overlapping interleaved, while between process in-
stances, maximal parallelism is supported. Thus, the
execution of transition instances in different process instances
is truly concurrent.
Assumptions and Critical Requirements
Besides transitions, requirement descriptions are also included
as a part of an ASTRAL specification. They
comprise axioms, initial clauses, imported variable
clauses, environmental assumptions and critical require-
ments. Axioms are used to specify properties about
constants. An initial clause defines the system state at
startup time. An imported variable clause defines the
properties the imported variables should satisfy, for in-
stance, patterns of changes to the values of imported
variables and timing information about transitions exported
from other processes. An environment clause
formalizes the assumptions that must hold on the behavior
of the environment to guarantee some desired
system properties. Typically, it describes the pattern of
invocation of exported transitions. The critical requirements
include invariant clauses and schedule clauses.
An invariant expresses the properties that must hold
for every state of the system that is reachable from the
initial state, no matter what the behavior of the external
environment is. A schedule expresses additional
properties that must hold provided the external environment
and the other processes in the system behave
as assumed (i.e., as specified by the environmental assumptions
and the imported variable clauses). Both
invariants and schedules are safety properties.
ASTRAL is a rich language and has strong expressive
power. For a detailed introduction to ASTRAL and its
formal semantics the reader is referred to [9, 10, 22].
Modularized Proof Theory
In this paper, modularization means the principle that a
system specification can be broken into several loosely
independent functional modules. Although most high
level specification languages support modularization,
each module in the specification is only a syntactical
module. That is, these languages provide a way to
write a specification as several modules, however, there
is often no way to verify the correctness of each process
without looking at all the behaviors of all the other
processes. The ultimate goal of modularization is to
partition a large system, both conceptually and func-
tionally, into several small modules and to verify each
small module instead of verifying the large system as a
whole. This greatly eases both verification and design
work.
In ASTRAL, a process instance is considered as a
module. It provides an interface section including an
imported variable clause, which is an ASTRAL well-formed
formula, that can be regarded as an abstraction
of the behaviors of the other processes. This is a unique
feature in ASTRAL, which helps to develop a modular
verification theory for real-time systems. For example,
verifying the schedule and the invariant of each process
instance uses only the process's local assumptions
and behaviors. Thus, verifying the local invariant uses
only the behaviors of transitions of the process instance,
and verifying the local schedule uses the process's local
environment and imported variable clause, plus the behaviors
of the process's transitions. Finally, because the
imported variable clause must be a correct assumption,
it needs to be verified by combining all the invariants
from all the other process instances. At the global level,
the global invariant of an ASTRAL specification can be
verified by using only the invariants for all process in-
stances, without looking at the details of each process
instance's behavior. Similarly, the global schedule can
be verified by using only the global environment and the
schedules for all process instances. Due to page limita-
tions, the ASTRAL proof theory can not be presented
in detail in this paper. The interested reader should see
[10].
3 Approximation techniques for the ASTRAL
symbolic model checker
In this section, an overview of the ASTRAL symbolic
model checker is given along with the motivation for
introducing the approximation techniques. Next, each
of the techniques is formulated in more detail.
An overview of the ASTRAL symbolic model
checker
A prototype implementation of the ASTRAL symbolic
model checker for a nontrivial subset of ASTRAL is
given in [13]. This prototype uses the Omega library
[23] as a tool to symbolically represent a set of states
that are characterized by a Presburger formula, which is
an arithmetic formula over integer variables that is built
from logical connectives and quantifiers. The Omega
library provides rich operations on Omega sets and re-
lations, such as join, intersection and projection. These
operations are used in the image computations in the
symbolic model checker.
The symbolic model checker presented in this paper is
implemented as a process level model checker, based
upon the modularized ASTRAL proof theory. For each
process type that is globally declared, one only needs to
check one process instance's critical requirements. Each
ASTRAL process declaration P can be translated into a labeled transition system T that consists of a set Q of (infinitely many) states and a finite set of transitions →_a with name a from Σ. Σ consists of all the transition names declared in P as well as two special transitions idle and initial. Each →_a is a relation on Q, i.e., →_a ⊆ Q × Q. Init ⊆ Q denotes the initial states. The assumption Assump and property Prop of T are also subsets of the states Q. T is further restricted to have a form in which the components (each →_a , Init, Assump and Prop) are Presburger formulas. As usual, for a set of states R ⊆ Q, one can denote the preimage Pre_a(R) of a transition →_a as the set of all states from which a state in R can be reached by this transition: Pre_a(R) = { q : ∃ q' ∈ R, q →_a q' }. The postimage Post_a(R) of a transition →_a is the set of all states that are reachable from a state in R by this transition: Post_a(R) = { q' : ∃ q ∈ R, q →_a q' }. The semantics of T is characterized by runs q_0 →_{a_1} q_1 →_{a_2} q_2 ... such that q_0 ∈ Init and, for all i, q_i →_{a_{i+1}} q_{i+1}. T is correct with respect to its specification if, for any run, the following condition is satisfied for all k: {q_0 , ..., q_k} ⊆ Assump implies q_k ∈ Prop.
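For illustration only, the two image operators over an explicitly enumerated relation look as follows; the actual checker manipulates Presburger formulas through the Omega library rather than finite sets.

    # Preimage/postimage of a transition relation, illustrated on finite sets.
    def preimage(rel, targets):
        return {q for (q, q2) in rel if q2 in targets}

    def postimage(rel, sources):
        return {q2 for (q, q2) in rel if q in sources}

    step_a = {(0, 1), (1, 2), (2, 0)}    # a toy relation standing in for ->_a
    print(preimage(step_a, {2}))          # {1}
    print(postimage(step_a, {0, 1}))      # {1, 2}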
Figure 1: The execution graph of Gate
Figure 2: The execution tree of Gate
The model checking procedure starts by constructing the execution graph G of P. The graph G is a pair (V_G , R_G ), where the node set V_G consists of initial, idle and the transitions T_i , with each T_i representing a transition declared in P. initial indicates the initial transition, which is defined as an identity transition on the initial states with zero duration. idle is a newly introduced transition, which has duration one. idle fires if no T_i is firable and no transition is currently executing. idle does not change the values of any local variables. R_G ⊆ V_G × V_G excludes all the pairs of transitions such that the second transition is not immediately firable after the first one finishes. G is automatically constructed using the Omega library by analyzing the initial conditions and the entry and exit assertions of each ASTRAL transition in the process.
Figure 1 is an example of the execution graph for
the Gate process in the railroad crossing specification.
A dashed arrow in Figure 1 means that zero or more
idle transitions are executed to reach the next node.
The model checking procedure is carried out on the tree
of all possible execution paths trimmed by the execution
graph G. Figure 2 is part of the execution tree for the
Gate process. Starting from the initial node initial in
the execution tree of P , the model checker calculates the
image of the reachable states on every node along each
path up to a user-assigned search depth. Each image is
checked against the assumption Assump and the property
Prop in order to detect potential errors. A number
of techniques are also used to dynamically resolve the
values of variables according to the path that is being
searched. These techniques reduce the number of variables
used in the actual image calculation. Whenever an
error is found, the model checker generates a concrete
specification level trace leading to this error.
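A sketch of the execution-graph construction described above is given below; the firability test is a stand-in predicate, whereas the real checker decides it by an emptiness check on Omega images of the entry/exit assertions.

    # Sketch of execution-graph construction: nodes are "initial", "idle" and the
    # declared transitions; an arc <A, B> is kept unless B can never be immediately
    # firable after A finishes.  `may_follow` is an arbitrary stand-in predicate.
    def execution_graph(transitions, may_follow):
        nodes = ["initial", "idle"] + list(transitions)
        arcs = {(a, b) for a in nodes for b in nodes if may_follow(a, b)}
        return nodes, arcs

    # Hypothetical ordering for the Gate process, for illustration only.
    order = {("initial", "lower"), ("lower", "down"), ("down", "raise"),
             ("raise", "up"), ("up", "lower")}
    nodes, arcs = execution_graph(["lower", "down", "raise", "up"],
                                  lambda a, b: (a, b) in order or "idle" in (a, b))
    print(sorted(arcs))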
Three approximation techniques
In this subsection, three approximation techniques are
presented to speed up the ASTRAL symbolic model
checker. The techniques are random walk, partial image
and dynamic environment generation.
Motivation for introducing the approximation techniques
As mentioned earlier, the ASTRAL symbolic model
checker performs process level model checking by checking
only one process instance for each process type declared
in the global system. The correctness of doing
this is ensured by the modularized proof theory of AS-
TRAL. Without looking at the global behaviors of the
entire system, the performance of the model checker can
be greatly increased, since dealing with a single process
instance is much easier. However, this does not mean
that a single process instance is necessarily simple. In
[13], the model checker was used to test the railroad
crossing benchmark. In those experiments the model
checker failed to complete two of the test cases due to
the extremely large size of the instances. The high complexity
of a single process instance can come from two
sources: the local and global constants used in the instance
and the local and imported variables that constitute
the variable portion of the process instance. For
example, in the Gate process there are 10 global constants
and 6 local constants. These constants are used
to parameterize the specified system, e.g., to specify a
system containing a parameterized number of process
instances as well as a system containing parameterized
timing requirements. The local variables contribute to
the local state and are changed by executing transitions.
Though each process in ASTRAL is modularized, each
process instance does not stand alone. An environment
assumption is typically used to characterize the pattern
of invocations (calls) of exported transitions from the
outside environment. A process instance may also interact
with other process instances through imported variables
that are exported from other process instances.
Since a process's local properties are proved using only
its local assumptions, the process instance must specify
strong enough assumptions (environment clause and imported
variable clause) to correctly characterize the environment
and the behaviors of the imported variables.
In order to guarantee the local properties, it is not unusual
for an assumption to include complex timing requirements
on the call patterns and the imported vari-
ables' change patterns. Thus, the second source of complexity
primarily comes from the history-dependency of
ASTRAL, which expresses that a system's current state
depends upon its past states.
When it is not practical for the symbolic model checker
to complete the search procedure for a complex process
instance, it is desirable to define approximation approaches
to speed up the procedure by sacrificing cov-
erage. Based upon the above analysis, two kinds of
approaches can be used. The first is to assign concrete
values to some of the constants before using the model
checker. In [13], it was shown that doing this will speed
up the model checker and that it is still effective in finding
bugs in some cases. There are, however, reasons for
not using this approach. First, picking the right set of
constant values to cause "interesting" things (especially
potential errors) to happen is not trivial. Some constant
value choices will miss scenarios in which the specification
would fail. Second, even with a number of the
constant values fixed, the model checker is still expensive
in some cases due to the complexity of the behavior
of the local and imported variables. Experience shows
that this approach, as well as using the explicit state
model checker [12], should be used at the earlier stages
in debugging a specification, when errors are relatively
easier to catch. The second approach speeds up the
model checker by enforcing it to check either less nodes
or "small" nodes. These approaches free the user from
setting up constant values. A random walk technique
is used to allow the model checker to randomly skip a
number of branches when traversing the execution tree.
Approximations can also be applied by limiting the image
size of all the reachable states on a node in the
execution tree. Currently, two techniques are provided
for image size reduction. One is partial image which
considers only a subset of the image and uses this sub-set
to calculate the postimage at each node. The other
is dynamic environment generation [14], which generates
different sequences of imported variable values for
different execution paths. Doing this reduces the image
size at a node by restricting the environment (primar-
ily the imported variable part) of a process instance.
These three techniques are discussed in more details in
the following subsections.
Random walk
A path in the execution tree of an ASTRAL process is
a sequence of transitions. Each node in the tree containing
the image of all reachable states from the initial
node along the path. Theoretically, the number of paths
is exponential to the user-assigned search depth. Even
though the symbolic model checker itself adopts a number
of trimming techniques [13], the time for a complete
search for a large specification is unaffordable. It is our
experience that, when a specification has a bug, this bug
can usually be demonstrated by many different paths.
The reasons are (1) The ordering of some transitions can
be switched without affecting the result (though practically
it is hard to detect this, since ASTRAL is history
dependent. 1 ), (2) Most specifications contain a number
of parameterized constants. When a specification has a
bug, usually there are numerous scenarios and choices
of parameterized constant values to demonstrate it, so
these scenarios can be shown by many different paths.
Random walk is an approximation technique of searching
only a portion of the reachable nodes on the execution
tree. Figure 3 shows the recursive procedure,
which is based upon depth-first search. The algorithm
is similar to the procedure proposed in [13] except that
this algorithm includes a random choice when the model
checker moves from one node to its children. In the al-
gorithm, depth indicates the maximal number of iterations
of transitions to check. P ost A , which was defined
earlier, is the postimage operator for the transition indicated
by node A. Model checking a node A starts
by calculating the preimage and postimage of it. If the
postimage is not empty, which means that the transition
is firable, then the preimage is checked with respect
to the property, followed by checking every child node
according to the execution graph 2 and the result of the
random boolean function toss(A; B): The function toss
is not symmetric. The probability of result tail is chosen as a function of depth, A.layer and numChildren(A), where numChildren(A) indicates the number of successors of node A in the execution graph G, i.e., numChildren(A) = |{B : ⟨A, B⟩ ∈ R_G}|, and A.layer indicates the layer where node A is located in the execution tree. The reason for this choice is to ensure the
1 This is significantly different from some standard techniques
used in finite state model checking, such as the partial order
method [18].
2 In the actual implementation, when A is idle, if the nearest non-idle ancestor node of A is A', then a non-idle child node B of A with ⟨A', B⟩ ∉ R_G is not checked. That is, only a non-idle child which is reachable from the closest non-idle ancestor of A in the execution graph is checked.
- A short violation has less chance of being missed. When A.layer is small, the probability of result tail is large. When A.layer is large, if numChildren(A) is greater than 1, then the probability is small. Hence a longer path has a higher probability of being skipped.
- When numChildren(A) is 1, 3 the probability of result tail is 1. That is, a node with only one successor cannot be skipped.
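One concrete choice of toss probability that satisfies the two properties above is sketched below; the exact expression used by the checker may differ, so this formula should be read as an assumption.

    import random

    # A toss function consistent with the properties above: the probability of "tail"
    # (i.e., of actually descending into the child) is 1 when the node has a single
    # successor, stays high for shallow nodes, and shrinks for deep branching nodes.
    # This exact formula is an assumption, not necessarily the implemented one.
    def toss(layer, depth, num_children, rng=random.random):
        p_tail = ((depth - layer) / depth) ** (num_children - 1)
        return "tail" if rng() < p_tail else "head"

    print(toss(layer=1, depth=10, num_children=3))   # usually "tail"
    print(toss(layer=9, depth=10, num_children=3))   # usually "head"
    print(toss(layer=9, depth=10, num_children=1))   # always "tail"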
The model checking procedure starts from the initial node initial, by invoking Check(initial, depth):
Boolean Check(Node A, int depth)
{
  if (A.layer = depth) then return true;
  else {
    if (A.postimage is not empty) {
      if (A.preimage ⊄ Prop) then return false;
      else foreach B with ⟨A, B⟩ ∈ R_G and toss(A, B) = tail:
        if (not Check(B, depth)) then return false;
    }
    return true;
  }
}
Figure 3: The model checking algorithm with random walk
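A runnable Python rendering of the procedure in Figure 3 over a toy execution tree is given below; nodes, images and the property check are boolean stand-ins for the Presburger images that the checker manipulates through the Omega library.

    from dataclasses import dataclass, field
    from typing import List
    import random

    @dataclass
    class Node:
        name: str
        layer: int
        firable: bool = True     # stands for "the postimage is non-empty"
        safe: bool = True        # stands for "the preimage is contained in Prop"
        children: List["Node"] = field(default_factory=list)

    def toss_tail(node: Node, depth: int) -> bool:
        """Assumed probability of visiting a child; 1 when there is a single successor."""
        p = ((depth - node.layer) / depth) ** (max(len(node.children), 1) - 1)
        return random.random() < p

    def check(node: Node, depth: int) -> bool:
        """Depth-first random-walk search; False means a violation was found."""
        if node.layer == depth:
            return True
        if node.firable:
            if not node.safe:
                return False
            for child in node.children:
                if toss_tail(node, depth) and not check(child, depth):
                    return False
        return True

    # Toy tree: initial -> lower -> down, where "down" violates the property.
    tree = Node("initial", 0,
                children=[Node("lower", 1, children=[Node("down", 2, safe=False)])])
    print(check(tree, depth=10))   # False: a single-successor chain is never skipped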
Partial image
In the Omega library, each image is represented by a
union of convex linear constraints. The efficiency of an
image calculation depends upon the number of variables
and the number of constraints. Experience shows that,
when a specification has a bug, there are usually numerous
sets of parameterized constant and variable values
that lead to the bug. These values usually satisfy many
constraints in an image. Thus, considering only a part
of the image will usually still let the model checker find
the bug. As reported in [13], fixing a number of parameterized
constant values increases the speed of the model
checker, since the number of variables in the image is
decreased. This is a special case of the partial image
technique. However, finding the right set of constant
values leading to a potential bug is not easy for complex
specifications; it usually requires a user that thoroughly
understands the specification. The partial image technique
presented in this paper is used without fixing any
constant values, by applying the PartialImage() opera-
tor, which returns only half of the unions for the image.
The algorithm presented in Figure 4 is essentially the
same as the one in [13] except that during the depth first
search the PartialImage() operator is applied on each
preimage on node A. This reduced preimage represents
3 numChildren(A) is always at least one, since each node has
a successor through the idle transition.
the set of reachable states at the node. The approximated
image is then used to calculate the postimage of
the node, as stated in the algorithm. In previous experiments
[13] the two test cases where the symbolic model
checker failed to complete the search procedure were due
to the extremely large size of the instance. The large
number of constants and variables used in the test cases
resulted in an extremely long time (hours as observed
in [13] ) for a single image computation. When represented
in the Omega library, these computations usually
involve images containing hundreds or even thousands
of unions of convex regions. Thus, it is natural to cut the
image size of the reachable states at each node. Doing
this is always sound for the ASTRAL symbolic model
checker, because in ASTRAL, only safety and bounded
liveness properties are specified.
Boolean Check(Node A, int depth)
{
  if (A.layer = depth) then return true;
  else {
    A.preimage := PartialImage(A.preimage);
    if (A.postimage is not empty) {
      if (A.preimage ⊄ Prop) then return false;
      else foreach B with ⟨A, B⟩ ∈ R_G:
        if (not Check(B, depth)) then return false;
    }
    return true;
  }
}
Figure 4: The model checking algorithm with partial image
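A sketch of the PartialImage() operator, under the assumption that an image is stored as a union (here, a Python list) of convex regions; the regions themselves are opaque placeholders for Omega relations.

    # PartialImage sketch: keep only half of the disjuncts of a union-of-convex-regions
    # image before propagating it to the children; never drop the image entirely.
    def partial_image(disjuncts):
        keep = max(1, len(disjuncts) // 2)
        return disjuncts[:keep]

    image = ["region_1", "region_2", "region_3", "region_4", "region_5"]
    print(partial_image(image))    # ['region_1', 'region_2']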
Dynamic environment generation
The dynamic environment generation technique used in
this paper is proposed in detail in [14]. ASTRAL is a
history dependent specification language. A technique
is needed to encode the history of an imported variable
when its past values are referenced, since as pointed
out in [13], it is too costly to encode the entire history.
Therefore, a limited window size technique is proposed
in that paper to approximate the entire history by only
a part of it. For example, a window size of two means
that the process instance can only remember an imported
variable's last two change times, and the values
before and after the changes. As observed in [14], the
imported variables and their history encodings are the
main bottleneck of the symbolic model checker, due to
the extra variables introduced for each imported vari-
able. Thus, an environment is characterized by all the
imported variables and their histories inside a given win-
dow. If every variable in the environment has a concrete
value, then the environment is called concrete. The dynamic
environment generation technique effectively generates
a reasonable sequence of concrete environments
for each execution path. The sequence is selected according
to the imported variable clause and the previous
environment. The reader is referred to [14] for the
details of the algorithm.
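The generation step can be pictured as choosing, along the current path, one concrete assignment for the imported-variable window that satisfies the imported variable clause and is consistent with the previously chosen environment; the sketch below uses stand-in domains and a stand-in clause, and is not the algorithm of [14].

    import itertools

    # Sketch: pick one concrete environment (values for the imported-variable history
    # window) that satisfies the imported variable clause, given the previous one.
    def generate_environment(domains, clause, previous):
        names = list(domains)
        for values in itertools.product(*(domains[n] for n in names)):
            candidate = dict(zip(names, values))
            if clause(candidate, previous):
                return candidate
        return None   # no admissible environment: this path is infeasible

    domains = {"train_in_R": [False, True], "change_time": [0, 5, 10]}
    clause = lambda env, prev: prev is None or env["change_time"] >= prev["change_time"]
    print(generate_environment(domains, clause, previous=None))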
The techniques used in combination
The three techniques mentioned above can also be used
in combination in a straightforward manner. For exam-
ple, random walk and partial image can be combined
in such a way that the model checker propagates only
part of the reachable states to the children nodes while
it randomly skips a number of branches. Random walk
and dynamic environment generation can also work together
such that along each randomly chosen execution
path a sequence of concrete environments are generated.
Similarly, partial image and dynamic environment generation
can be applied when a part of the image of reachable
states is used to calculate the postimage and they
are also used to generate the concrete environments. In
the following section, the results of running ten mutation
tests of the Gate process using the model checker
with each of the techniques and their combinations are
presented.
Performance comparisons: a case study
All three approximation techniques are integrated into
the ASTRAL symbolic model checker. Since the use
of the symbolic model checker in the ASTRAL SDE
is only for debugging purposes, its effectiveness for detecting
a potential error in a specification is the major
concern. To demonstrate the effectiveness of the
approximation techniques proposed in the last section,
the model checker was run on ten mutations [21] of the
Gate process from the railroad crossing specification.
The reason that the Gate process specification was used
is that it contains imported variables as well as their
histories. These imported variables result in a large instance
of the Gate process for which the symbolic model
checker previously failed to complete [13], when not using
approximation techniques. Each mutation contains
a minor change to the original specification 4 . A detailed
list of all the mutations can be found in Table
1. As pointed out in [21], the mutation techniques can
be used in two ways for formal specifications: they can
help a user understand the specification, and they can
test the strength of a specification. Experience shows
that real-time specifications are hard to write and to
read, especially when they involve complex timing con-
straints. A user can mutate a part of the specification
where he or she believes that such a change should affect
the behavior of the system. If the mutant is killed
(i.e., a violation is found), then a specification level violation
trace is demonstrated. Reading through the trace
4 The unmutated railroad crossing specification has been
proved to be correct using the ASTRAL theorem prover, which is
also part of the ASTRAL SDE [22].
helps the user to quickly figure out where and how the
syntax change affects the specification. If a mutation is
created by weakening an assumption in the specification
and the model checker fails to find any violations, then a
potential weakness is demonstrated in the original spec-
ification. There are two possibilities in this case. One is
that the model checker is not able to find the bug under
this specific run with the specific setup. The other is
that the mutation is equivalent to the original (correct)
specification.
M1 delete the 1st conjunction from the axiom of GATE
M2 delete the 2nd conjunction from the axiom of GATE
M3 delete the 3rd conjunction from the axiom of GATE
M4 delete the term raise dur from the 2nd conjunction of the axiom of GATE
M5 delete the term up dur from the 3rd conjunction of the axiom of GATE
M6 delete now-Change(s.train in R)>=RImax-response time from the 1st conjunction of the schedule of GATE
M7 delete the imported variable clause of GATE
M8 delete now-End(lower)>=lower time from the entry assertion of transition down
M9 delete now-End(raise)>=raise time from the entry assertion of transition up
M10 delete ~(position=raising | position=raised) from the entry assertion of transition raise
Table 1. Ten mutations of the railroad crossing specification
For all of the tests, the constants min speed and max speed were set to 15 and 20, respectively, the constant n tracks was set to 2, and the history window size was chosen as 2. There were no other user-assigned constants. The maximal search depth was 10. The reason that both n tracks and the window size were chosen to be 2 is that this setting demonstrates the effectiveness of the model checker on an extremely large instance of the specification.
Table
2 and Table 3 show the results with the three
approximation techniques used separately and used in
combination, respectively. In the tables, each result
contains the number of nodes visited in the execution
tree, the time taken (measured in seconds), and the result
status. The status values are "\Theta" (i.e., the model
checker is able to detect a violation), "
"(i.e., the model
checker finishes and reports no error), or """(i.e., the
model checker fails to finish in a reasonable amount of
time). A number is also attached to the status value to
indicate the actual number of runs of the specific tests
that were performed. For example, "\Theta(2)" means a violation
is found after the second run and within the first
run no error was detected. In this case, the number of
nodes and the time taken are the sum of the two runs.
For each case at most two trials were made. For M3,
M5, M8 and M9, the model checker ran only once. The
reason is that the model checker will not report any errors
for these cases, as discussed below. For comparison,
all the mutations were also run using the earlier symbolic
model checker that did not use the approximation
techniques. The results of these runs are shown under
column "plain" in Table 2. All tests were performed
on a Sun Ultra 1 with 64M main memory and 124M
swap memory. It should be noted that all of the experiments
were independent. That is, before each run of
the model checker, the cache was cleaned 5 . Therefore,
the performances of different runs are comparable.
As observed in [14], among the ten mutations, M8 and
M9 are both correct. Hence the model checker should
not report any error for these cases. M3 and M5 are
two cases that demonstrate a limitation of the symbolic
model checker when it fails to detect an error although
the mutations should be killed. In [14], the explicit
state model checker [12] was used to successively find
the violations under a set of constant values provided
by the specifier. Test results on these live mutants are
still meaningful in that they can be used to show the
node coverage of each approximation technique when
the model checker completes the search procedure. The
remaining six mutations are the ones that the model
checker is able to kill. They are used to demonstrate
the effectiveness of using the model checker to debug a
specification.
A number of observations can be made from the results
shown in Table 2 and Table 3. During the first run,
all six mutations were killed by using partial image and
dynamic environment generation separately. After the
second run, partial image combined with dynamic environment
generation was also able to kill all six muta-
tions. Random walk, either applied separately or combined
with the other two techniques, was not able to
kill the six mutations. M6, M7 and M10 show relatively
short violation traces, while M1, M2 and M4 show long
violation traces. Therefore, we are more interested in
the latter three cases. In the first run, random walk
was able to kill three mutations on average. After the
second run, however, it could not kill M1 for each of
its three uses. The time used in finding an error is
2 to 4 times faster on average than the time without
using approximation techniques. Even when the error
was detected after the second run, the total time used
in the two runs is still 2 times faster on average. One
can also notice that using the techniques in combination
could speed up the procedure further, though the
ratio is not especially high, and on occasion it is worse.
The reason is that using the techniques in combination
sacrifices more coverage. Therefore, it has less chance of
detecting an error in the first run. Consider the results
on the live mutants M3, M5, M8 and M9 mentioned
above. On these mutations, random walk has the least
node coverage especially when applied in combination
with the other two techniques. In contrast, both partial
image and dynamic environment generation have much
(Footnote 5: Without cleaning the cache, a 4 to 10 times speedup is usually
observed. However, it was noticed that the approximation techniques
used in this paper can also degrade cache performance.)
cases | nodes, time, result (partial image) | nodes, time, result (dynamic env) | nodes, time, result (random walk) | nodes, time, result (plain)
M4 | 28, 1,694, \Theta(1) | 23, 1,726, \Theta(1) | 104, 5,069, √(2) | 23, 10,168, \Theta(1)
[the remaining rows of Table 2 and the rows of Table 3 are not recoverable from the extracted text]
Table 2. Experiments: the approximation techniques used separately
Table 3. Experiments: the approximation techniques used in combination
higher node coverage, even when applied in combina-
tion. However, it is unknown what the total number
of reachable nodes is, since the symbolic model checker
failed to complete the searching procedure for all four
of these mutations.
From the above analysis, one can conclude that all of
the approximation approaches are effective. They are
able to kill at least half of the mutations in a much
shorter time during the first run, while they can finish
the procedure in a reasonable amount of time for live
mutants. For this specific set of tests, partial image and
dynamic environment generation are the most effective.
They are fast to detect an error, and when no error is
reported, they attain a high node coverage. Random
walk performs slightly worse than the other two tech-
niques. The reason is that random walk didn't reach a
high node coverage as shown by the results on the live
mutations. Along each execution path, the image calculations
are still very expensive since a node must propagate
a full reachable image. However, compared to the
model checker without approximation techniques, the
performance in detecting a violation is still much faster.
5 Conclusions and Future Work
In this paper, three approximation techniques for using
the ASTRAL symbolic model checker as a specification
debugger were introduced. The techniques are random
walk, partial image and dynamic environment genera-
tion. The random walk technique is used to allow the
model checker to randomly skip a number of branches
when traversing the execution tree. The partial image
technique considers only a subset of the image and uses
this subset to calculate the postimage at each node. The
dynamic environment generation technique [14] generates
different sequences of imported variable values for
different execution paths. The three techniques were
applied separately and in combination to ten mutation
tests on the Gate process in the railroad crossing benchmark
specification. All the techniques are effective in
finding bugs.
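As a rough illustration of the random walk idea, the following C sketch
shows one way random branch skipping could be layered on a depth-bounded
traversal of an execution tree. It is only a schematic sketch: the node_t
type, the skip_probability parameter, and the violates flag are assumptions
made for the illustration, not code from the ASTRAL model checker.

#include <stdlib.h>

/* Toy execution-tree node; in the real model checker a node would carry
 * a symbolic state (e.g., a Presburger formula), not just a flag. */
typedef struct node {
    int violates;               /* does this state violate the property? */
    int nchildren;
    struct node **children;
} node_t;

static int skip_probability = 30;   /* percent of branches to skip (tunable) */

/* Depth-bounded traversal that randomly prunes branches, as in the
 * random-walk approximation: some successors are simply not explored,
 * trading node coverage for speed.  Returns 1 if a violation was found. */
static int random_walk(node_t *n, int depth, int max_depth)
{
    if (n->violates)
        return 1;
    if (depth >= max_depth)
        return 0;
    for (int i = 0; i < n->nchildren; i++) {
        if (rand() % 100 < skip_probability)
            continue;               /* skip this branch entirely */
        if (random_walk(n->children[i], depth + 1, max_depth))
            return 1;
    }
    return 0;
}

Skipping a branch with some fixed probability is exactly what trades
coverage for speed, which is consistent with the lower node coverage
observed for random walk in the experiments above.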
Besides the techniques discussed in this paper, we believe
that there are many other applicable approximation
approaches. Unlike other model checkers, the ASTRAL
model checker is primarily intended for use as
a specification debugger. Once the fixed point computation
is out of the question, numerous approximation
techniques that already exist in the testing area can also
be investigated. We believe the techniques proposed in
this paper will also be useful in model checkers using
different specification languages as long as only safety
properties are considered. For debugging a general temporal
property formulated in a temporal logic, it is still
unknown how well these approximation approaches will
work. This is an area for further research. The coverage
analysis in this paper is empirical. The factors
considered are time and number of nodes. Another issue
to be investigated is what metrics can be used to
systematically measure path and/or node coverage for
a specific approximation technique applied on an execution
tree. This is a challenging topic, since sometimes
the symbolic model checker without using the approximation
techniques fails to complete the entire search.
Therefore, a theoretical estimation is urgently needed.
Acknowledgments
The authors would like to thank T. Bultan and P.
Kolano for many insightful discussions. The ten mutations
tested in this paper were created as a result of
discussions among T. Bultan, P. Kolano and the authors
well before this paper was written. The specification
was written by P. Kolano.
--R
"Hybrid specification of control systems,"
"Hardware specification using the assertion language ASTRAL,"
"Verifying Systems with Integer Constraints and Boolean Pred- icates: A Composite Approach."
"Symbolic Model Checking of Infinite State Systems Using Presburger Arithmetic,"
"Model Checking Concurrent Systems with Unbounded Integer Variables: Symbolic Representations, Approximations and Experimental Results."
"Deductive Verification of Real-time Systems using STeP,"
"STeP: Deductive-Algorithmic Verification of Reactive and Real-time Systems."
"Auto- matically closing open reactive programs,"
"Specification of real-time systems using ASTRAL,"
"A formal framework for ASTRAL intralevel proof obligations,"
"Using the ASTRAL model checker for cryptographic protocol analysis,"
"Using the ASTRAL model checker to analyze Mobile IP,"
"A Symbolic Model Checker for Testing ASTRAL Real-time specifica- tions,"
"Dynamic Environment Generations for an ASTRAL Process,"
"A report on random testing,"
"Quantifying software validity by sampling,"
"Model checking for programming languages using VeriSoft,"
" A partial approach to model checking. "
"The generalized railroad crossing: a case study in formal verification of real-time systems,"
"Proof Assistance for Real-Time Systems Using an Interactive Theorem Prover,"
" Mutation tests for ASTRAL real-time spec- ifications,"
"The design and analysis of real-time systems using the ASTRAL software development environment,"
"The Omega test: a fast and practical integer programming algorithm for dependence analy- sis,"
--TR
A partial approach to model checking
Model checking for programming languages using VeriSoft
Specification of realtime systems using ASTRAL
Verifying systems with integer constraints and Boolean predicates
Automatically closing open reactive programs
Using the ASTRAL model checker to analyze mobile IP
Model-checking concurrent systems with unbounded integer variables
The design and analysis of real-time systems using the ASTRAL software development environment
A Formal Framework for ASTRAL Intralevel Proof Obligations
Symbolic Model Checking of Infinite State Systems Using Presburger Arithmetic
Deductive Verification of Real-Time Systems Using STeP
Proof Assistance for Real-Time Systems Using an Interactive Theorem Prover
A report on random testing
A Symbolic Model Checker for Testing ASTRAL Real-Time Specifications
Dynamic Environment Generations for an ASTRAL Process
--CTR
Bernard Boigelot , Louis Latour, Counting the solutions of Presburger equations without enumerating them, Theoretical Computer Science, v.313 n.1, p.17-29, 16 February 2004
Zhe Dang , Tevfik Bultan , Oscar H. Ibarra , Richard A. Kemmerer, Past pushdown timed automata and safety verification, Theoretical Computer Science, v.313 n.1, p.57-71, 16 February 2004
Zhe Dang , Oscar H. Ibarra , Richard A. Kemmerer, Generalized discrete timed automata: decidable approximations for safety verification, Theoretical Computer Science, v.296 n.1, p.59-74, 4 March | formal methods;ASTRAL;real-time systems;state machines;formal specification and verification;timing requirements;model checking |
337222 | Light-weight context recovery for efficient and accurate program analyses. | To compute accurate information efficiently for programs that use pointer variables, a program analysis must account for the fact that a procedure may access different sets of memory locations when the procedure is invoked under different callsites. This paper presents light-weight context recovery, a technique that can efficiently determine whether a memory location is accessed by a procedure under a specific callsite. The paper also presents a technique that uses this information to improve the precision and efficiency of program analyses. Our empirical studies show that (1) light-weight context recovery can be quite precise in identifying the memory locations accessed by a procedure under a specific call-site and (2) distinguishing memory locations accessed by a procedure under different callsites can significantly improve the precision and the efficiency of program analyses on programs that use pointer variables. | INTRODUCTION
Software development, testing, and maintenance activities
are important but expensive. Thus, researchers
have investigated ways to provide software tools to improve
the efficiency, and thus reduce the cost, of these
activities. Many of these tools require program analyses
to extract information about the program. For exam-
ple, tools for debugging, program understanding, and
impact analysis use program slicing (e.g., [5, 6, 15]) to
focus attention on those parts of the software that can
influence a particular statement. To support software
engineering tools effectively, a program analysis must be
sufficiently efficient so that the tools will have a reasonable
response time or an acceptable throughput. More-
over, the program analysis must be sufficiently precise
so that useful information will not be hidden within spurious
information.
1. int x;
2. f(int* p) {
3.
4. }
5. f1(int* q) {
6. int z;
7. f(&z);
8. f(q);
9. }
10. int
11. main() {
12. int
13.
14.
15. f1(&y);
16. f(&w);
17. printf("%d",w);
18. }
Figure
1: Example Program.
Many program analyses can e#ectively compute program
information for programs that do not use pointer
variables. However, when applying these techniques to
programs that use pointer variables, several issues must
be considered. First, in programs that use pointer vari-
ables, two different names may reference the same memory
location at a program point. For example, in the
program in Figure 1, both pointer dereference *p and
variable name y can reference the memory location for
y at statement 3. This phenomenon, called aliasing,
must be considered when computing safe program in-
formation. For example, without considering the effects
of aliasing, a program analysis would ignore the fact
that y is referenced by *p in statement 3 and thus, conclude
incorrectly that procedure f() does not modify y.
Second, in programs that use pointer variables, a procedure
can access different memory locations through
pointer dereferences when the procedure is invoked at
different callsites. For example, f() accesses y and x
when it is called by statement 8, but it accesses w and x
when it is called by statement 16. A program analysis
that cannot distinguish the memory locations accessed
by the procedure under the context of a specific callsite
might compute spurious program information for this
callsite. For example, a program analysis might report
that y, z, and w are modified by f() when f() is called
by statement 8. Furthermore, a program analysis that
propagates the spurious program information throughout
the program will be unnecessarily inefficient.
Many existing techniques (e.g., [2, 8, 12, 14]) compute
safe program information by accounting for the effects
of aliasing in the analysis. However, only a few of these
techniques (e.g., [8, 12]) can distinguish memory locations
accessed by a procedure under the context of specific
callsites. These techniques use conditional analysis
that attaches conditions to the information generated
during program analysis. For example, *p references w
at statement 3 (Figure 1) if *p is aliased to w at the entry
to f(). Thus, a program analysis reports that w is
modified by statement 3 under this condition. To determine
whether w is modified by f() when statement 8 is
executed, the program analysis determines whether *p
is aliased to w at the entry to f() by checking the alias
information at statement 8. Because *q is not aliased
to w at statement 8, *p is not aliased to w at the entry
to f() when f() is invoked at statement 8. Thus, the
program analysis reports that w is not modified when
statement 8 is executed.
Although a technique using conditional analysis can distinguish
memory locations accessed by a procedure under
specific callsites, and thus avoid computing spurious
program information, it can be inefficient. First, it requires
conditional alias information, which currently can
be provided only by expensive alias-analysis algorithms
(e.g., [7]). Second, it can increase the cost of computing
the program information. For example, without using
conditional analysis, the complexity of computing inter-procedural
reaching definitions is O(n^2 v), where n is the
size of the program and v is the number of names that
reference memory locations, whereas using conditional
analysis, the complexity is O(n
To compute accurate program information without using
expensive conditional alias information or adding
complexity to a program analysis, we develop a new
approach that has two parts. First, by examining the
ways in which memory locations are accessed in a proce-
dure, it efficiently identifies the set of memory locations
that can be accessed by the procedure under a specific
callsite. Second, it uses this set of memory locations
to reduce the spurious information propagated from the
procedure to the callsite or from the callsite to the procedure
To efficiently identify the memory locations that can be
accessed by a procedure under a specific callsite, we developed
a technique, light-weight context recovery. This
technique is based on the observation that, under the
context of a callsite, a formal parameter for a procedure
typically points to the same set of memory loca-
tions, throughout the procedure, as the actual parameter
to which it is bound if the formal parameter is a
pointer. Given a memory location that is accessed exclusively
through the pointer dereferences of this formal
parameter, such memory location can be accessed by
the procedure under a specific callsite only if the memory
location can be accessed at the callsite through the
pointer dereferences of the actual parameter to which
it is bound. This observation allows the technique to
identify the memory locations that are accessed by a
procedure under a specific callsite.
To reduce the spurious information propagated across
procedure boundaries by program analyses, we also developed
a technique that uses the information about
memory locations computed by light-weight context re-
covery. At each callsite, program information about a
memory location that is not identified as being accessed
by the called procedure need not be propagated from the
callsite to the called procedure or from the called procedure
to the callsite. Thus, this technique can improve
the precision and the efficiency of program analyses.
This paper presents our new approach. The paper first presents
a light-weight context recovery algorithm (Section 2); it
then illustrates, using interprocedural slicing [15], the
technique that uses information provided by the light-weight
context recovery to improve program analyses
(Section 3). The main benefit of our approach is that
it is efficient: it can use alias information provided by
efficient alias analysis algorithms, such as Liang and
Harrold's [9] and Andersen's [1]; the light-weight context
recovery is almost as efficient as modification side-
effects analysis; using information provided by light-weight
context recovery in a program analysis adds little
cost to the program analysis. A second benefit of
our approach is that, in many cases, it can identify a
large number of memory locations whose information
need not be propagated to specific callsites. Thus, it
can provide significant improvement in both the precision
and the efficiency of the program analysis. A third
benefit of our approach is that it is orthogonal to many
other techniques that improve the efficiency of program
analyses. Thus, it can be used with those techniques to
improve further the efficiency of program analyses.
This paper also presents a set of empirical studies in
which we investigate the effectiveness of using light-weight
context recovery to improve the efficiency and
the precision of program analyses (Section 4). These
studies show a number of interesting results:
- For many programs that we studied, the light-weight
context recovery algorithm computes, with relatively little
increase in cost, a significantly smaller number of
memory locations as being modified at a callsite than
that computed by the traditional modification side-
effect analysis algorithm.
- For several programs, using alias information provided
by Liang and Harrold's algorithm [9] or Andersen's algorithm
[1], the light-weight context recovery algorithm
reports almost the same modification side-effects at a
callsite as Landi, Ryder, and Zhang's algorithm [8],
which must use conditional alias information.
- Using information provided by the light-weight context
recovery can reduce the sizes of slices computed using
the reuse-driven slicing algorithm [5] and the time
required to compute such slices.
2 LIGHT-WEIGHT CONTEXT RECOVERY
This section first gives some definitions and then
presents the light-weight context recovery algorithm.
Definitions
Memory locations in a program are referenced through
object names; an object name consists of a variable and
a possibly empty sequence of dereferences and field ac-
cesses. If an object name contains no dereferences, then
the object name is a direct object name. Otherwise, the
object name is an indirect object name. For example,
x.f is a direct object name, whereas *p is an indirect
object name. A direct object name obj represents a
memory location, which we refer to as L(obj).
We say object name N 1 is an extension of object name
N 2 if N 1 is constructed by applying a possibly empty
sequence of dereferences and field accesses σ to N 2 ; in
this case, we denote N 1 as E σ ⟨N 2 ⟩. We refer to N 2 as
a prefix of N 1 . If σ is not empty, then N 1 is a proper
extension of N 2 , and N 2 is a proper prefix of N 1 . If
N is a formal parameter and a is the actual parameter
that is bound to N at callsite c, we define a function
A c (E σ ⟨N ⟩) that returns object name E σ ⟨a⟩.
For example, suppose that r is a pointer that points to
a struct with field f . Then E * ⟨r⟩ is *r, E *.f ⟨r⟩ is (*r).f ,
and r is a proper prefix of *r. If r is a formal parameter
to function F and *q is the actual parameter bound to
r at a callsite c to F , then A c (E .f ⟨r⟩) is (*q).f .
Given an object name obj and a statement s, an alias
analysis can determine a set of memory locations that
may be aliased to obj at s. We refer to this set as obj's
accessed set at s, and denote this set as ASet(obj, s).
For example, in Figure 1, ASet(*p, 3) = {y, z, w} if
Landi and Ryder's algorithm [7] is used. Given a memory
location loc and a procedure P , the name set of loc
in P contains the object names that are used to reference
loc in P . For example, the name set for y in f()
(Figure 1) is {*p}. A memory location loc supports object
name obj at statement s if the value of loc may
be used to resolve the dereferences in obj at s. For ex-
ample, suppose that q points to r and r points to x at
statement s. Then r supports *q at s.
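To ground these definitions in C terms, the short fragment below is a
hypothetical example (separate from Figure 1); the comments indicate which
object names are direct or indirect, which names are extensions of which,
and which locations support a dereference under the stated assumptions.

struct S { int f; };

void h(struct S *r, struct S **q)
{
    struct S s1 = { 0 };
    int x;

    /* x and s1.f are direct object names: no dereference is applied.     */
    x = s1.f;

    /* (*r).f is an indirect object name; it is a proper extension of r
     * (the sequence "*.f" applied to r), and r is a proper prefix of it. */
    (*r).f = x;

    /* Resolving **q needs the value of q and then the value of *q, so
     * both q and the location q points to support the object name **q.  */
    (**q).f = x;
}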
Light-Weight Context Recovery
To identify memory locations accessed by a procedure
P under a specific callsite, light-weight context recovery
considers the nonlocal memory locations in P . If the
name set of a nonlocal memory location loc contains a
direct object name, then the technique reports that loc
is accessed under each callsite to P . If the name set
of loc contains a single indirect object name, then the
technique computes additional information to determine
whether loc can be accessed under a specific callsite to
P . In other cases, for efficiency, the technique assumes
that loc is accessed under each callsite to P .
Suppose that indirect object name obj is the only object
name in the name set of nonlocal memory location
loc in procedure P . Then, if loc is referenced in P , it
must be referenced through obj. If none of the memory
locations supporting obj at statements in P is modified
in P , then when P is executed, obj references the same
memory location at each point in P . In this case, if obj
is an extension of a formal parameter, then when P is
called at callsite c, obj must reference the same memory
location as the one referenced by A c (obj) at c. Thus, if
loc is referenced in P under c, loc must be referenced by
A c (obj) at c. During program analysis, we must propagate
the information for loc from P to c only if A c (obj)
is aliased to loc at c. This property of memory locations
gives us an opportunity to avoid propagating (i.e.,
filter) some spurious information when the information
is propagated from the procedure to a callsite during
program analyses. We say that, if loc has this property
in P , then loc is eligible to be filtered under callsites to
P , and we say that loc is a candidate. More precisely,
loc is a candidate in P if the following conditions hold.
Condition 1: loc's name set in P contains a single
indirect name obj.
Condition 2: obj is a proper extension of a formal
parameter to P .
Condition 3: the memory locations supporting obj
at statements in P are not modified in P .
For example, in Figure 1, the name set of w in f() contains
only *p, a proper extension of formal parameter
p. The value of p does not change in f(). Thus, w is
a candidate in f(). Because A 16 (#p) is w, we need to
propagate the information for w from f() to statement
16. However, because A 7 (#p) is z, we need not propagate
the information for w from f() to statement 7.
Light-weight context recovery processes the procedures
in a reverse topological (bottom-up) order on the call
graph 1 to identify the memory locations that are candidates
in each procedure. Figure 2 shows algorithm
ContextRecovery that performs the processing. For a
nonlocal memory location loc referenced in procedure
ContextRecovery computes a mark, MarkP [loc],
whose value can be unmarked (U), eligible (√), or
ineligible (-). By default, MarkP [loc] is initialized to
U. If loc is a candidate in P , then MarkP [loc] is √. In
this case, if obj is the object name in loc's name set in
P , then ContextRecovery stores obj in OBJP [loc]. If
loc is not a candidate in P , then MarkP [loc] is - and
OBJP [loc] is ∅. ContextRecovery also sets MODP [loc]
to true if loc is modified by a statement in P so that it
can check whether the memory locations supporting an
object name have been modified in P .
ContextRecovery examines the way loc is accessed in
P , and calls Update() at various points (e.g., lines 5 and 8)
algorithm ContextRecovery(P)
input P: a program
global MarkP : maps memory locations in procedure P to marks
MODP : records whether a memory location is modified
OBJP : maps memory locations to object names
declare W : list of procedures sorted in reverse topological order
begin ContextRecovery
1. foreach procedure P do /*Intraprocedural phase*/
2. foreach statement s in P do
3. foreach object name obj in s do
4. if obj is direct then
5. Update(P ,L(obj),F,∅)
6. else
7. foreach memory location loc in ASet(obj, s) do
8. Update(P ,loc,T,obj)
9. endfor
10. endif
11. endfor
12. set MODP for memory locations modified at s
13. endfor
14. add P to W
15. endfor
16. while W ≠ ∅ do /*Interprocedural phase*/
17. take the first procedure P from W
18. foreach callsite c to R in P do
19. foreach nonlocal memory loc referenced in R do
20. if MarkR [loc] ≠ √ then
21. Update(P ,loc,F,∅)
22. elseif loc ∈ ASet(Ac (OBJR [loc]), c) then
23. if Ac (OBJR [loc]) is indirect then
24. Update(P ,loc,T,Ac (OBJR [loc]))
25. else
26. Update(P ,loc,F,∅)
27. endif
28. endif
29. update MODP [loc]
30. endfor
31. endfor
32. validate object names in OBJP with MODP
33. if MarkP or MODP updated then add P 's callers to W
34. endwhile
ContextRecovery
procedure Update(Q,l,m,o)
input Q is a procedure, l is a memory location,
m is a boolean, and o is an object name or ∅
global MarkQ : marks for memory locations in Q
OBJQ : array of object names
begin Update
35. if m = F then
36. MarkQ [l] := -; OBJQ [l] := ∅
37. elseif m = T and MarkQ [l] = U then
38. if o is an extension of a formal then
39. MarkQ [l] := √ ; OBJQ [l] := o
40. else
41. MarkQ [l] := -; OBJQ [l] := ∅
42. endif
43. elseif m = T and MarkQ [l] = √ then
44. if o ≠ OBJQ [l] then
45. MarkQ [l] := -; OBJQ [l] := ∅
46. endif
47. endif
Update
Figure 2: Algorithm identifies candidate memory locations.
to compute MarkP [loc] and OBJP [loc]. Update()
inputs procedure Q, a memory location l, a boolean
flag m, and an object name o. When ContextRecovery
detects that a memory location is accessed at a statement
in P , if the memory location is accessed through
an indirect object name, the algorithm calls Update()
with m as T (true). Otherwise, if the memory location
is not accessed through an indirect object name, the
algorithm calls Update() with m as F (false).
Update() sets the values for MarkQ [l] and OBJQ [l] according
to m and the current value of MarkQ [l]. If m
is F, then l is not a candidate in Q because condition
1 is violated. Thus, MarkQ [l] is updated with - and
OBJQ [l] is updated with ∅ (line 36). If m is T and the
current value of MarkQ [l] is U, then Update() checks
to see whether o is an extension of a formal parameter
to Q (lines 37-38). If so, then l is a candidate according
to the information available at this point of computa-
tion. Thus, MarkQ [l] is updated with √ and OBJQ [l]
is updated with o (line 39). Otherwise, l is not a candidate
because condition 2 is violated. Thus, MarkQ [l]
is updated with - and OBJQ [l] is updated with ∅ (line
41). If m is T and MarkQ [l] is √, then Update() checks
whether o is the same as OBJQ [l] (lines 43-44). If they
are not the same, then l can be accessed in Q through
more than one object name. This violates condition
1. Thus, l is not a candidate and MarkQ [l] is updated
with - and OBJQ [l] is updated with ∅ (line 45). For the
other cases, MarkQ [l] and OBJQ [l] remain unchanged.
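One possible C rendering of Update() is sketched below, with the three mark
values encoded as an enum and an empty OBJ entry represented by a null
pointer. The concrete types and the extends_formal helper are assumptions of
the sketch; the logic follows the case analysis just described.

#include <string.h>

typedef enum { UNMARKED, ELIGIBLE, INELIGIBLE } mark_t;

/* Per-procedure summary entry for one memory location l in procedure Q;
 * the representation chosen here is an assumption of the sketch. */
typedef struct {
    mark_t      mark;     /* MarkQ[l]                                      */
    const char *obj;      /* OBJQ[l]; NULL plays the role of the empty set */
} summary_t;

/* Assumed helper: is object name o a proper extension of a formal of Q? */
int extends_formal(const char *o, const void *Q);

/* Sketch of Update(Q, l, m, o) from Figure 2.
 * m != 0 means l was accessed through indirect object name o. */
void update(summary_t *entry, const void *Q, int m, const char *o)
{
    if (!m) {                                    /* accessed some other way   */
        entry->mark = INELIGIBLE; entry->obj = NULL;
    } else if (entry->mark == UNMARKED) {
        if (extends_formal(o, Q)) {              /* conditions 1-2 may hold   */
            entry->mark = ELIGIBLE;  entry->obj = o;
        } else {
            entry->mark = INELIGIBLE; entry->obj = NULL;
        }
    } else if (entry->mark == ELIGIBLE) {
        if (strcmp(o, entry->obj) != 0) {        /* a second name: violates 1 */
            entry->mark = INELIGIBLE; entry->obj = NULL;
        }
    }
}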
ContextRecovery computes Mark, OBJ , and MOD
using an intraprocedural phase and an interprocedural
phase. In the intraprocedural phase, ContextRecovery
processes the object names appearing at each statement
s in a procedure P (lines 2-13). If an object name obj
is direct, ContextRecovery calls Update(P ,L(obj),F,∅)
(lines 4-5). If obj is indirect, then for each memory
location loc in ASet(obj, s), ContextRecovery calls
Update(P ,loc,T,obj) (lines 7-9). For each memory location
l that is modified at s, ContextRecovery also sets
MODP [l] to be true (line 12).
For example, when ContextRecovery processes statement
3 in f() in Figure 1, it checks *p. *p is an indirect
name and ASet(*p, 3) = {y, z, w}. Thus, the
algorithm calls Update(f,y,T,*p), Update(f,z,T,*p),
and Update(f,w,T,*p). ContextRecovery also checks
x at statement 3. Because x is a direct name,
ContextRecovery calls Update(f, x,F,∅). After f() is
processed, Mark f and OBJ f have the following values:
Mark f [x] = -, OBJ f [x] = ∅;
Mark f [y] = Mark f [z] = Mark f [w] = √;
OBJ f [y] = OBJ f [z] = OBJ f [w] = *p.
In the interprocedural phase, ContextRecovery processes
each callsite c to procedure R in a procedure P
(lines 16-34) using the worklist W . For each nonlocal
memory location loc referenced in R, ContextRecovery
checks MarkR [loc] (line 20). If MarkR [loc] is not
√, then ContextRecovery assumes that loc is accessed
by R under each callsite to R, including c.
Thus, ContextRecovery calls Update(P ,loc,F,∅) to indicate
that loc is accessed by R under c in some
unknown way (line 21). Otherwise, if MarkR [loc]
is √, then ContextRecovery checks whether loc is
in ASet(A c (OBJR [loc]), c) (line 22). If loc is in
ASet(A c (OBJR [loc]), c), then loc is referenced by R under
c through object name A c (OBJR [loc]). Context-
Recovery calls Update(P ,loc,T,A c (OBJR [loc])) if
A c (OBJR [loc]) is indirect, and calls Update(P ,loc,F,∅)
if A c (OBJR [loc]) is direct (lines 23-27). If loc is not in
ASet(A c (OBJR [loc]), c), then loc is not referenced by
R under c. ContextRecovery does nothing in this case.
In the interprocedural phase, when a memory location
loc is processed, if MODR [loc] is true, then MODP [loc]
is also set to true (line 29).
For example, when ContextRecovery processes the
callsite to f() at statement 8 (Figure 1), it first
checks x. Mark f [x] is -. Thus, the algorithm
calls Update(f1,x,F,∅). The algorithm then checks y.
Mark f [y] is √. The algorithm checks whether y is in
ASet(A 8 (*p), 8) (i.e., ASet(*q, 8)); because it is,
the algorithm calls Update(f1,y,T,*q). The algorithm
finally checks z and w and does nothing because
z and w are not in ASet(*q, 8). The values for Mark f1
and OBJ f1 change from
Mark f1 [x] = -, OBJ f1 [x] = ∅
to Mark f1 [x] = -, OBJ f1 [x] = ∅;
Mark f1 [y] = √, OBJ f1 [y] = *q
after statement 8 is processed.
In the interprocedural phase, ContextRecovery also
validates the indirect object names appearing in OBJP
to make sure that the memory locations supporting
such indirect names in P are not modified in P (line
32). Suppose that indirect name obj is assigned to
OBJP [loc]. If there is a memory location l that supports
obj at a statement in P such that MODP [l] is
true, then loc is ineligible because condition 3 is vi-
olated. Thus, ContextRecovery updates MarkP [loc]
with - and OBJP [loc] with ∅.
After ContextRecovery processes P in the inter-procedural
phase, if MODP or MarkP is updated,
then the algorithm puts P 's callers in W (line
33). ContextRecovery continues until W becomes
empty.
Table 1 shows the results computed by
ContextRecovery for the example program (Figure 1):
for each nonlocal memory location referenced in f(), f1(), and
main(), it lists the Mark, OBJ , and MOD values.
[the individual rows of Table 1 are not recoverable from the extracted text]
Table 1: Mark, OBJ , and MOD for example program.
Given that n is the size of the program, the complexity
of ContextRecovery is O(n^2) in the absence of recursion
and O(n^3) in the presence of recursion. The complexity
of ContextRecovery is the same as the algorithm for
computing modification side e#ects for the procedures.
3 USING LIGHT-WEIGHT CONTEXT RECOVERY TO IMPROVE SLICING
This section shows how the information computed using
ContextRecovery can improve interprocedural slicing.
Other program analyses, such as computing interprocedural
reaching definitions and constructing system-
dependence graphs, can be improved in a similar way.
Interprocedural Slicing
Program slicing is a technique to identify statements in
a program that can affect the value of a variable v at
a statement s (⟨s, v⟩ is called the slicing criterion) [15].
Program slicing can be used to support tasks such as
debugging, regression testing, and reverse engineering.
One approach for program slicing first computes data
and control dependences among the statements and
builds a system-dependence graph, and then computes
the slice by solving a graph-reachability problem on this
graph [6]. Other approaches, such as the one used in
the reuse-driven interprocedural slicing algorithm [5],
use precomputed control-dependence information, but
compute the data-dependence information on demand
using control-flow graphs 2 (CFGs) for the procedures.
We use the reuse-driven slicing algorithm as an example
to show how a program analysis can be improved using
information provided by light-weight context recovery.
The reuse-driven slicer computes an interprocedural
slice for criterion ⟨s, v⟩ by invoking a partial slicer on
the procedures of the program. The reuse-driven slicer
first invokes the partial slicer on P s , the procedure that
contains s, to identify a subset of statements in P s or
in procedures called by P s and a subset of inputs to P s
that may a#ect v at s. We refer to s and v as the partial
slicing standard used by the partial slicer and denote it
as [s, v]. We refer to the subset of statements identified
by the partial slicer as a partial slice with respect to
[s, v]. We also refer to the subset of inputs identified by
the partial slicer as relevant inputs with respect to [s, v].
When P s is not the main function of the program, the
statements in procedures that call P s might also affect
the slicing criterion ⟨s, v⟩ through the relevant inputs
of [s, v]. Therefore, after P s is processed, for each call-site
c i that calls P s , the reuse-driven slicer binds each
relevant input f back to c i and creates a new partial
slicing standard [c i , a i ], given that a i is bound to f at
c i . The reuse-driven slicer then invokes the partial slicer
on [c i , a i ] to identify the statements in P c i
that should
be included in the slice. The algorithm continues until
no additional partial slicing standards can be generated.
The algorithm returns the union of all partial slices computed
by the partial slicer as the program slice for ⟨s, v⟩.
Figure
3 shows the call graph for the program in Figure
[The drawing of Figure 3 is not recoverable from the extracted text; it
shows the call graph over main, f1, and f annotated with the partial
slicing standards [3,x], [7,x], [8,x], [15,x], and [16,x].]
Figure 3: Call graph annotated with partial slicing standards
for ⟨3, x⟩; solid lines show graph edges; dotted lines
show relationships among partial slicing standards.
1. The graph is annotated with partial slicing standards
created by the reuse-driven slicer to compute the
slice for #3, x#. The reuse-driven slicer first invokes the
partial slicer on f() with respect to [3, x]. The partial
slicer computes partial slice {3} and relevant input
set {x}. After f() is processed, the reuse-driven slicer
creates new partial slicing standards [7, x] and [8, x] for
the callsites to f() in f1() and partial slicing standard
[16, x] for the callsite to f() in main(), and invokes the
partial slicer on these standards. The partial slicer computes
relevant input set {x} for [7, x] and [8, x]. After
the partial slicer finishes processing [7, x] or [8, x], the
reuse-driven slicer further creates partial slicing standard
[15, x] for the callsite to f1() in main(), and invokes
the partial slicer on this standard. The resulting
slice for ⟨3, x⟩ is {3, 7, 8, 13, 15, 16}, the union of all partial
slices computed during the processing.
The partial slicer computes the partial slice for standard
[s, v] by propagating memory locations backward
throughout P s using P s 's CFG. For each node N in the
CFG of P s , the partial slicer computes two sets of memory
locations: IN [N ] at the entry of N ; OUT [N ] at the
exit of N . IN [N ] is computed using OUT [N ] and information
about N . OUT [N ] is computed as the union of
the IN [] sets of N 's CFG successors. The partial slicer
iteratively computes IN [] and OUT [] for each node in
the CFG until a fixed point is reached. The formal parameters
and nonlocal memory locations in IN [] at P s 's entry are
the relevant inputs with respect to [s, v].
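Stated as dataflow equations, the backward propagation just described takes
roughly the following form. This is a standard formulation for slicing in the
presence of may-definitions (only must-definitions kill a relevant location),
given here as an illustration rather than quoted from the paper:

\begin{align*}
OUT[N] &= \bigcup_{S \in succ(N)} IN[S] \\
IN[N]  &= \big(OUT[N] \setminus MustDef(N)\big) \;\cup\;
          \begin{cases}
            Ref(N) & \text{if } Def(N) \cap OUT[N] \neq \emptyset \\
            \emptyset & \text{otherwise}
          \end{cases}
\end{align*}

When the second case fires, N (and the statements it is control dependent
on) is also added to the slice, matching the description below.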
When N is not a callsite, the partial slicer computes
IN [N ] by considering OUT [N ] and those memory locations
whose values are modified or used at N . If memory
locations in OUT [N ] can be modified at N , then N and
statements on which N is control dependent are added
to the slice. When N is a callsite to procedure Q, the
partial slicer must process Q to compute IN [N ] and to
identify the statements in Q for inclusion in the partial
slice.
Figure
4 shows ProcessCall(), the procedure
that processes a callsite c to Q.
ProcessCall() uses a cache to store the partial slice
and the relevant inputs for each partial slicing standard
created by the reuse-driven slicer. For each memory location
u in OUT [c], ProcessCall() binds u to u' in Q
(line 2). If u' is not modified by Q or by proce-
procedure ProcessCall(c, IN,OUT )
input c: a call node that calls Q
OUT : the set OUT [c]
output IN : the set IN [c]
globals cache[s, v]: pair of (pslice, relInputs)
previously computed by ComputePSlice on [s, v]
begin ProcessCall
1. foreach u in OUT do
2. bind u to u' in Q
3. if u' is not modified by Q or procedures called by Q then
4. add u to IN
5. else
6. if cache[Q.exit, u' ] is NULL then
7. cache[Q.exit, u' ] := ComputePSlice(Q.exit, u' )
8. endif
9. add cache[Q.exit, u' ].pslice to the slice
10. add BackBind(cache[Q.exit, u' ].relInputs, c, Q) to IN
11. endif
12. endfor
Figure 4: Procedure processes callsites using caching.
dures called by Q, ProcessCall() simply adds u to
IN [c] (lines 3-4). Otherwise, ProcessCall() creates a
partial slicing standard [Q.exit, u' ], in which Q.exit is
the exit of Q. ProcessCall() then checks the cache
against [Q.exit, u' ] (line 6). If the cache does not contain
information for [Q.exit, u' ], then ProcessCall()
invokes ComputePSlice() on [Q.exit, u' ], and stores
the partial slice and the relevant inputs returned by
ComputePSlice() in the cache (line 7). ProcessCall()
then merges the partial slice with the program slice (line
9), and calls BackBind() (not shown) to bind the relevant
inputs back to c and adds them to the IN [c] (line
10). After c has been processed, if some statements in
Q are included in the slice, then c and statements on
which c is control dependent are added to the slice.
For example, to compute the slice for ⟨17, w⟩, the slicer
first propagates w from statement 17 to OUT [16]. Because
statement 16 is a callsite, the slicer propagates
w into f() and creates a new partial slicing standard
[4, w]. The slicer then invokes the partial slicer on
[4, w], and computes partial slice {3} and relevant inputs
{x,y,z,w,p}. The slicer binds x, y, z, w, and p
back to statement 16 and puts x,y,z, and w in IN[16].
The slicer keeps processing and adds statements 3, 6, 7,
and 17 to the slice.
Interprocedural Slicing Using Information Provided
by Light-Weight Context Recovery
The precision and the efficiency of the reuse-driven slicer
can be improved if it can identify the set of memory locations
modified by a procedure under a specific callsite.
To do this, before the slicer propagates a memory location
from the callsite to the called procedure, it first
checks whether the memory location can be modified
by the procedure under this callsite. If the memory location
cannot be modified by the procedure under this
callsite, the reuse-driven slicer does not propagate the
memory location into the called procedure. Similarly,
the precision and the efficiency of the reuse-driven slicer
can be improved if it can identify the set of memory loca-
procedure ProcessCall(c, IN,OUT )
input c: a call node that calls Q
OUT : the set OUT [c]
output IN : the set IN [c]
globals cache[s, v]: pair of (pslice, relInputs)
previously computed by ComputePSlice for [s, v]
begin ProcessCall
1. foreach u in OUT do
2'. if u is not modified by Q at c then
3'. add u to IN
4'. else
5'. bind u to u' in Q
6. if cache[Q.exit, u' ] is NULL then
7. cache[Q.exit, u' ] := ComputePSlice(Q.exit, u' )
8. endif
9. add cache[Q.exit, u' ].pslice to the slice
10. add BackBind(cache[Q.exit, u' ].relInputs, c, Q) to IN
11. endif
12. endfor
function BackBind(eV ars, c, P )
input eV ars: memory locations reaching the entry of P
c: a call node that calls P
output memory locations at callsite
begin BackBind
13. foreach memory location l in eV ars do
14. if l is formal parameter then
15. add memory locations bound to l at c into CalleeV ars
16. elseif l is referenced by P at c then /*old: 16. else */
17. add l to CalleeV ars
18. endif
19. endfor
20. return CalleeV ars
BackBind
Figure 5: ProcessCall() (modified) and BackBind().
tions referenced by a procedure under a specific callsite.
The slicer propagates, from the called procedure to a
callsite, only the memory locations that are referenced
under the callsite. Both improvements can reduce the
spurious information propagated across the procedure
boundaries, and thus can improve the precision and efficiency
of the reuse-driven slicer.
For example, consider the actions of the reuse-driven
slicer for ⟨17, w⟩ (Figure 1), if the two improvements,
described above, are made. The improved slicer first
propagates w from statement 17 to statement 16. Because
f() modifies w when it is invoked by statement
16, the improved slicer propagates w from statement 16
into f(), and creates partial slicing standard [4, w]. The
improved slicer then invokes the partial slicer on [4, w],
adds statement 3 to the partial slice, and identifies x, y,
z, w, and p as the relevant inputs. The improved slicer
checks statement 16 and finds that only x and w can be
referenced when f() is invoked by statement 16. Thus,
it adds only x and w to IN[16]. The improved slicer
further propagates x and w to OUT [15]. Because f1()
modifies neither x nor w when it is invoked at statement
15, the improved slicer propagates x and w directly to
IN[15], without propagating them into f1(). The improved
slicer continues and adds statements 3, 12, 13,
16, 17 to the slice. This example shows that using specific
callsite information can help the reuse-driven slicer
compute more precise slices.
We modify ProcessCall() (Figure 4) to use the set
of memory locations that are modified by a procedure
under a specific callsite to reduce the spurious information
propagated from a callsite to the called procedure.
Figure
5 shows the modified ProcessCall() (lines 2'-5'
replace lines 2-5 in the original version). For each u in
OUT [c], the new ProcessCall() first checks whether
u is modified by Q at c (line 2'). If u is not modified
by Q at c, then the new ProcessCall() adds u to
IN [c] (line 3'). If u is modified by Q at c, then the
new ProcessCall() binds u to u' in Q, creates partial
slicing standard [Q.exit, u' ], and continues the computation
in the usual way (lines 5'-10). Because u being
modified by Q at c implies that u' is modified by Q, the
new ProcessCall() need not check u'.
We also modify BackBind() to use the set of memory
locations that are referenced by a procedure under a
specific callsite to reduce the spurious information propagated
from a procedure to its callsites. Figure 5 shows
the modified BackBind() in which line 16 has been
changed. BackBind() checks each memory location l
in its input eV ars (line 13). If l is a formal parameter,
BackBind() adds the memory locations that are bound
to l at c to CalleeV ars (lines 14-15). Otherwise, the
new BackBind() checks whether l is referenced when P
is invoked at c (line 16). If so, then BackBind() puts
l in CalleeV ars (line 17). Finally, BackBind() returns
CalleeV ars (line 20).
We use MarkP , OBJP , and MODP computed by
ContextRecovery to determine the memory locations
that can be modified by P under callsite c. 3 For a
nonlocal memory location loc in P , if MODP [loc] is
true and MarkP [loc] is -, then loc may be modified
by P under each callsite to P , including c. If
MODP [loc] is true and MarkP [loc] is √, and if loc is in
ASet(A c (OBJP [loc]), c), then loc can be modified by P
under c. Otherwise, loc is not modified by P under c.
For example, according to the result in Table 1,
Mark f [y] is √, OBJ f [y] is *p, and MOD f [y] is true.
Thus, f() modifies y when f() is invoked at statement
8 in Figure 1 because ASet(A 8 (*p), 8) (i.e., ASet(*q, 8))
contains y. However, f() does not modify y when f()
is invoked at statement 16 because ASet(A 16 (*p), 16)
(i.e., ASet(w, 16)) does not contain y.
We use a similar approach to determine the memory
locations that can be referenced by P when P is invoked
from c. For a memory location loc, if MarkP [loc]
is -, then loc can be referenced by P under c. If
MarkP [loc] is √ and OBJP [loc] is obj, and if loc is
in ASet(A c (obj), c), then loc can be referenced by P
under c. Otherwise, loc is not referenced by P under c.
3 We also can use conditional alias information to determine the
memory locations that may be modified by P under c. However,
this approach might be too expensive for large programs.
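A minimal C sketch of these two decisions is given below. It assumes the
per-procedure summaries Mark, OBJ , and MOD from Section 2 and hypothetical
helpers bind_to_callsite() and in_aset() that consult the alias information
available at the callsite; it illustrates the decision rules rather than
reproducing the paper's implementation.

#include <stdbool.h>

typedef enum { UNMARKED, ELIGIBLE, INELIGIBLE } mark_t;

typedef struct {            /* summary for location loc in procedure P */
    mark_t      mark;       /* MarkP[loc]                              */
    const char *obj;        /* OBJP[loc] (NULL if empty)               */
    bool        mod;        /* MODP[loc]                               */
} summary_t;

/* Assumed helpers: bind OBJP[loc] to the callsite's actual parameter, and
 * query whether loc is in ASet(A_c(OBJP[loc]), c) at callsite c. */
const char *bind_to_callsite(const char *obj, int callsite);
bool in_aset(const char *name, int callsite, const void *loc);

/* May loc be modified by P when P is invoked at callsite c? */
bool modified_at_callsite(const summary_t *s, const void *loc, int c)
{
    if (!s->mod) return false;                   /* never modified in P  */
    if (s->mark != ELIGIBLE) return true;        /* cannot filter        */
    return in_aset(bind_to_callsite(s->obj, c), c, loc);
}

/* May loc be referenced by P when P is invoked at callsite c? */
bool referenced_at_callsite(const summary_t *s, const void *loc, int c)
{
    if (s->mark != ELIGIBLE) return true;        /* cannot filter        */
    return in_aset(bind_to_callsite(s->obj, c), c, loc);
}

In the modified slicer of Figure 5, the check at line 2' would correspond to
modified_at_callsite(), and the check at line 16 of BackBind() would
correspond to referenced_at_callsite().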
4 EMPIRICAL STUDIES
We performed several studies to evaluate the effective-
ness of using light-weight context recovery to improve
the precision and the efficiency of program analyses. We
implemented ContextRecovery and the reuse-driven
slicing algorithm that uses information provided by
ContextRecovery using PROLANGS Analysis Framework
(PAF) [3]. In the studies, we compared the results
computed with alias information provided by Liang and
Harrold's algorithm (LH) [9] and by Andersen's algorithm
(AND) [1]. 4 We gathered the data for the studies
on a Sun Ultra30 workstation with 640MB physical
memory and 1GB virtual memory. 5 The left side of
Table
2 gives information about the subject programs.
CFG LH AND
program Nodes LOC CI CR CI CR
loader 819 1132 0.07 0.11 0.08 0.11
dixie 1357 2100 0.12 0.19 0.11 0.17
learn 1596 1600 0.11 0.2 0.11 0.17
assembler 1993 2510 0.26 0.35 0.23 0.34
smail
simulator 2992 3558 0.47 0.59 0.49 0.57
arc 3955 7325 0.38 0.77 0.38 0.68
space 5601 11474 1.48 1.62 1.86 1.91
larn 11796 9966 2.18 2.85 2.11 2.84
espresso 15351 12864 7.34 8.81 8.62 15.25
moria 20316 25002 29.29 38.98 22.49 24.79
twmc 22167 23922 2.98 4.69 3.53 7.96
Table
2: Information about subject programs (left) and
time in seconds for context-insensitive modification side effect
analysis (CI) and for ContextRecovery (CR) (right).
Study 1
The goal of study 1 is to evaluate the efficiency of
our algorithm (CR). We compared the time required
to run CR on a program and the time required to compute
modification side-e#ects of the procedures in the
program with a context-insensitive algorithm (CI). We
make this comparison because (1) the time for computing
modification side-e#ects is relatively small compared
to the time required for many program analyses and (2)
our algorithm can be used instead of CI to compute
more precise modification side-e#ects that are required
for many program analyses. The right side of Table 2
shows the results computed using alias information provided
by the LH algorithm and by the AND algorithm.
From the table, we can see that, for the subjects we
studied, CR is almost as efficient as CI. This suggests
that the time added by our algorithm might be negligible
in many program analyses.
Study 2
The goal of study 2 is to evaluate the precision of our
algorithm in identifying memory locations that are modified
by a procedure under a specific callsite (MOD at
4 See [9] for a detailed comparison of these two algorithms.
5 Because we simulate the effects of library functions using new
stubs with greater details, data reported in these studies for the
subject programs differ from those reported in our previous work.
Figure 6: Average sizes of MOD at a callsite.
a callsite). We compared the size of MOD at a callsite
computed by the traditional context-insensitive modification
side-effect analysis algorithm (the CI-MOD al-
gorithm) and by our algorithm. The reduction of the
size of MOD at a callsite indicates the effectiveness of
our technique in filtering spurious information at a call-
site. We also compared the results computed by our
algorithm with the results computed by Landi, Ryder,
and Zhang's modification side effect analysis algorithm
(the LRZ algorithm) [8] that uses conditional analy-
sis. The results computed by the LRZ algorithm can
be viewed as a lower bound for our algorithm. We used
our implementations of the CI-MOD algorithm and of
our algorithm, and used the implementation of the LRZ
algorithm provided with PAF.
Figure
6 shows the results of this study. In the graph,
the total length of each bar indicated by either AND
or LH represents the average size of MOD at a callsite
computed by the CI-MOD algorithm using the alias information
provided by the AND algorithm or the LH al-
gorithm. On each bar, the length of the slanted segment
represents the average size of MOD at a callsite computed
by our algorithm using alias information provided
by the corresponding alias analysis algorithm. For ex-
ample, using the alias information provided by the AND
algorithm, the CI-MOD algorithm reports that a callsite
modifies 29 memory locations in space. Using the
same alias information, however, our algorithm reports
that a callsite modifies only 4.2 memory locations. The
graph shows that for most subject programs we stud-
ied, our algorithm computes significantly more precise
MOD at a callsite than the CI-MOD algorithm. Thus,
we expect that using information provided by our algorithm
can significantly reduce the spurious information
propagated across procedure boundaries.
Figure
6 also shows the average size of MOD at a callsite
computed by the LRZ algorithm. 6 In the graph,
the length of each bar indicated by LRZ represents the
average size of MOD at a callsite computed by the LRZ
algorithm. For example, the LRZ algorithm reports that
a callsite in space can modify 5 memory locations. Note
that because this algorithm uses alias information computed
by Landi and Ryder's algorithm [7], which treats
a structure in the same way as its fields in some cases,
this algorithm reports a larger MOD at a callsite than
our algorithm for space. The graph shows that, for several
programs we studied, the size of MOD at a callsite
computed by our algorithm is close to that computed
by the LRZ algorithm. This result suggests that our
algorithm can be quite precise in identifying memory
locations that may be modified by a procedure under a
specific callsite. The graph also shows that the precision
of MOD at a callsite computed by our algorithm varies
for di#erent programs. This suggests that the e#ective-
ness of improving program analyses using information
provided by our algorithm might depend on how the
program is written.
Study 3
The goal of study 3 is to evaluate the effectiveness of
using information provided by light-weight context
recovery in improving the precision and efficiency of the
reuse-driven slicing algorithm. We compared the size of
a slice and the time to compute a slice with and without
using information provided by light-weight context
recovery. Table 3 shows the results.
The left side of Table 3 shows S, the average size of
a slice computed using information provided by light-weight
context recovery, and S', the average size of a
slice computed without using such information. The table
also shows the ratio of S to S' in percentage. From
the table, we can see that, for some programs, using information
provided by light-weight context recovery can
significantly improve the precision of computing inter-procedural
slices. However, for other programs, we do
not see significant improvement. One explanation for
this may be that, on these programs, the precision of
the interprocedural slicing is not sensitive to the precision
of identifying memory locations that are modified
or referenced at a statement. This result is consistent
with results reported in References [9, 14], which show
that the precision of interprocedural slicing is not very
sensitive to the precision of alias information.
The right side of Table 3 shows T , the average
time to compute a slice using information provided by
6 Data for some programs are unavailable: Landi and Ryder's
algorithm [7] fails to terminate within 10 hours (time limit we set)
when computing alias information for these programs.
average size time in seconds
name alias S S' S/S' T T' T/T'
load- LH 187 241 77.9% 2.3 6.4 35.0%
er+ AND 187 241 77.9% 2.3 6.4 35.2%
dixie+ LH 629 648 97.0% 10.0 23.0 43.6%
AND 609 648 94.0% 5.3 12.3 42.9%
learn+ LH 499 501 99.6% 16.2 29.7 54.6%
AND 479 501 95.6% 10.9 21.2 51.5%
AND 791 806 98.1% 8.3 9.7 85.7%
assem- LH 744 751 99.1% 15.2 93.8 16.2%
smail# LH 1066 1087 98.1% 158 545 29.0%
AND 1032 1090 94.7% 129 390 33.0%
AND 546 698 78.3% 10.3 39.5 26.2%
simu- LH 1174 1178 99.7% 11.1 18.5 59.8%
later+ AND 1174 1178 99.7% 10.8 18.7 58.0%
arc+ LH 788 804 98.1% 9.4 12.9 72.5%
AND 771 803 96.1% 7.6 9.2 82.2%
space# LH 2028 2161 93.9% 31.8 577 5.5%
AND 2019 2161 93.4% 31.2 574 5.4%
larn# LH 4590 4612 99.5% 642 977 65.7%
AND 4576 4603 99.4% 619 798 77.5%
average size time in hours
name alias S S' S/S' T T' T/T'
espre- LH 5704 5705 100% 1.2 4.7 25.4%
sso# AND 5704 5705 100% 6.9 7.3 93.7%
moria# LH 7820 28 >100
AND 7822 7.1 >100
twmc# LH 4331 4331 100% 1.9 2.2 83.7%
AND 4327 4327 100% 1.7 2.0 84.1%
+Data are collected from all slices of the program.
#Data are collected from one slice.
Table 3: Average size of a slice (left) and average time to
compute a slice (right).
light-weight context recovery, and T', the average time
to compute a slice without using information provided
by light-weight context recovery. The time measured
does not include time required for building CFG, alias
analysis, computing modification side-e#ect, and context
recovery. The table also shows the ratio of T to
. From the table, we see that using information provided
by light-weight context recovery can significantly
reduce the time required to compute interprocedural
slices. This suggests that our technique might effec-
tively improve the efficiency of many program analyses.
5 RELATED WORK
Flow-insensitive alias analysis algorithms can be ex-
tended, using a similar technique as in Reference [4],
to compute polyvariant alias information that identifies
di#erent alias relations for a procedure under di#erent
callsites. Using polyvariant alias information, a program
analysis can identify the memory locations that
are accessed in a procedure under a specific callsite,
and thus, computes more accurate program informa-
tion. However, computing polyvariant alias information
may require a procedure to be analyzed multiple times,
each under a specific calling context. This requirement
may make the alias analysis inefficient.
Observing that memory locations pointed to by the
same pointers in a procedure have the same program
information in the procedure, we developed a technique
[10] that partitions these memory locations into equivalence
classes. Memory locations in an equivalence class
share the same program information in a procedure.
Therefore, when the procedure is analyzed, only the information
for a representative of each equivalence class
is computed. This information is then reused for other
memory locations in the same equivalence class. Experiments
[10, 11] show that this technique can effec-
tively improve the performance of program analyses.
The technique presented in this paper improves the performance
of program analyses in another dimension, and
thus, can be used together with equivalence analysis to
further improve the efficiency of program analyses.
Horwitz, Reps, and Binkley [6] present a technique that
uses the sets of variables that may be modified or may
be referenced by a procedure to avoid including unnecessary
callsites in a slice. This technique is needed so
that a system-dependence-graph based slicer can compute
slices that are as precise as those computed by
other interprocedural slicers (e.g., [5]). Our technique
di#ers from theirs in that our technique uses the sets of
memory locations that may be accessed by a procedure
under a specific callsite to filter spurious program infor-
mation. Thus, our technique can improve the precision
and performance of many program analyses on which
Horwitz, Reps, and Binkley's technique cannot apply.
There are many other techniques that can improve the
performance of program analyses (e.g., [13]). Our light-weight
context-recovery technique can be used with
many of these approaches to improve further the performance
of data-flow analyses.
6 CONCLUSION AND FUTURE WORK
We presented a light-weight context recovery algorithm,
and illustrated a technique that uses the information
provided by the light-weight context recovery to improve
the precision and the efficiency of program analyses. We also conducted several empirical
studies. The results of our studies suggest that, in many cases, using light-weight context
recovery can effectively improve the precision and efficiency of program analyses.
In our future work, first, we will repeat the studies in this paper on larger programs to
further validate our conclusions. Second, we will perform studies to evaluate the
effectiveness of combining light-weight context recovery with equivalence analysis to
improve the efficiency of computing interprocedural slices. Third, we will apply our
technique to other program analyses and evaluate its effectiveness on those program
analyses. Finally, we will perform studies to compare our technique with conditional
analysis.
ACKNOWLEDGMENTS
This work was supported by NSF under grants CCR-
9696157 and CCR-9707792 to Ohio State University.
--R
Program analysis and specialization for the C programming language.
Programming Languages Research Group.
Call graph construction in object-oriented languages
Interprocedural slicing using dependence graphs.
A safe approximate algorithm for interprocedural pointer aliasing.
Interprocedural modification side effect analysis with pointer aliasing.
Equivalence analysis: A general technique to improve the e
Interprocedural def-use associations in C programs
Program slicing.
--TR
Interprocedural slicing using dependence graphs
A safe approximate algorithm for interprocedural aliasing
Interprocedural modification side effect analysis with pointer aliasing
Call graph construction in object-oriented languages
Effective whole-program analysis in the presence of pointers
Reuse-driven interprocedural slicing
Equivalence analysis
Efficient points-to analysis for whole-program analysis
Data-flow analysis of program fragments
Interprocedural Def-Use Associations for C Systems with Single Level Pointers
The Effects of the Precision of Pointer Analysis
Reuse-Driven Interprocedural Slicing in the Presence of Pointers and Recursions
--CTR
Donglin Liang , Maikel Pennings , Mary Jean Harrold, Evaluating the impact of context-sensitivity on Andersen's algorithm for Java programs, ACM SIGSOFT Software Engineering Notes, v.31 n.1, January 2006
Anatoliy Doroshenko , Ruslan Shevchenko, A Rewriting Framework for Rule-Based Programming Dynamic Applications, Fundamenta Informaticae, v.72 n.1-3, p.95-108, January 2006
Markus Mock , Darren C. Atkinson , Craig Chambers , Susan J. Eggers, Improving program slicing with dynamic points-to data, Proceedings of the 10th ACM SIGSOFT symposium on Foundations of software engineering, November 18-22, 2002, Charleston, South Carolina, USA
Markus Mock , Darren C. Atkinson , Craig Chambers , Susan J. Eggers, Improving program slicing with dynamic points-to data, ACM SIGSOFT Software Engineering Notes, v.27 n.6, November 2002
Markus Mock , Darren C. Atkinson , Craig Chambers , Susan J. Eggers, Program Slicing with Dynamic Points-To Sets, IEEE Transactions on Software Engineering, v.31 n.8, p.657-678, August 2005
Michael Hind, Pointer analysis: haven't we solved this problem yet?, Proceedings of the 2001 ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering, p.54-61, June 2001, Snowbird, Utah, United States
Baowen Xu , Ju Qian , Xiaofang Zhang , Zhongqiang Wu , Lin Chen, A brief survey of program slicing, ACM SIGSOFT Software Engineering Notes, v.30 n.2, March 2005 | program analysis;aliasing;slicing |
337257 | Learning functions represented as multiplicity automata. | We study the learnability of multiplicity automata in Angluin's exact learning model, and we investigate its applications. Our starting point is a known theorem from automata theory relating the number of states in a minimal multiplicity automaton for a function to the rank of its Hankel matrix. With this theorem in hand, we present a new simple algorithm for learning multiplicity automata with improved time and query complexity, and we prove the learnability of various concept classes. These include (among others): -The class of disjoint DNF, and more generally -The class of polynomials over finite fields. -The class of bounded-degree polynomials over infinite fields. -The class of XOR of terms. -Certain classes of boxes in high dimensions. In addition, we obtain the best query complexity for several classes known to be learnable by other methods such as decision trees and polynomials over GF(2). While multiplicity automata are shown to be useful to prove the learnability of some subclasses of DNF formulae and various other classes, we study the limitations of this method. We prove that this method cannot be used to resolve the learnability of some other open problems such as the learnability of general DNF formulas or even k-term DNF for These results are proven by exhibiting functions in the above classes that require multiplicity automata with super-polynomial number of states. | Introduction
The exact learning model was introduced by Angluin [5] and since then attracted a lot of
attention. In particular, the following classes were shown to be learnable in this model:
deterministic automata [4], various types of DNF formulae [1, 2, 3, 6, 16, 17, 18, 20, 29,
33] and multi-linear polynomials over GF(2) [44]. Learnability in this model also implies
learnability in the "PAC" model with membership queries [46, 5].
One of the classes that was shown to be learnable in this model is the class of multiplicity
automata [12] 1 and [40]. Multiplicity automata are essentially nondeterministic automata
with weights from a field K on the edges. Such an automaton computes a function as follows:
For every path in the automaton assign a weight which is the product of the weights on the
edges of this path. The function computed by the automaton is essentially the sum of the
weights of the paths consistent with the input string (this sum is a value in K). 2 Multiplicity
automata are a generalization of deterministic automata, and the algorithms that learn this
class [12, 13, 40] are generalizations of Angluin's algorithm for deterministic automata [4].
We use an algebraic approach for learning multiplicity automata, similar to [40]. This
approach is based on a fundamental theorem in the theory of multiplicity automata. The
theorem relates the size of a smallest automaton for a function f to the rank (over K) of
the so-called Hankel matrix of f [22, 26] (see also [25, 15] for background on multiplicity
1 A full description of this work appears in [13]
These automata are known in the literature under various names. In this paper we refer to them as
multiplicity automata. The functions computed by these automata are usually referred to as recognizable
series.
automata). Using this theorem, and ideas from the algorithm of [42] (for learning deterministic
automata), we develop a new algorithm for learning multiplicity automata which
is more efficient than the algorithms of [12, 13, 40]. In particular we give a more refined
analysis for the complexity of our algorithm when learning functions f with finite domain.
A different algorithm with similar complexity to ours was found by [21]. 3
In this work we show that the learnability of multiplicity automata implies the learnability
of many other important classes of functions. 4 First, it is shown that the learnability of
multiplicity automata implies the learnability of the class of Satisfy-s DNF formulae, for
in which each assignment satisfies at most s terms). This
class includes as a special case the class of disjoint DNF which by itself includes the class
of decision trees. These results improve over previous results of [18, 2, 16]. More generally,
we consider boxes over a discrete domain of points (i.e., Such boxes were
considered in many works (e.g., [37, 38, 23, 7, 27, 31, 39]). We prove the learnability of any
union of O(log n) boxes in time poly(n; '), and the learnability of any union of t disjoint boxes
(and, more generally, any t boxes such that each point is contained in at most
of them) in time poly(n; t; '). 5 The special case of these results where implies the
learnability of the corresponding classes of DNF formulae.
We further show the learnability of the class of XOR of terms, which is an open problem
in [44], the class of polynomials over finite fields, which is an open problem in [44, 19], and the
class of bounded-degree polynomials over infinite fields (as well as other classes of functions
over finite and infinite fields). We also prove the learnability of a certain class of decision
trees whose learnability is an open problem in [18].
While multiplicity automata are proved to be useful to solve many open problems regarding
the learnability of DNF formulae and other classes of polynomials and decision trees,
we study the limitations of this method. We prove that this method cannot be used to
resolve the learnability of some other open problems such as the learnability of general DNF
formulae or even k-term DNF for k = \omega(\log n) or Satisfy-s DNF for s = \omega(1)
(these results are tight in the sense that O(log n)-term DNF formulae and satisfy-O(1) DNF
are learnable using multiplicity automata). These impossibility results are proven
by exhibiting functions in the above classes that require multiplicity automata with super-polynomial
number of states. For proving these results we use, again, the relation between
multiplicity automata and Hankel matrices.
3 In fact, [21] show that the algorithm can be generalized to K which is not necessarily a field but rather
a certain type of ring.
In [33] it is shown how the learnability of deterministic automata can be used to learn certain (much
more restricted) classes of functions.
5 In [9], using additional machinery, the dependency on ' was improved.
Organization: In Section 2 we present some background on multiplicity automata, as
well as the definition of the learning model. In Section 3 we present a learning algorithm
for multiplicity automata. In Section 4 we present applications of the algorithm for learning
various classes of functions. Finally, in Section 5, we study the limitations of this method.
Background
2.1 Multiplicity Automata
In this section we present some definitions and a basic result concerning multiplicity au-
tomata. Let K be a field, \Sigma be an alphabet, and f : \Sigma ! K be a function. Associate with f
an infinite matrix F each of its rows is indexed by a string x 2 \Sigma and each of its columns is
indexed by a string y 2 \Sigma . The (x; y) entry of F contains the value f(xffiy), where ffi denotes
concatenation. (In the automata literature such a function f is often referred to as a formal
series and F as its Hankel Matrix.) We use F x to denote the x-th row of F . The (x; y) entry
of F may be therefore denoted as F x (y) and as F x;y . The same notation is adapted to other
matrices used in the sequel.
Next we define the automaton representation (over the field K) of functions. An automaton
A of size r consists of |\Sigma| matrices \{\mu_\sigma : \sigma \in \Sigma\}, each of which is an r \times r matrix of
elements from K, and an r-tuple \vec{\gamma} = (\gamma_1, ..., \gamma_r) \in K^r. The automaton A defines a function
f_A : \Sigma^* \to K as follows: First, we associate with every string in \Sigma^* an r \times r matrix over K by
defining \mu(\epsilon) = ID, where ID denotes the identity matrix 6 , and for a string w = \sigma_1 \sigma_2 ... \sigma_n
let \mu(w) = \mu_{\sigma_1} \cdot \mu_{\sigma_2} ... \mu_{\sigma_n} (a simple but useful property of \mu is that
\mu(w_1 \circ w_2) = \mu(w_1) \cdot \mu(w_2)). Finally, f_A(w) = [\mu(w)]_1 \cdot \vec{\gamma} (where [\mu(w)]_1
denotes the first row of the matrix \mu(w)). In words,
A is an automaton with r states where the transition from state q_i to state q_j with letter
\sigma has weight [\mu_\sigma]_{i,j}. The weight of a path whose last state is q_\ell is the product of weights
along the path multiplied by \gamma_\ell, and the function computed on a string w is just the sum of
weights over all paths corresponding to w.
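As a concrete illustration of this definition (not part of the original text), the following
Python sketch evaluates f_A(w) = [\mu(w)]_1 \cdot \vec{\gamma} for a small hypothetical two-state automaton
over the rationals (approximated here by floating-point numbers); the particular automaton,
which counts occurrences of the letter 'a', is an invented example.

import numpy as np

# One r x r matrix per letter, plus the vector gamma; f_A(w) is the first row
# of the product of the letter matrices along w, dotted with gamma.
mu = {
    'a': np.array([[1.0, 1.0],
                   [0.0, 1.0]]),
    'b': np.array([[1.0, 0.0],
                   [0.0, 1.0]]),
}
gamma = np.array([0.0, 1.0])

def evaluate(word):
    row = np.array([1.0, 0.0])      # first row of the identity matrix, i.e. mu(epsilon)
    for letter in word:
        row = row @ mu[letter]      # [mu(w.sigma)]_1 = [mu(w)]_1 . mu_sigma
    return row @ gamma

print(evaluate(""), evaluate("aba"))   # 0.0 2.0 -- the number of a's in the word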
The following is a fundamental theorem from the theory of formal series. It relates the
size of the minimal automaton for f to the rank of F [22, 26].
Theorem 2.1 Let f : \Sigma^* \to K and let F be the corresponding Hankel matrix. Then, the
size r of the smallest automaton A such that f_A \equiv f satisfies r = rank(F) (over the field K).
Although this theorem is very basic, we provide its proof here as it sheds light on the
way the algorithm of Section 3 works.
6 That is, a matrix with 1's on the main diagonal and 0's elsewhere.
Direction I: Given an automaton A for f of size r, we prove that rank(F) \le r. Define
two matrices: R, whose rows are indexed by \Sigma^* and whose columns are indexed by 1, ..., r, and
C, whose columns are indexed by \Sigma^* and whose rows are indexed by 1, ..., r. The (x, i) entry
of R contains the value [\mu(x)]_{1,i} and the (i, y) entry of C contains the value [\mu(y)]_i \cdot \vec{\gamma}. We
show that F = R \cdot C. This follows from the following sequence of simple equalities:

  F_{x,y} = f(x \circ y) = [\mu(x \circ y)]_1 \cdot \vec{\gamma} = [\mu(x) \cdot \mu(y)]_1 \cdot \vec{\gamma}
          = \sum_{i=1}^{r} [\mu(x)]_{1,i} ([\mu(y)]_i \cdot \vec{\gamma}) = R_x \cdot C^y,

where C^y denotes the y-th column of C. Obviously the rank of both R and C is bounded by
r. By linear algebra, rank(F) is at most min\{rank(R), rank(C)\} and therefore rank(F) \le r,
as needed.
Direction II: Given a function f such that the corresponding matrix F has rank r, we show
how to construct an automaton A of size r that computes this function. Let F_{x_1}, ..., F_{x_r}
be r independent rows of F (i.e., a basis) corresponding to strings x_1 = \epsilon, x_2, ..., x_r. To
define A, we first define \vec{\gamma} = (f(x_1), ..., f(x_r)). Next, for every \sigma, define the i-th row of the
matrix \mu_\sigma as the (unique) coefficients of the row F_{x_i \circ \sigma} when expressed as a linear combination
of F_{x_1}, ..., F_{x_r}. That is,

  F_{x_i \circ \sigma} = \sum_{j=1}^{r} [\mu_\sigma]_{i,j} F_{x_j}.     (1)

We will prove, by induction on |w| (the length of the string w), that F_{x_i \circ w} = \sum_{j=1}^{r} [\mu(w)]_{i,j} F_{x_j}
for all i. It follows that f_A \equiv f (as we choose x_1 = \epsilon, this gives
f_A(w) = [\mu(w)]_1 \cdot \vec{\gamma} = \sum_{j=1}^{r} [\mu(w)]_{1,j} F_{x_j}(\epsilon) = F_{x_1 \circ w}(\epsilon) = f(w)).
The induction base is w = \epsilon. In this case we have \mu(\epsilon) = ID and hence
F_{x_i \circ \epsilon} = F_{x_i} = \sum_{j=1}^{r} [ID]_{i,j} F_{x_j}, as
needed. For the induction step, write w = \sigma \circ w'. Using Equation (1) we get

  F_{x_i \circ \sigma \circ w'}(y) = \sum_{j=1}^{r} [\mu_\sigma]_{i,j} F_{x_j \circ w'}(y),

and then by the induction hypothesis this equals

  \sum_{j=1}^{r} [\mu_\sigma]_{i,j} \sum_{l=1}^{r} [\mu(w')]_{j,l} F_{x_l}(y) = \sum_{l=1}^{r} [\mu_\sigma \cdot \mu(w')]_{i,l} F_{x_l}(y) = \sum_{l=1}^{r} [\mu(w)]_{i,l} F_{x_l}(y),

as needed.
2.2 The Learning Model
The learning model we use is the exact learning model [5]: Let f be a target function.
A learning algorithm may propose, in each step, a hypothesis function h by making an
equivalence query (EQ) to an oracle. If h is logically equivalent to f then the answer to
the query is YES and the learning algorithm succeeds and halts. Otherwise, the answer to
the equivalence query is NO and the algorithm receives a counterexample - an assignment
z such that f(z) 6= h(z). The learning algorithm may also query an oracle for the value
of the function f on a particular assignment z by making a membership query (MQ) on z.
The response to such a query is the value f(z). 7 We say that the learner learns a class of
functions C, if for every function f 2 C the learner outputs a hypothesis h that is logically
equivalent to f and does so in time polynomial in the "size" of a shortest representation of
f .
3 The Algorithm
In this section we describe an exact learning algorithm for multiplicity automata. The
"size" parameter in the case of multiplicity automata is the number of states in a minimal
automaton for f . The algorithm will be efficient in this number and the length of the longest
counterexample provided to it.
K be the target function. All algebraic operations in the algorithm are done
in the field K. 8 The algorithm learns a function f using its Hankel matrix representation,
F . The difficulty is that F is infinite (and is very large even when restricting the inputs to
some length n). However, Theorem 2.1 (Direction II) implies that it is sufficient to maintain
independent rows from F ; in fact, r \Theta r submatrix of F of full rank
suffices. Therefore, the learning algorithm can be viewed as a search for appropriate r rows
and r columns.
The algorithm works in iterations. At the beginning of the '-th iteration, the algorithm
holds a set of rows X ae \Sigma and a set of columns Y ae \Sigma
fy
F z denote the restriction of the row F z to the ' coordinates in Y , i.e.
F z
Note that given z and Y the vector b
F z is computed using
queries. It will hold that b
are ' linearly independent vectors.
Using these vectors the algorithm constructs a hypothesis h, in a manner similar to the proof
of Direction II of Theorem 2.1, and asks an equivalence query. A counterexample to h leads
to adding a new element to each of X and Y in a way that preserves the above properties.
7 If f is boolean this is the standard membership query.
8 We assume that every arithmetic operation in the field takes one time unit.
This immediately implies that the number of iterations is bounded by r. We assume without
loss of generality that f(ffl) 6= 0. 9 The algorithm works as follows:
1. X \leftarrow \{x_1 := \epsilon\}, Y \leftarrow \{y_1 := \epsilon\} (so \ell = 1).
2. Define a hypothesis h (following Direction II of Theorem 2.1):
Let \hat{\gamma} = (f(x_1), ..., f(x_\ell)). For every \sigma, define a matrix \hat{\mu}_\sigma by letting its i-th row
be the coefficients of the vector \hat{F}_{x_i \circ \sigma} when expressed as a linear combination of the
vectors \hat{F}_{x_1}, ..., \hat{F}_{x_\ell} (such coefficients exist as \hat{F}_{x_1}, ..., \hat{F}_{x_\ell} are \ell independent \ell-tuples).
That is, \hat{F}_{x_i \circ \sigma} = \sum_{j=1}^{\ell} [\hat{\mu}_\sigma]_{i,j} \hat{F}_{x_j}.
For w \in \Sigma^*, define an \ell \times \ell matrix \hat{\mu}(w) as follows: Let \hat{\mu}(\epsilon) = ID and for a string
w = \sigma_1 \sigma_2 ... \sigma_n let \hat{\mu}(w) = \hat{\mu}_{\sigma_1} \cdot \hat{\mu}_{\sigma_2} ... \hat{\mu}_{\sigma_n}. Finally, h is defined as
h(w) = [\hat{\mu}(w)]_1 \cdot \hat{\gamma}.
3. Ask an equivalence query EQ(h).
If the answer is YES halt with output h.
Otherwise the answer is NO and z is a counterexample.
Find (using MQs for f) a string w \circ \sigma which is a prefix of z such that:
(a) \hat{F}_w = \sum_{i=1}^{\ell} [\hat{\mu}(w)]_{1,i} \hat{F}_{x_i}, but
(b) there exists y \in Y such that \hat{F}_{w \circ \sigma}(y) \ne \sum_{i=1}^{\ell} [\hat{\mu}(w)]_{1,i} \hat{F}_{x_i \circ \sigma}(y).
X \leftarrow X \cup \{x_{\ell+1} := w\}, Y \leftarrow Y \cup \{y_{\ell+1} := \sigma \circ y\}, and \ell \leftarrow \ell + 1.
GO TO 2.
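To make the control flow of Steps 1-3 concrete, here is a schematic Python rendering of the
learning loop (a sketch, not the paper's code). It assumes f(\epsilon) \ne 0, approximates field
elements by floating-point numbers, and is handed two hypothetical oracles: mq(w), returning
f(w), and eq(h), returning a counterexample string or None. For simplicity the prefix
w \circ \sigma is found by a linear scan instead of the binary search used in the complexity
analysis below.

import numpy as np

def learn(mq, eq, alphabet):
    X, Y = [""], [""]                                  # the rows x_i and columns y_i

    def F_hat(z):                                      # row of z restricted to Y
        return np.array([mq(z + y) for y in Y])

    while True:
        rows = np.array([F_hat(x) for x in X])         # l x l, invertible by Claim 3.2
        gamma = rows[:, Y.index("")]                   # (f(x_1), ..., f(x_l))
        mu = {s: np.linalg.solve(rows.T,
                 np.array([F_hat(x + s) for x in X]).T).T
              for s in alphabet}                       # row i of mu[s] expresses F_hat(x_i s)

        def h(w):                                      # the hypothesis of Step 2
            row = np.eye(len(X))[0]
            for s in w:
                row = row @ mu[s]
            return row @ gamma

        z = eq(h)
        if z is None:
            return X, Y, mu, gamma                     # h is equivalent to the target

        coeff = np.eye(len(X))[0]                      # [mu_hat(w)]_1 for the current prefix w
        for i, s in enumerate(z):
            w = z[:i]
            target = F_hat(w + s)
            combo = coeff @ np.array([F_hat(x + s) for x in X])
            if not np.allclose(combo, target):         # condition (b) holds for the prefix w.s
                j = int(np.argmax(~np.isclose(combo, target)))
                X.append(w)                            # x_{l+1} := w
                Y.append(s + Y[j])                     # y_{l+1} := s.y
                break
            coeff = coeff @ mu[s]                      # condition (a) keeps holding (Claim 3.1)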
The following two claims are used in the proof of correctness. They show that in every
iteration of the algorithm, a prefix as required in Step 3 is found, and that as a result the
number of independent rows that we have grows by 1.
3.1 Let z be a counterexample to h found in Step 3 (i.e., f(z) 6= h(z)). Then, there
exists a prefix wffioe satisfying (a) and (b).
Proof: Assume towards a contradiction that no prefix satisfies both (a) and (b). We prove
(by induction on the length) that, for every prefix w of z, Condition (a) is satisfied. That is,
9 To check the value of f(ffl) we ask a membership query. If then we learn f 0 which is identical
to f except that at ffl it gets some value different than 0. Note that the matrix F 0 is identical to F in all
entries except one and so the rank of F 0 differs from the rank of F by at most 1. The only change this
makes on the algorithm is that before asking EQ we modify the hypothesis h so that its value in ffl will be
Alternatively, we can find a string z such that f(z) 6= 0 (by asking EQ(0)) and start the algorithm with
which gives a 2 \Theta 2 matrix of full rank.
By the proof of Theorem 2.1, it follows that if . However, we do not need this fact for
analyzing the algorithm, and the algorithm does not know r in advance.
. The induction base is trivial since b
ffl). For the
induction step consider a prefix wffioe. By the induction hypothesis, b
which implies (by the assumption that no prefix satisfies both (a) and (b)) that (b) is not
satisfied with respect to the prefix wffioe. That is, b
. By the
definition of b
- and by the definition of matrix multiplication
All together, b
which completes the proof of the induction.
Now, by the induction claim, we get that b
In particular, b
F z
However, the left-hand side of this equality is just f(z)
while the right-hand side is h(z). Thus, we get which is a contradiction (since
z is a counterexample).
3.2 Whenever Step 2 starts the vectors b
(defined by the current X and
Y ) are linearly independent.
Proof: The proof is by induction. In the first time that Step 2 starts ffflg. By
the assumption that f(ffl) 6= 0, we have a single vector b
F ffl which is not a zero vector, hence
the claim holds.
For the induction, assume that the claim holds when Step 2 starts and show that it also
holds when Step 3 ends (note that in Step 3 a new vector b
Fw is added and that all vectors have
a new coordinate corresponding to oe ffiy). By the induction hypothesis, when Step 2 starts,
F x ' are ' linearly independent '-tuples. In particular this implies that when Step 2
starts b
Fw has a unique representation as a linear combination of b
. Since w satisfies
(a) this linear combination is given by b
remain linearly independent (with respect to the new Y ). However, at this time,
Fw becomes linearly independent of b
(with respect to the new Y ). Otherwise,
the linear combination must be given by b
However, as wffioe satisfies
(b) we get that b
Fw (oe
Fwffioe (y) 6=
(oe ffiy) which
eliminates this linear combination. (Note that oe ffiy was added to Y so b
F is defined in all
the coordinates which we refer to.) To conclude, when Step 3 ends b
F x '+1 =w are
linearly independent.
We summarize the analysis of the algorithm by the following theorem. Let m denote the
size of the longest counterexample z obtained during the execution of the algorithm. Denote
by M(r) the complexity of multiplying two r \times r matrices.
Theorem 3.3 Let K be a field, and f : \Sigma^* \to K be a function such that rank(F) = r (over
K). Then, f is learnable by the above algorithm in time O((|\Sigma| \cdot M(r) + m \cdot r^2) \cdot r), using r
equivalence queries and O((|\Sigma| + \log m) \cdot r^2) membership queries.
Proof: Claim 3.1 guarantees that the algorithm always proceeds. Since the algorithm
halts only if EQ(h) returns YES the correctness follows.
As for the complexity, Claim 3.2 implies that the number of iterations, and therefore the
number of equivalence queries, is at most r (in fact, Theorem 2.1 implies that the number
of iterations is exactly r).
The number of MQs asked in Step 2 over the whole algorithm is (j\Sigmaj since for
every x 2 X and y 2 Y we need to ask for the value of f(xy) and the values f(xoey), for
all oe 2 \Sigma. To analyze the number of MQs asked in Step 3, we first need to specify the
way that the appropriate prefix is found. The naive way is to go over all prefixes of z until
finding one satisfying (a) and (b). A more efficient search can be based upon the following
generalization of Claim 3.1: suppose that for some v, a prefix of z, Condition (a) holds.
That is, b
F x i . Then, there exists wffioe a prefix of z that extends v and
satisfies (a) and (b) (the proof is identical to the proof of Claim 3.1 except that for the base
of induction we use v instead of ffl). Using the generalized claim, the desired prefix wffioe can
be found using a binary search in log jzj - log m steps as follows: at the middle prefix v
check whether (a) holds. If so make v the left border for the search. If (a) does not hold for
then by Equation (2) condition (b) holds for v and so v becomes the right border
for the search. In each step of the binary search 2' - 2r membership queries are asked (note
that the values of b
are known from Step 2). All together the number of MQs
asked during the execution of the algorithm is O((log m+ j\Sigmaj)r 2 ).
As for the running time, to compute each of the matrices b
- oe observe that the matrix whose
rows are b
F x ' ffioe is the product of b
- oe with the matrix whose rows are b
Therefore, finding b
- oe can be done with one matrix inversion (whose cost is also O(M(r)))
and one matrix multiplication. Hence the complexity of Step 2 is O(j\Sigmaj \Delta M(r)). In Step 3
the difficult part is to compute the value of b
-(w) for prefixes of z. A simple way to do
it is by computing m matrix multiplications for each such z. A better way of doing the
computation of Step 3 is by observing that all we need to compute is actually the first row
of the matrix b
-wm . The first row of this matrix can simply be written as
-(w). Thus, to compute this row, we first compute (1;
then multiply the result by b
and so on. Therefore, this computation can be done by m
vector-matrix multiplications, which requires O(m \Delta r 2 ) time. All together, the running time
is at most O(j\Sigmaj
The complexity of our algorithm should be compared to the complexity of the algorithm
of [12, 13] which uses r equivalence queries, O(j\Sigmajmr 2 ) membership queries, and runs in time
The algorithm of [40] uses r+1 equivalence queries, O((j\Sigmaj +m)r 2 ) membership
queries, and runs in time O((j\Sigmaj +m)r 4 ).
3.1 The Case of Functions
In many cases of interest the domain of the target function f is not \Sigma but rather \Sigma n for
some value n. We view f as a function on \Sigma whose value is 0 for all strings whose length is
different than n. We show that in this case the complexity analysis of our algorithm can be
further improved. The reason is that in this case the matrix F has a simpler structure. Each
row and column is indexed by a string whose length is at most n (alternatively, rows and
columns corresponding to longer strings contain only 0 entries). Moreover, for any string x
of length 0 - d - n the only non-zero entries in the row F x correspond to y's of length
Denote by F d the submatrix of F whose rows are strings in \Sigma d and its columns are strings
in \Sigma n\Gammad (see Fig. 1). Observe that by the structure of F ,
Now, to learn such a function f we use the above algorithm but ask membership queries
only on strings of length exactly n (for all other strings we return 0 without actually asking
the query) and for the equivalence queries we view the hypothesis h as restricted to \Sigma n . The
length of counterexamples, in this case, is always n and so
Looking closely at what the algorithm does it follows that since b
F is a submatrix of F , not
only b
F x ' are always independent vectors (and so ' - rank(F )) but that for every d,
the number of x i 's in X whose length is d is bounded by rank(F d ). We denote r d
and r max
d=0 r d . The number of equivalence queries remains r as before. The number
of membership queries however becomes smaller due to the fact that many entries of F are
known to be 0. In Step 2, over the whole execution, we ask for every x 2 X of length d
and every y 2 Y of length one MQ on f(xy) and for every y 2 Y of length
and every oe 2 \Sigma we ask MQ on f(xoey). All together, in Step 2 the algorithm asks for
every x at most r queries and total of O(r \Delta r max j\Sigmaj) membership
queries. In Step 3, in each of the r iterations and each of the log n search steps we ask
at most 2r max membership queries (again, because most of the entries in each row contain
0's). All together O(rr max log n) membership queries in Step 3 and over the whole algorithm
O(r log n)).
F d
F
Figure
1: The Hankel matrix F
As for the running time, note that the matrices b
- oe also have a very special structure:
the only entries (i; j) which are not 0 are those corresponding to vectors x
that jx multiplication of such matrices can be done in
Therefore, each invocation of Step 2 requires time of O(j\Sigmajn \Delta M(r max )).
Similarly, in [ b
-(w)] 1 the only entries which are not 0 are those corresponding to strings
by a column of b
units. Furthermore, we need to multiply only for at most r max columns, for the non-zero
coordinates in [ b
Therefore, Step 3 takes at most nr 2
for each counterexample z.
All together, the running time is at most O(nrr 2
Corollary 3.4 Let K be a field, and f : \Sigma n ! K such that
d=0 rank(F d ) (where rank is taken over K). Then, f is learnable by the above algorithm in
using O(r) equivalence queries and O((j\Sigmaj+log n)r \Delta r
queries.
4 Positive Results
In this section we show the learnability of various classes of functions by our algorithm.
This is done by proving that for every function f in the class in question, the corresponding
Hankel matrix F has low rank. By Theorem 3.3 this implies the learnability of the class by
our algorithm.
First, we observe that it is possible to associate a multiplicity automaton with every non-deterministic
automaton, such that on every string w the multiplicity automaton "counts"
the number of accepting paths of the nondeterministic automaton on w. To see this, define
the (i; j) entry of the matrix - oe as 1 if the given automaton can move, on letter oe, from
state i to state j (otherwise, this entry is 0). In addition, define fl i to be 1 if i is an accepting
state and 0 otherwise. Thus, if the automaton is deterministic or unambiguous 11 then the
associated multiplicity automaton defines the characteristic function of the language. By
[33] the class of deterministic automata contains the class of O(log n)-term DNF and in fact
the class of all boolean functions over O(log n)-terms. Hence, all these classes can be learned
by our algorithm. We note that if general nondeterministic automata can be learned then
this implies the learnability of DNF.
4.1 Classes of Polynomials
Our first results use the learnability of multiplicity automata to learn various classes of
multivariate polynomials. We start with the following claim:
Theorem 4.1 Let p_{i,j}(z) be arbitrary functions of a single variable (1 \le i \le t, 1 \le j \le n).
Let g_i : \Sigma^n \to K be defined by g_i(z_1, ..., z_n) = \prod_{j=1}^{n} p_{i,j}(z_j). Finally, let f : \Sigma^n \to K be defined
by f = \sum_{i=1}^{t} g_i. Let F be the Hankel matrix corresponding to f, and F^d the sub-matrices
defined in Section 3.1. Then, for every 0 \le d \le n, rank(F^d) \le t.
Proof: Recall the definition of F^d. Every string z \in \Sigma^n is viewed as partitioned into two
substrings z = x \circ y, where |x| = d and |y| = n - d. Every row of F^d is indexed by
a string x of length d, hence it can be written as a function

  F^d_x(y) = f(x \circ y) = \sum_{i=1}^{t} (\prod_{j=1}^{d} p_{i,j}(x_j)) \cdot (\prod_{j=d+1}^{n} p_{i,j}(y_{j-d})).

Now, for every x and i, the term \prod_{j=1}^{d} p_{i,j}(x_j) is
just a constant \alpha_{i,x} \in K. This means that
every function F^d_x(y) is a linear combination of the t functions \prod_{j=d+1}^{n} p_{i,j}(y_{j-d}) (one
for each value of i). This implies that rank(F^d) \le t, as needed.
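As a quick numerical sanity check of this bound (an invented example, not from the paper),
the snippet below builds the blocks F^d of the Hankel matrix of the two-summand function
f(z_1, z_2, z_3, z_4) = z_1 z_3 + z_2 z_4 over the rationals, with \Sigma = \{0, 1, 2\}, and confirms
that every block has rank at most t = 2.

import itertools
import numpy as np

Sigma, n = (0, 1, 2), 4

def f(z):
    return z[0] * z[2] + z[1] * z[3]   # t = 2 summands, each a product of univariate factors

for d in range(n + 1):
    rows = list(itertools.product(Sigma, repeat=d))
    cols = list(itertools.product(Sigma, repeat=n - d))
    F_d = np.array([[f(x + y) for y in cols] for x in rows], dtype=float)
    print(d, np.linalg.matrix_rank(F_d))   # prints 1, 2, 2, 2, 1 -- all at most t = 2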
Corollary 4.2 The class of functions that can be expressed as functions over GF(p) with t
summands, where each summand T i is a product of the form p i;1
are arbitrary functions) is learnable in time poly(n; t; p).
11 A nondeterministic automaton is unambiguous if for every w \in \Sigma^* there is at most one accepting path.
The above corollary implies as a special case the learnability of polynomials over GF(p).
This extends the result of [44] from multi-linear polynomials to arbitrary polynomials. Our
algorithm (see Corollary 3.4), for polynomials with n variables and t terms, uses O(nt)
equivalence queries and O(t 2 n log n) membership queries. The special case of the above class
- the class of multi-linear polynomials over GF(2) - was known to be learnable before [44].
Their algorithm uses O(nt) equivalence queries and O(t 3 n) membership queries (which is
worse than ours for "most" values of t).
Corollary 4.2 discusses the learnability of a certain class of functions (that includes the
class of polynomials) over finite fields (the complexity of the algorithm depends on the size
of the field). The following theorem extends this result to infinite fields, assuming that the
functions p i;j are bounded-degree polynomials. It also improves the complexity for learning
polynomials over finite fields, when the degree of the polynomials is significantly smaller
than the size of the field.
Theorem 4.3 The class of functions over a field K that can be expressed as t summands,
where each summand T i is of the form p i;1 are polynomials
of degree at most k, is learnable in time poly(n; t; k). Furthermore, if jKj - nk
class is learnable from membership queries only in time poly(n; t; (with small probability
of error).
Proof: We show that although the field K may be very large, we can run the algorithm
using an alphabet of k elements from the field, g. For this, all we
need to show is how the queries are asked and answered. The membership queries are asked
by the algorithm, so it will only present queries which are taken from the domain \Sigma n . For
the equivalence queries we do the following: instead of representing the hypothesis with j\Sigmaj
matrices b
-(oe k+1 ) we will represent it with a single matrix H(x) each of its entries
is a degree k polynomial (over K), such that for every oe 2 \Sigma,
-(oe). (To find this
use interpolation in each of its entries. Also, in this terminology, for
the hypothesis is it is easy to see that both the target
function and the hypothesis are degree-k polynomials in each of the n variables. Therefore,
given a counterexample w 2 K n , we can modify it to be in \Sigma n as follows: in the i-th step
fix z doing so, both the hypothesis and the target function become
degree k polynomials in the variable z i . Hence, there exists oe 2 \Sigma, for which these two
polynomials disagree. We set w We end up with a new counterexample w 2 \Sigma n , as
desired.
Assume that K contains at least nk be a
subset of K. By Schwartz Lemma [45], two different polynomials in z
(in each variable) can agree on at most knjLj n\Gamma1 assignments in L n . Therefore, by picking
at random poly(n; random elements in L n we can obtain, with very high probability, a
counterexample to our hypothesis (if such a counterexample exists). We then proceed as
before (i.e., modify the counterexample to the domain \Sigma n etc.)
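The coordinate-fixing step used above to move a counterexample from K^n into \Sigma^n can be
sketched as follows (a hypothetical helper, assuming exact field arithmetic and callable
versions f and h of the target and the hypothesis):

def move_counterexample(f, h, w, Sigma):
    # Invariant: f(w) != h(w).  At step i both functions, with the other
    # coordinates fixed, are degree-k polynomials in z_i, so they must also
    # disagree on one of the k+1 points of Sigma.
    w = list(w)
    for i in range(len(w)):
        for sigma in Sigma:
            candidate = w[:i] + [sigma] + w[i + 1:]
            if f(candidate) != h(candidate):
                w[i] = sigma
                break
    return w       # now w lies in Sigma^n and still separates f from h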
An algorithm which learns multivariate polynomials using only membership queries is
called an interpolation algorithm (e.g. [10, 28, 47, 24, 43, 30]; for more background and
references see [48]). In [10] it is shown how to interpolate polynomials over infinite fields
using only 2t membership queries. In [47] it is shown how to interpolate polynomials over
finite fields
elements. If the number of elements in the field is less than
k then every efficient algorithm must use equivalence queries [24, 43]. In Theorem 4.3 the
polynomials we interpolate have a more general form than in standard interpolation and we
only require that the number of elements in the field is at least kn + 1.
4.2 Classes of Boxes
In this section we consider unions of n-dimensional boxes in ['] n (where ['] denotes the set
Formally, a box in ['] n is defined by two corners (a
(in
We view such a box as a boolean function that gives 1 for every point in ['] n which is inside
the box and 0 to each point outside the box. We start with a more general claim.
Theorem 4.4 Let p i;j (z arbitrary functions of a single variable (1
be defined by
Assume that there is no point
which satisfies more than s functions g i . Finally, let f : \Sigma n ! f0; 1g be defined by
F be the Hankel matrix corresponding to f . Then, for every field K and for
every
Proof: The function f can be expressed as:
Y
jSj=t
jSj=s
where the last equality is by the assumption that no point satisfies more than s functions.
Note that, every function of the form
i2S g i is a product of at most n functions, each one
is a function of a single variable. Therefore, applying Theorem 4.1 complete the proof.
Corollary 4.5 The class of unions of disjoint boxes can be learned in time poly(n; t; ') (where
t is the number of boxes in the target function). The class of unions of O(log n) boxes can
be learned in time poly(n; ').
Proof: Let B be any box and denote the two corners of B by (a
functions (of a single 1g to be 1 if a j - z
be defined by
belongs to the box B. Therefore, Corollary 3.4 and Theorem 4.4 imply this corollary.
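For intuition (an invented example, not from the paper), the snippet below takes the union
of two disjoint boxes in [4]^3 -- which, as in the proof above, is a sum of t = 2 products of
univariate indicators -- and checks numerically that every Hankel block of the union has
rank at most 2 over the rationals.

import itertools
import numpy as np

l, n = 4, 3
boxes = [((1, 1, 1), (2, 2, 2)), ((3, 1, 1), (4, 4, 4))]   # disjoint: x_1 <= 2 vs. x_1 >= 3

def f(z):
    return int(any(all(a[j] <= z[j] <= b[j] for j in range(n)) for a, b in boxes))

for d in range(n + 1):
    rows = list(itertools.product(range(1, l + 1), repeat=d))
    cols = list(itertools.product(range(1, l + 1), repeat=n - d))
    F_d = np.array([[f(x + y) for y in cols] for x in rows], dtype=float)
    print(d, np.linalg.matrix_rank(F_d))   # every printed rank is at most t = 2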
4.3 Classes of DNF formulae
In this section we present several results for classes of DNF formulae and some related classes.
We first consider the following special case of Corollary 4.2 that solves an open problem of
[44]:
Corollary 4.6 The class of functions that can be expressed as exclusive-OR of t (not necessarily
monotone) monomials is learnable in time poly(n; t).
While Corollary 4.6 does not refer to a subclass of DNF, it already implies the learnability
of Disjoint (i.e., Satisfy-1) DNF. Also, since DNF is a special case of union of boxes (with
2), we can get the learnability of disjoint DNF from Corollary 4.5. Next we discuss positive
results for Satisfy-s DNF with larger values of s. The following two important corollaries
follow from Theorem 4.4. Note that Theorem 4.4 holds in any field. For convenience (and
efficiency), we will use
Corollary 4.7 The class of Satisfy-s DNF formulae, for
Corollary 4.8 The class of Satisfy-s, t-term DNF formulae is learnable for the following
choices of s and t: (1) log log n); (3)
log log n ) and
4.4 Classes of Decision Trees
As mentioned above, our algorithm efficiently learns the class of Disjoint DNF formulae.
This in particular includes the class of Decision-trees. By using our algorithm, decision
trees of size t on n variables are learnable using O(tn) equivalence queries and O(t 2 n log n)
membership queries. This is better than the best known algorithm for decision trees [18]
(which uses O(t 2 ) equivalence queries and O(t 2 In what follows we
consider more general classes of decision trees.
Corollary 4.9 Consider the class of decision trees that compute functions f
GF(p) as follows: each node v contains a query of the form "x i 2 S v ?", for some S v ' GF(p).
then the computation proceeds to the left child of v and if x
the computation
proceeds to the right child. Each leaf ' of the tree is marked by a value
is the output on all assignments which reach this leaf. Then, this class is learnable in time
poly(n; jLj; p), where L is the set of all leaves.
Proof: Each such tree can be written as
' is a function
whose value is 1 if the assignment reaches the leaf ' and 0 otherwise (note that
in a decision tree each assignment reaches a single leaf). Consider a specific leaf '. The
assignments that reach ' can be expressed by n sets S ';n such that the assignment
reaches the leaf ' if and only if x j 2 S ';j for all j. Define p ';j to be 1 if
. By Corollary 4.2 the result follows.
The above result implies as a special case the learnability of decision trees with "greater-
than" queries in the nodes. This is an open problem of [18]. Note that every decision tree
with "greater-than" queries that computes a boolean function can be expressed as the union
of disjoint boxes. Hence, this case can also be derived from Corollary 4.5.
The next theorem will be used to learn more classes of decision trees.
Theorem 4.10 Let
defined
F be the Hankel matrix corresponding to f , and G i be the Hankel
matrix corresponding to g i . Then, rank(F d
Proof: For two matrices A and B of the same dimension, the Hadamard product
A fi B is defined by C . It is well known that rank(C) - rank(A) \Delta rank(B).
Note that F
hence the theorem follows.
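The rank inequality for the Hadamard product can be checked numerically on random
low-rank matrices (purely illustrative code, not from the paper):

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 5, (10, 2)) @ rng.integers(0, 5, (2, 12))   # rank(A) <= 2
B = rng.integers(0, 5, (10, 3)) @ rng.integers(0, 5, (3, 12))   # rank(B) <= 3
C = A * B                                                       # Hadamard product
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B),
      np.linalg.matrix_rank(C))                                 # rank(C) <= rank(A) * rank(B) = 6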
This theorem has some interesting applications, such as:
Corollary 4.11 Let C be the class of functions that can be expressed in the following way:
arbitrary functions of a single variable (1
be defined by \Sigma n
Finally,
be defined by
learnable in time poly(n; j\Sigmaj).
Corollary 4.12 Consider the class of decision trees of depth s, where the query at each node
v is a boolean function f v with r (as defined in Section 3.1) such that (t+1)
Then, this class is learnable in time poly(n; j\Sigmaj).
Proof: For each leaf ' we write a function g ' as a product of s functions as follows: for
each node v along the path to ' if we use the edge labeled 1 we take f v to the product
while if we use the edge labeled 0 we take to the product (note that the value r max
corresponding to (1 \Gamma f v ) is at most t 1). By Theorem 4.10, if G ' is the Hankel matrix
corresponding to g ' then rank(G d
' ) is at most (t+1) s . As
it follows that rank(F d )
is at most 2 s (this is because jLj - 2 s and rank(B)). The
corollary follows.
The above class contain for example all the decision trees of depth O(log n) that contain
in each node a term or XOR of a subset of variables (as defined in [34]).
5 Negative Results
The purpose of this section is to study some limitation of the learnability via the automaton
representation. We show that our algorithm, as well as any algorithm whose complexity is
polynomial in the size of the automaton (such as the algorithms in [12, 13, 40]), does not
efficiently learn several important classes of functions. More precisely, we show that these
classes contain functions f that have no "small" automaton. By Theorem 2.1, it is enough
to prove that the rank of the corresponding Hankel matrix F is "large" over every field K.
We define a function f exists
such that z 1. The function f n;k can be expressed as a DNF formula by:
Note that this formula is read-once, monotone and has k terms.
First, observe that the rank of the Hankel matrix corresponding to f_{n,k} equals the rank
of F, the Hankel matrix corresponding to f_{2k,k}. It is also clear that rank(F) \ge rank(F^k).
We now prove that rank(F^k) \ge 2^k - 1. To do so, we consider the complement matrix D^k
(obtained from F^k by switching 0's and 1's), and prove by induction on k that rank(D^k) = 2^k.
Note that, ordering the strings of \{0,1\}^k so that those starting with 0 come first,

  D^1 = [ 1 1 ; 1 0 ]   and   D^k = [ D^{k-1} D^{k-1} ; D^{k-1} 0 ].

This implies that rank(D^1) = 2 and rank(D^k) = 2 \cdot rank(D^{k-1}) = 2^k, over every field K.
It follows that rank(F^k) = rank(J - D^k) \ge rank(D^k) - rank(J) = 2^k - 1 (where J is the all-1 matrix). 12
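The lower bound just derived can be observed directly for small k (illustrative code, not
from the paper; the numeric rank is computed over the rationals, whereas the bound in the
text holds over every field):

import itertools
import numpy as np

def f(x, y):                       # f_{2k,k}(x o y) = 1 iff x_i = y_i = 1 for some i
    return int(any(a == 1 and b == 1 for a, b in zip(x, y)))

for k in range(1, 6):
    strings = list(itertools.product((0, 1), repeat=k))
    F_k = np.array([[f(x, y) for y in strings] for x in strings], dtype=float)
    print(k, np.linalg.matrix_rank(F_k))   # each printed value is at least 2**k - 1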
Using the functions f n;k we can now prove the main theorem of this section:
In fact, the function f 0
z n\Gammak+1 has similar properties to f n;k and can be shown
to have
rank\Omega\Gamman k \Delta n) hence slightly improving the results below.
Theorem 5.1 The following classes are not learnable as multiplicity automata (over any
field K):
1. DNF.
2. Monotone DNF.
3. 2-DNF.
4. Read-once DNF.
5. k-term DNF, for k = \omega(\log n).
6. Satisfy-s DNF, for s = \omega(1).
7. Read-j Satisfy-s DNF, for
n).
Some of these classes are known to be learnable by other methods (monotone DNF
[5], read-once DNF [6, 1, 41] and 2-DNF [46]), some are natural generalizations of classes
known to be learnable as automata (log n-term DNF [17, 18, 20, 33], and Satisfy-s DNF for
or by other methods (Read-j Satisfy-s for log log n) [16]), and
the learnability of some of the others is still an open problem.
Proof: Observe that f n;n=2 belongs to each of the classes DNF, Monotone DNF, 2-DNF,
Read-once DNF and that by the above argument every automaton for it has size 2 n=2 . This
shows
For every k = \omega(\log n), the function f_{n,k} has exactly k terms and every automaton for it
has size 2^{\omega(\log n)}, which is super-polynomial. This proves 5.
For s = \omega(1), consider the function f_{n, s \log n}. Every automaton for it has size 2^{s \log n} = n^s,
which is super-polynomial. We now show that the function f_{n, s \log n} has a small Satisfy-s
DNF representation. For this, partition the indices 1, ..., s \log n into s sets of \log n
indices. For each set S there is a formula on 2 \log n variables which is 1 iff there exists i \in S
such that z_i = z_{k+i} = 1. Moreover, there is such a formula which is Satisfy-1 (i.e., disjoint)
DNF, and it has n^2 terms (this is the standard DNF representation). The disjunction of
these s formulas gives a Satisfy-s DNF with s n^2 terms. This proves 6.
Finally, for
As before, the function
f n;k requires an automaton of super-polynomial size. On the other hand, by partitioning
the variables into s sets of log j variables as above (and observe that in the standard DNF
representation each variable appears 2 log this function is a Read-j Satisfy-s
DNF. This proves 7.
In what follows we wish to strengthen the previous negative results. The motivation is
that in the context of automata there is a fixed order on the characters of the string. However,
in general (and in particular for functions over \Sigma n ) there is no such "natural" order. Indeed,
there are important functions such as Disjoint DNF which are learnable as automata using
any order of the variables. On the other hand, there are functions for which certain orders
are much better than others. For example, the function f n;k requires automaton of size
exponential in k when the standard order is considered, but if instead we read the variables
in the order there is a small (even deterministic) automaton
for it (of size O(n)). As an additional example, every read-once formula has a "good" order
(the order of leaves in a tree representing the formula).
Our goal is to show that even if we had an oracle that could give us a "good" (not
necessarily the best) order of the variables (or if we could somehow learn such an order) then
still some of the above classes cannot be learned as automata. This is shown by exhibiting
a function that has no "small" automaton in every order of the variables. To show this,
we define a function n) as follows. Denote the input
variables for g n;k as w k. The function g n;k outputs
1 iff there exists t such that w
Intuitively, g n;k is similar to f n;k but instead of comparing the first k variables to the next k
variables we first "shift" the first k variables by t. 13
First, we show how to express g n;k as a DNF formula. For a fixed t, define a function
to be 1 iff ( ) holds. Observe that g n 0 ;k;t is isomorphic to f n 0 ;k and
so it is representable by a DNF formula (with k terms of size 2). Now, we write g
Therefore, g n;k can be written as a monotone, read-k, DNF of k 2 terms
each of size 3.
We now show that, for every order - on the variables, the rank of the matrix corresponding
to g n;k is large. For this, it is sufficient to prove that for some value t the rank of the matrix
corresponding to g n 0 ;k;t is large, since this is a submatrix of the matrix corresponding to g n;k
(to see this fix w As before, it is sufficient to prove that for
some t the rank of g 2k;k;t is large. The main technical issue is to choose the value of t. For this,
look at the order that - induces on z (ignoring w Look at the first
k indices in this order and assume, without loss of generality, that at least half of them are
from (hence out of the last k indices at least half are from
13 The rank method used to prove that every automaton for f n;k is "large" is similar to the rank method of
communication complexity. The technique we use next is also similar to methods used in variable partition
communication complexity. For background see, e.g., [36, 35].
Denote by A the set of indices from that appear among the first k indices
under the order -. Denote by B the set of indices i such that appears among the
last k indices under the order -. Both A and B are subsets of and by the
assumption, jAj; jBj - k=2. Define A Ag. We now show that for some t
the size of A t " B is \Omega\Gamma k). For this, write
Let t 0 be such that
" B has size jSj - k=4. Denote by G the matrix corresponding
to g 2k;k;t 0
. In particular let G 0 be the submatrix of G with rows that are all strings x of
length k (according to the order -) whose bits out of S are fixed to 0's and with columns
that are all strings y of length k whose bits which are not of the
are fixed to 0's. This matrix is the same matrix obtained in the proof for f k;k=2 whose rank
is therefore 2 k=2 \Gamma 1.
Corollary 5.2 The following classes are not learnable as automata (over any field K) even
if the best order is known:
1. DNF.
2. Monotone DNF.
3. 3-DNF.
4. k-term DNF, for
5. Satisfy-s DNF, for
--R
Exact learning of read-twice DNF formulas
Exact learning of read-k disjoint DNF and not-so-disjoint DNF
Learning k-term DNF formulas using queries and counterexamples
Learning regular sets from queries and counterexamples.
Machine Learning
Learning read-once formulas with queries
On the applications of multiplicity automata in learning.
Learning boxes in high dimension.
A deterministic algorithm for sparse multivariate polynomial interpolation.
Learning sat-k-DNF formulas from membership queries
Learning behaviors of automata from multiplicity and equivalence queries.
Learning behaviors of automata from multiplicity and equivalence queries.
Learning behaviors of automata from shortest coun- terexamples
Rational Series and Their Languages
On learning read-k-satisfy- j DNF
Fast learning of k-term DNF formulas with queries
Exact learning via the monotone theory.
A note on learning multivariate polynomials under the uniform distribu- tion
Simple learning algorithms using divide and conquer.
Learning matrix functions over rings.
Realization by stochastic finite automaton.
On zero-testing and interpolation of k-sparse multivariate polynomials over finite fields
Matrices de Hankel.
Learning unions of boxes with membership and equivalence queries.
Fast parallel algorithms for sparse multivariate polynomial interpolation over finite fields.
Learning 2- DNF formulas and k- decision trees
Interpolation of sparse multivariate polynomials over large finite fields with applications.
An efficient membership-query algorithm for learning DNF with respect to the uniform distribution
An Introduction to Computational Learning Theory.
A simple algorithm for learning O(log n)-term DNF
Learning decision trees using the Fourier spectrum.
Communication Complexity.
VLSI theory.
On the complexity of learning from counterexamples.
Algorithms and lower bounds for on-line learning of geometrical concepts
Efficient learning with virtual threshold gates.
A polynomial time learning algorithm for recognizable series.
Inference of finite automata using homing sequences.
Interpolation and approximation of sparse multivariate polynomials over GF (2).
Learning sparse multivariate polynomials over a field with queries and counterexamples.
Fast probabilistic algorithms for verification of polynomial identities.
A theory of the learnable.
Interpolating polynomials from their values.
Efficient Polynomial Computation.
--TR
A theory of the learnable
Learning regular sets from queries and counterexamples
Rational series and their languages
A deterministic algorithm for sparse multivariate polynomial interpolation
Interpolating polynomials from their values
Introduction to algorithms
Fast parallel algorithms for sparse multivariate polynomial interpolation over finite fields
Interpolation and approximation of sparse multivariate polynomials over GF(2)
Learning 2u DNF formulas and <italic>ku</italic> decision trees
VLSI theory
On zero-testing and interpolation of <inline-equation> <f> k</f> </inline-equation>-sparse multivariate polynomials over finite fields
Exact learning of read-twice DNF formulas (extended abstract)
On-line learning of rectangles
Random DFA''s can be approximately learned from sparse uniform examples
Exact learning of read-<italic>k</italic> disjoint DNF and not-so-disjoint DNF
Learning read-once formulas with queries
C4.5: programs for machine learning
Learning decision trees using the Fourier spectrum
Cryptographic hardness of distribution-specific learning
On-line learning of rectangles in noisy environments
Cryptographic limitations on learning Boolean formulae and finite automata
Inference of finite automata using homing sequences
On learning Read-<italic>k</italic>-Satisfy-<italic>j</italic> DNF
Learning unions of boxes with membership and equivalence queries
Algorithms and Lower Bounds for On-Line Learning of Geometrical Concepts
An introduction to computational learning theory
Read-twice DNF formulas are properly learnable
Fast learning of <italic>k</italic>-term DNF formulas with queries
Exact learning Boolean functions via the monotone theory
A note on learning multivariate polynomials under the uniform distribution (extended abstract)
Learning sparse multivariate polynomials over a field with queries and counterexamples
Learning Sat-<italic>k</italic>-DNF formulas from membership queries
Learning Behaviors of Automata from Multiplicity and Equivalence Queries
Simple learning algorithms using divide and conquer
A simple algorithm for learning O (log <italic>n</italic>)-term DNF
Communication complexity
An efficient membership-query algorithm for learning DNF with respect to the uniform distribution
The art of computer programming, volume 2 (3rd ed.)
Efficient learning with virtual threshold gates
Interpolation of sparse multivariate polynomials over large finite fields with applications
Fast Probabilistic Algorithms for Verification of Polynomial Identities
Automata, Languages, and Machines
Induction of Decision Trees
Queries and Concept Learning
Probabilistic algorithms for sparse polynomials
Learning behaviors of automata from shortest counterexamples
Simple learning algorithms for decision trees and multivariate polynomials
On the applications of multiplicity automata in learning
--CTR
Amir Shpilka, Interpolation of depth-3 arithmetic circuits with two multiplication gates, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA
Nader H. Bshouty , Lynn Burroughs, On the proper learning of axis-parallel concepts, The Journal of Machine Learning Research, 4, p.157-176, 12/1/2003
Lane A. Hemaspaandra, SIGACT News complexity theory column 32, ACM SIGACT News, v.32 n.2, June 2001
Ricard Gavald , Pascal Tesson , Denis Thrien, Learning expressions and programs over monoids, Information and Computation, v.204 n.2, p.177-209, February 2006 | DNF;learning disjoint;multiplicity automata;computational learning;learning polynomials |
337358 | An inheritance-based technique for building simulation proofs incrementally. | This paper presents a technique for incrementally constructing safety specifications, abstract algorithm descriptions, and simulation proofs showing that algorithms meet their specifications.The technique for building specifications (and algorithms) allows a child specification (or algorithm) to inherit from its parent by two forms of incremental modification: (a) interface extension, where new forms of interaction are added to the parent's interface, and (b) specialization (subtyping), where new data, restrictions, and effects are added to the parent's behavior description. The combination of interface extension and specialization constitutes a powerful and expressive incremental modification mechanism for describing changes that do not override the behavior of the parent, although it may introduce new behavior.Consider the case when incremental modification is applied to both a parent specification S and a parent algorithm A. A proof that the child algorithm A implements the child specification S can be built incrementally upon simulation proof that algorithm A implements specification S. The new work required involves reasoning about the modifications, but does not require repetition of the reasoning in the original simulation proof.The paper presents the technique mathematically, in terms of automata. The technique has already been used to model and validate a full-fledged group communication system (see [26]); the methodology and results of that experiment are summarized in this paper. | INTRODUCTION
Formal modeling and validation of software systems is
a major challenge, because of their size and complex-
ity. Among the factors that could increase widespread
usage of formal methods is improved cost-effectiveness
and scalability (cf. [20, 22]). Current software engineering
practice addresses problems of building complex systems
by the use of incremental development techniques
based on an object-oriented approach. We believe that
successful efforts in system modeling and validation will
also require incremental techniques, which will enable
reuse of models and proofs.
In this paper we provide a framework for reuse of
proofs analogous and complementary to the reuse provided
by object-oriented software engineering method-
ologies. Specifically, we present a technique for incrementally
constructing safety specifications, abstract algorithm
descriptions, and simulation proofs that algorithms
specifications. Simulation proofs are
one of the most important techniques for proving properties
of complex systems; such proofs exhibit a simulation
relation (refinement mapping, abstraction func-
tion) between a formal description of a system and its
specification [13, 24, 29].
The technique presented in this paper has evolved with
our experience in the context of a large-scale modeling
and validation project: we have successfully used
this technique for modeling and validating a complex
group communication system [26] that is implemented
in C++, and that interacts with two other services developed
by different teams. The group communication
system acts as middleware in providing tools for building
distributed applications. In order to be useful for
a variety of applications, the group communication system
provides services with diverse semantics that bear
many similarities, yet differ in subtle ways. We have
modeled the diverse services of the system and validated
the algorithms implementing each of these ser-
vices. Reuse of models and proofs was essential in order
to make this task feasible. For example, it has allowed
us to avoid repeating the five-page long correctness
proof of the algorithm that provides the most basic
semantics when proving the correctness of algorithms
that provide the more sophisticated semantics. The correctness
proof of the most sophisticated algorithm, by
comparison, was only two and a half pages long. (We
describe our experience in this project as well as the
methodology that evolved from it in Section 6.)
Our approach to the reuse of specifications and algorithms
through inheritance uses incremental modification
to derive a new component (specification or algo-
rithm), called child , from an existing component called
parent . Specifically, we present two constructions for
modifying existing components:
1. We allow the child to specialize the parent by
reusing its state in a read-only fashion, by adding
new state components (read/write), and by constraining
the set of behaviors of the parent. This
corresponds to the subtyping view of inheritance [8].
We will show that any observable behavior of the
child is subsumed (cf. [1]) by the possible behaviors
of the parent, making our specialization analogous
to the substitution inheritance [8]. In particular,
the child can be used anywhere the parent can be
used. (Specialization is the subject of Section 3.)
2. A child can also be derived from a parent by means
of interface (signature) extension. In this case the
state of the parent is unchanged, but the child may
include new observable actions not found in the
parent and new parameters to actions that exist
at the parent. When such new actions and parameters
are hidden, then any behavior of the child is
exactly as some behavior of the parent. (Interface
extension is presented in Section 5.)
When interface extension is combined with specializa-
tion, this corresponds to the subclassing for extension
form of inheritance [8] which provides a powerful mechanism
for incrementally constructing specifications and
algorithms. Consider the following example. The parent
defines an unordered messaging service using the
send and recv primitives. To produce a totally ordered
messaging service we specialize the parent in such a way
that recv is only possible when the current message is
totally ordered. Next we introduce the safe primitive,
which informs the sender that its message was deliv-
ered. First we extend the service interface to include
safe primitives and then we specialize to enable safe actions
just in case the message was actually delivered.
The specialization and extension constructs can be applied
at both the specification level and the algorithm
level in a way that preserves the relationship between
the specification and the algorithm. The main technical
challenge addressed in this paper (in Section 4) is the
provision of a formal framework for the reuse of simulation
proofs especially for the specialization construct.
Consider the example in Figure 1: Let S be a specification,
and A an abstract algorithm description. Assume
that we have proven that A implements S using a simulation
relation R_p. Assume further that we specialize
the specification S, yielding a new child specification S'.
At the same time, we specialize the algorithm A to construct
an algorithm A' which supports the additional
semantics required by S'.
Figure 1 Algorithm A simulates specification S with R_p.
Can R_p be reused for building a simulation R_c from
a child A' of A to a child S' of S?
When proving that A' implements S', we would like
to rely on the fact that we have already proven that
A implements S, and to avoid the need to repeat the
same reasoning. We would like to reason only about
the new features introduced by S' and A'. The proof
extension theorem in Section 4 provides the means for
incrementally building simulation proofs in this manner.
Simulation proofs [13] lend themselves naturally to be
supported by interactive theorem provers. Such proofs
typically break down into many simple cases based on
different actions. These can be checked by hand or with
the help of interactive theorem provers. Our incremental
simulation proofs break down in a similar fashion.
We present our incremental modification constructs in
the context of the I/O automata model [30, 32] (the
basics of the model are reviewed in Section 2). I/O
automata have been widely used in formulating formal
service definitions and abstract implementations, and
for reasoning about them, e.g., [6, 9, 11, 12, 14, 15, 21,
24, 28, 31]). An important feature of the I/O automaton
formalism is its strong support of composition. For
example, Hickey et al. [24] used the compositional approach
for modeling and verification of certain modules
in Ensemble [19], a large-scale, modularly structured,
group communication system. Introducing inheritance
into the I/O automaton model is vital in order to push
the limits of such projects from verification of individual
modules to verification of entire systems, as we have
experienced in our work on such a project [26]. Further-
more, a programming and modeling language based on
I/O automata formalism, IOA [17, 18] has been defined.
We intend to exploit the IOA framework, to develop
IOA-based tools to support the techniques presented in
this paper both for validation and for code generation.
Stata and Guttag [36] have recognized the need for reuse
in a manner similar to that suggested in this paper,
which facilitates reasoning about correctness of a sub-class
given the correctness of the superclass is known.
They suggest a framework for defining programming
guidelines and supplement this framework with informal
rules that may be used to facilitate such reason-
ing. However, they only address informal reasoning and
do not provide the mathematical foundation for formal
proofs. Furthermore, [36] is restricted to the context of
sequential programming and does not encompass reactive
components as we do in this paper.
Many other works, e.g., [1, 6, 10, 23, 25, 33], have formally
dealt with inheritance and its semantics. Our distinguishing
contribution is the provision of a mathematical
framework for incremental construction of simulation
proofs by applying the formal notion of inheritance
at two levels: specification and algorithm.
This section presents background on the I/O automaton
model, based on [30], Ch. 8. In this model, a system
component is described as a state-machine, called an
I/O automaton. The transitions of the automaton are
associated with named actions, classified as input, output
and internal. Input and output actions model the
component's interaction with other components, while
internal actions are externally unobservable.
Formally, an I/O automaton A consists of: an interface
(or signature), sig(A), consisting of input, output and
internal actions; a set of states, states(A); a set of start
states, start(A); and a state-transition relation trans(A),
a subset of states(A) × sig(A) × states(A).
An action π is said to be enabled in a state s if the automaton
has a transition of the form (s, π, s'); input actions
are enabled in every state. An execution of an automaton
is an alternating sequence of states and actions
that begins with a start state, and successive triples are
allowable transitions. A trace is a subsequence of an
execution consisting solely of the automaton's external
actions. The I/O automaton model defines a composition
operation which specifies how automata interact
via their input and output actions.
I/O automata are conveniently presented using the
precondition-effect style. In this style, typed state variables
with initial values specify the set of states and the
start states. Transitions are grouped by action name,
and are specified using a pre: block with preconditions
on the states in which the action is enabled and an eff:
block which specifies how the pre-state is modified. The
effect is executed atomically to yield the post-state.
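As a concrete illustration of the model just reviewed, the following is a minimal sketch, in Python, of a finite I/O automaton represented explicitly by its signature, states, start states and steps. The representation and field names are our own choices made for illustration (they are not notation from this paper), and the one-place buffer is a toy example.

from dataclasses import dataclass
from typing import Any, FrozenSet, Tuple

State = Any
Action = str
Step = Tuple[State, Action, State]

@dataclass(frozen=True)
class IOAutomaton:
    # A finite I/O automaton: signature, states, start states and steps.
    inputs: FrozenSet[Action]
    outputs: FrozenSet[Action]
    internals: FrozenSet[Action]
    states: FrozenSet[State]
    start: FrozenSet[State]
    trans: FrozenSet[Step]  # a subset of states x sig x states

    @property
    def sig(self) -> FrozenSet[Action]:
        return self.inputs | self.outputs | self.internals

    @property
    def external(self) -> FrozenSet[Action]:
        return self.inputs | self.outputs

    def enabled(self, s: State, a: Action) -> bool:
        # An action a is enabled in state s if some step (s, a, s') exists.
        return any(t == s and act == a for (t, act, _) in self.trans)

# A toy one-place buffer: 'put' is an input, 'get' an output.
BUFFER = IOAutomaton(
    inputs=frozenset({"put"}),
    outputs=frozenset({"get"}),
    internals=frozenset(),
    states=frozenset({"empty", "full"}),
    start=frozenset({"empty"}),
    trans=frozenset({("empty", "put", "full"),
                     ("full", "put", "full"),   # input actions are enabled in every state
                     ("full", "get", "empty")}),
)
assert BUFFER.enabled("empty", "put") and not BUFFER.enabled("empty", "get")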
Simulation Relations
When reasoning about an automaton, we are only interested
in its externally-observable behavior as reflected in
its traces. A common way to specify the set of traces an
automaton is allowed to generate is using (abstract) I/O
automata that generate the legal sets of traces. An implementation
automaton satisfies a specification if all of
its traces are also traces of the specification automaton.
Simulation relations are a commonly used technique for
proving trace inclusion:
Definition 2.1 Let A and S be two automata with the
same external interface. Then a relation R ⊆ states(A)
× states(S) is a simulation from A to S if it satisfies
the following two conditions:
1. If t is any initial state of A, then there is an initial
state s of S such that s ∈ R(t).
2. If t and s ∈ R(t) are reachable states of A and
S respectively, and if (t, π, t') is a step of A, then
there exists an execution fragment of S from s to
some state s' ∈ R(t'), having the same trace as (t, π, t').
The following theorem emphasizes the significance of
simulation relations. (It is proven in [30], Ch. 8.)
Theorem 2.1 If A and S are two automata with the
same external interface and if R is a simulation from A
to S then traces(A) ⊆ traces(S).
The simulation relation technique is complete: any finite
trace inclusion can be shown by using simulation
relations in conjunction with history and prophecy variables
[2, 35].
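For small finite automata the two conditions of Definition 2.1 can be checked mechanically. The Python sketch below uses a plain dict representation (start states, external actions, step triples) and checks a deliberately restricted form of condition 2, in which each implementation step is matched either by a single specification step with the same action or, for non-external actions, by the empty fragment; a full checker would search for longer matching fragments. The representation and names are our own assumptions.

def is_simulation(R, A, S):
    # R is a set of (t, s) pairs; A and S are dicts with keys
    # 'start' (set of states), 'external' (set of actions) and
    # 'trans' (set of (state, action, state) triples).
    # Condition 1: every start state of A is related to some start state of S.
    if not all(any((t, s) in R for s in S['start']) for t in A['start']):
        return False
    # Condition 2 (restricted): each step (t, a, t') of A taken from a related
    # pair (t, s) is matched by the empty fragment (if a is not external and
    # (t', s) is still related) or by a single step (s, a, s') of S with
    # (t', s') related.
    for (t, s) in R:
        for (t0, a, t1) in A['trans']:
            if t0 != t:
                continue
            stutter_ok = a not in A['external'] and (t1, s) in R
            step_ok = any(s0 == s and b == a and (t1, s1) in R
                          for (s0, b, s1) in S['trans'])
            if not (stutter_ok or step_ok):
                return False
    return True

# A ticks internally and then emits 'out'; S simply allows 'out' forever.
A = {'start': {0}, 'external': {'out'},
     'trans': {(0, 'tick', 1), (1, 'out', 0)}}
S = {'start': {'s'}, 'external': {'out'},
     'trans': {('s', 'out', 's')}}
assert is_simulation({(0, 's'), (1, 's')}, A, S)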
Our specialization construct captures the notion of sub-typing
in I/O automata in the sense of trace inclusion;
it allows creating a child automaton which specializes
the parent automaton. The child can read the parent's
state, add new (read/write) state components, and restrict
the parent's transitions. The specialize construct
defined below operates on a parent automaton, and accepts
three additional parameters: a state extension -
the new state components, an initial state extension -
the initial values of the new state components, and a
transition restriction which specifies the child's addition
of new preconditions and effects (modifying new state
components only) to parent transitions. We define the
specialization construct formally below.
Definition 3.1 Let A be an automaton; let N be a set
of states, called a state extension; let N0 be a non-empty
subset of N, called an initial state extension; let
TR ⊆ (states(A) × N) × sig(A) × N be a relation,
called a transition restriction. For each action π, TR
specifies the additional restrictions that a child places
on the states of A and N in which π is enabled and specifies
how the new state components are modified as a
result of a child taking a step involving π.
Then specialize(A)(N, N0, TR) defines an automaton A'
as follows: sig(A') = sig(A); states(A') = states(A) × N;
start(A') = start(A) × N0; and trans(A') consists of all
triples ((t, n), π, (t', n')) such that (t, π, t') ∈ trans(A)
and ((t, n), π, n') ∈ TR.
Notation 3.2 If A' = specialize(A)(N, N0, TR) and
t ∈ states(A'), we use the following notation: t|p denotes
its parent component and t|n denotes its
new component. If α is an execution sequence of A',
then α|p denotes a sequence obtained by replacing
each state t in α with t|p. We also extend
this notation to sets of states and to sets of execution
sequences.
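Operationally, we read Definition 3.1 as the construction sketched below in Python (over the same plain dict representation as the earlier simulation sketch): child states pair a parent state with a new component, child start states pair parent start states with elements of N0, and a child step requires both a parent step and a TR-consistent update of the new component. This is our reconstruction for illustration, not the paper's own code, and the example restriction is hypothetical.

def specialize(A, N, N0, TR):
    # Child automaton specialize(A)(N, N0, TR); states are pairs
    # (parent_state, new_component).  TR contains triples
    # ((parent_state, n), action, n'): extra preconditions plus the
    # effect on the new component only.
    trans = set()
    for (t, a, t2) in A['trans']:
        for n in N:
            for n2 in N:
                if ((t, n), a, n2) in TR:
                    trans.add(((t, n), a, (t2, n2)))
    return {'start': {(t, n) for t in A['start'] for n in N0},
            'external': set(A['external']),   # the signature is unchanged
            'trans': trans}

# Example: restrict 'out' so that it may only occur when a new boolean
# flag is True; 'tick' is left unrestricted and keeps the flag unchanged.
A = {'start': {0}, 'external': {'out'},
     'trans': {(0, 'tick', 1), (1, 'out', 0)}}
N, N0 = {True, False}, {True}
TR = ({((t, n), 'tick', n) for t in (0, 1) for n in N} |
      {((t, True), 'out', True) for t in (0, 1)})
child = specialize(A, N, N0, TR)
assert ((1, True), 'out', (0, True)) in child['trans']
assert not any(src[1] is False and a == 'out' for (src, a, _) in child['trans'])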
We now exemplify the use of the specialization con-
struct. Figure 2 presents a simple algorithm automaton,
write_through_cache, implementing a sequentially-consistent
register x shared among a set of processes P.
Each process p has access to a local variable cache_p.
Register x is initialized to some default value v_0.
A write_p(v) request propagates v to both x and cache_p.
A response read_p(v) to a read request returns the value
v of p's local cache_p without ensuring that it is current.
Thus, a process p responds to a read request with a value
of x which is at least as current as the last value previously
seen by p but not necessarily the most up-to-date one.
Figure 3 presents an atomic write-through cache automaton,
atomic_write_through_cache, as a specialization
of write_through_cache. The specialized
automaton maintains an additional boolean variable
synched_p for each process p in order to restrict
the behavior of the parent so that a response to a read
request returns the latest value of x. The traces of this
automaton are indistinguishable from those of a system
with a single shared register and no cache.
In general, the transition restriction denoted by this
type of precondition-effect code is the union of the following
two sets:
• All triples of the form (t, π, t|n) for which π is
not mentioned in the code for A', i.e., A' does not
Figure 2 Write-through cache automaton.
automaton write_through_cache
Signature:
Input: write_p(v), read_req_p()
Output: read_p(v)
Internal: synch_p()
State:
x, initially v_0
cache_p for each p in P, initially v_0
Transitions:
INPUT write_p(v)
eff: x := v; cache_p := v
INTERNAL synch_p()
eff: cache_p := x
INPUT read_req_p()
OUTPUT read_p(v)
pre: v = cache_p
Figure 3 Atomic write-through cache automaton.
automaton atomic_write_through_cache
modifies write_through_cache
State Extension:
synched_p for each p in P, initially true
Transition Restriction:
INPUT write_p(v)
eff: synched_q := false for every q ≠ p
INTERNAL synch_p()
eff: synched_p := true
OUTPUT read_p(v)
pre: synched_p = true
restrict transitions involving π. The read_req_p action
of Figure 2 is an example of such a π. Note
that the new state component, t|n, is not changed.
• All triples (t, π, n') in which state t satisfies the new
preconditions placed on π by A' and in which n'
is the result of applying π's new effects to t.
Theorem 3.1 below says that every trace of the specialized
automaton is a trace of the parent automaton.
In Section 4, we demonstrate how proving correctness
of automata presented using the specialization operator
can be done as incremental steps on top of the correctness
proofs of their parents.
Theorem 3.1 If A' is a child of an automaton A, then:
1. execs(A')|p ⊆ execs(A).
2. traces(A') ⊆ traces(A).
Proof 3.1:
1. Straightforward induction on the length of the execution
sequence. Basis: If t ∈ start(A'), then t|p ∈ start(A),
by the definition of start(A').
Inductive Step: If (t, π, t') is a step of A', then
(t|p, π, t'|p) is a step of A, by the definition of trans(A').
2. Follows from Part 1 and the fact that sig(A') =
sig(A). Alternatively, notice that trace inclusion
is implied by Theorem 2.1 and the fact that the
function that maps a state t ∈ states(A') to t|p
is a simulation mapping from A' to A.
The formalism we have introduced allows not only for
code reuse, but also, as we show in this section, for proof
reuse by means of incremental proof construction. We
start with an example, then we prove a general theorem.
An Example of Proof Reuse
We now revisit the shared register example of Section
3. We present a parent specification of a
sequentially-consistent shared register, and describe a
simulation that proves that it is implemented by the
write through cache automaton presented in the
previous section. We then derive a child specification
of an atomic shared register by specializing the parent
specification. Finally, we illustrate how a proof that automaton
atomic write through cache implements
the child specification can be constructed incrementally
from the parent-level simulation proof.
Figure 4 presents a standard specification of a
sequentially-consistent shared register x. The interface
of seq_consistent_register is the same as that of
write_through_cache. The specification maintains
a sequence hist-x of the values stored in x during an
execution. A write_p(v) request appends v to the end
of hist-x. A response read_p(v) to a read request is allowed
to return any value v that was stored in x since p
last accessed x; this nondeterminism is an innate part of
sequential consistency. The specification keeps track of
these last accesses with an index last_p into hist-x.
We argue that automaton write_through_cache of
Figure 2 satisfies this specification by exhibiting
a simulation relation R. R relates a state
t of write_through_cache to a state s of
seq_consistent_register as follows:
(t, s) ∈ R if and only if t.x is the last element of s.hist-x
and, for each p, there exists hi_p ∈ Integer such that
s.last_p ≤ hi_p ≤ |s.hist-x| and s.hist-x[hi_p] = t.cache_p.
A step of write_through_cache initiating
from state t and involving read_p(v) simulates a
Figure 4 Sequentially consistent shared register specification
automaton.
automaton seq_consistent_register
Signature:
Input: write_p(v), read_req_p()
Output: read_p(v)
State:
hist-x, a sequence of values, initially containing v_0
last_p for each p in P, initially 1
Transitions:
INPUT write_p(v)
eff: append v to hist-x;
last_p := |hist-x|
INPUT read_req_p()
OUTPUT read_p(v) choose i
pre: v = hist-x[i];
last_p ≤ i ≤ |hist-x|
eff: last_p := i
step of seq_consistent_register which initiates from
s and involves read_p(v) choose hi_p, where hi_p is the
number whose existence is implied by the simulation
relation R. Steps of write_through_cache involving
read_req_p() and write_p(v) actions simulate steps of
seq_consistent_register with the respective actions.
It is straightforward to prove that R satisfies the two
conditions of a simulation relation (Definition 2.1). We
are not interested in the actual proof, but only in reusing
it, i.e., avoiding the need to repeat it.
For the purpose of illustrating proof reuse, we present in
Figure 5 a specification of an atomic shared register as a
specialization of seq_consistent_register. The child
restricts the allowed values returned by read_p(v) to the
current value of x by restricting the non-deterministic
choice of i to be the index of the latest value in hist-x.
Figure 5 Atomic shared register specification.
automaton atomic_register
modifies seq_consistent_register
Transition Restriction:
OUTPUT read_p(v) choose i
pre: i = |hist-x|
We want to reuse the simulation R to prove that automaton
atomic write through cache implements
atomic register. Since atomic register does not
extend the states of seq consistent register, the
simulation relation does not need to be extended, and
it works as is. In general, one may need to extend the
simulation relation to capture how the imple-
mentation's state relates to the new state added by the
specification's child.
To prove that R is also a simulation relation from
the child algorithm atomic write through cache
to the child specification atomic register we have to
show two things:
First, we have to show that initial states of
atomic write through cache relate to the initial
states of atomic register. In general, as we prove
in Theorem 4.1 below, we need to check the new variables
added by the specification child. We need to show
that, for any initial state of the implementation, there
exists a related assignment of initial values to these new
variables. In our example, since atomic register does
not add any new state, we get this property for free.
Second, we need to show that whenever R simulates
a step of seq consistent register, this step is still
a valid transition in atomic register. As implied
by Theorem 4.1, we only have to check that the new
preconditions placed by atomic register on transitions
of seq consistent register are still satisfied
and that the extension of the simulation relation is pre-
served. Since in our example atomic register does
not add any new state variables, we only need to show
the first condition: whenever read_p(v) choose i is
simulated in atomic_register, the new precondition
"i = |hist-x|" holds.
Recall that, when read_p(v) choose i is simulated in
atomic_register, i is chosen to be hi_p. For this
simulation to work, we need to prove that it is always
possible to choose hi_p to be |hist-x|. This
follows immediately from the added precondition in
atomic_write_through_cache, which requires that
read_p(v) only occurs when synched_p = true, and
from the following simple invariant. (This invariant can
be proven by straightforward induction.)
Invariant 4.1 In any reachable state t of
atomic_write_through_cache:
t.synched_p = true ⟹ t.cache_p = t.x
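Invariants of this kind are easy to confirm exhaustively on small finite instances. The following Python sketch encodes a two-process, two-value instance of the write-through cache together with the synched flags of its atomic specialization, enumerates the reachable states by breadth-first search, and checks that synched_p implies cache_p = x everywhere. The encoding is our own simplification, and the check is a sanity test on one fixed instance rather than a replacement for the inductive proof.

from collections import deque
from itertools import product

PROCS, VALS, V0 = ("p", "q"), (0, 1), 0

def initial():
    # A state is (x, caches, synched), with caches and synched keyed by process.
    return (V0, {r: V0 for r in PROCS}, {r: True for r in PROCS})

def successors(state):
    x, caches, synched = state
    for w, v in product(PROCS, VALS):
        # write_w(v): propagate v to x and cache_w; every other process
        # becomes unsynched, since its cache may now be stale.
        yield (v, {**caches, w: v}, {r: (r == w) for r in PROCS})
    for w in PROCS:
        # synch_w(): refresh cache_w from x and mark w as synched again.
        yield (x, {**caches, w: x}, {**synched, w: True})
    # read_req and read do not change the state, so they are omitted here.

def freeze(state):
    x, caches, synched = state
    return (x, tuple(sorted(caches.items())), tuple(sorted(synched.items())))

def invariant(state):
    x, caches, synched = state
    return all(caches[r] == x for r in PROCS if synched[r])

queue, seen = deque([initial()]), {freeze(initial())}
while queue:
    s = queue.popleft()
    assert invariant(s), f"Invariant 4.1 fails in {s}"
    for s2 in successors(s):
        if freeze(s2) not in seen:
            seen.add(freeze(s2))
            queue.append(s2)
print(f"Invariant 4.1 holds in all {len(seen)} reachable states")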
Proof Extension Theorem
We now present the theorem which lays the foundation
for incremental proof construction. Consider the
example illustrated in Figure 1, where a simulation relation
R_p from an algorithm A to a specification S is
given, and we want to construct a simulation relation
R_c from a specialized version A' of an automaton A to
a specialized version S' of a specification automaton S.
In Theorem 4.1 we prove that such a relation R_c can be
constructed by supplementing R_p with a relation R_n that
relates the states of A' to the state extension introduced
by S'. Relation R_n has to relate every initial state of A'
to some initial state extension of S', and it has to satisfy
a step condition similar to the one in Definition 2.1, but
only involving the transition restriction relation of S'.
Theorem 4.1 Let automaton A' be a child of automaton
A. Let automaton S' be a child of automaton S such
that S' = specialize(S)(N, N0, TR). Let R_p be a
simulation from A to S. Let R_n be a relation from
states(A') to N. A relation R_c defined in
terms of R_p and R_n as
R_c(t) = { s ∈ states(S') : s|p ∈ R_p(t|p) and s|n ∈ R_n(t) }
is a simulation from A' to S' if R_c satisfies the following
two conditions:
1. For any t ∈ start(A'), there exists a state s|n ∈ R_n(t)
such that s|n ∈ N0.
2. If t is a reachable state of A', s is a reachable state
of S' such that s|p ∈ R_p(t|p) and s|n ∈ R_n(t), and
(t, π, t') is a step of A', then there exists a finite
sequence α of alternating states and actions of S',
beginning from s and ending at some state s', and
satisfying the following conditions:
(a) α|p is an execution sequence of S.
(b) every step of α is consistent with the transition restriction TR.
(c) s'|p ∈ R_p(t'|p).
(d) s'|n ∈ R_n(t').
(e) α has the same trace as (t, π, t').
Proof 4.1: We show that R_c satisfies the two conditions
of Definition 2.1:
1. Consider an initial state t of A'. By the fact that
R_p is a simulation, there must exist a state s|p ∈ R_p(t|p)
such that s|p ∈ start(S). By property 1,
there must exist a state s|n ∈ R_n(t) such that
s|n ∈ N0. Consider state s = (s|p, s|n); s
is in R_c(t) by definition. Also, s ∈
start(S) × N0; we use the fact
that start(S') = start(S) × N0 (Def. 3.1) to conclude
that s ∈ start(S').
2. First, notice that the assumptions on state s and
relation R_c imply that s ∈ R_c(t) and that properties
2c and 2d imply that s' ∈ R_c(t').
Next, we show that α is an execution sequence of
S' with the right trace. Indeed, every step of α is
consistent with trans(S) (by 2a) and is consistent
with TR (by 2b). Therefore, by the definition of
trans(S') (Def. 3.1), every step of α is consistent
with trans(S'). In other words, α is an execution
sequence of S' which starts with state s ∈ R_c(t), ends
with state s' ∈ R_c(t'), and has the same trace
as (t, π, t') (by 2e).
In practice, one would exploit this theorem as follows:
The simulation proof between the parent automata already
provides a corresponding execution sequence of
the parent specification for every step of the parent al-
gorithm. It is typically the case that the same execution
sequence, padded with new state variables, corresponds
to the same step at the child algorithm. Thus, conditions
2a, 2c, and 2e of Theorem 4.1 hold for this se-
quence. The only conditions that have to be checked
are 2b, and 2d, i.e., that every step of this execution
sequence is consistent with the transition restriction TR
placed on S by S' and that the values of the new state
variables of S' in the final state of this execution are
related to the post-state of the child algorithm.
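The residual obligation can be phrased as a small check. The Python sketch below takes, for one step of the child algorithm, the matching execution fragment supplied by the parent-level proof (padded with values for the new state components of S') and re-examines only conditions 2b and 2d; conditions 2a, 2c and 2e are inherited. The data layout and names are assumptions made for illustration, not constructs defined in the paper, and the usage data is hypothetical.

def check_child_step(child_step, start, fragment, TR, R_n):
    # child_step is a step (t, a, t') of the child algorithm A'.
    # start is the padded state (s, n) of S' matching the pre-state t, and
    # fragment is the matching execution fragment of S', given as padded
    # steps ((s, n), act, (s2, n2)); the parent-level states and actions come
    # from the parent proof, only the new components are chosen afresh.
    t, a, t2 = child_step
    # Condition 2b: every step of the fragment is consistent with the
    # transition restriction TR that S' places on S.
    for ((s, n), act, (s2, n2)) in fragment:
        if ((s, n), act, n2) not in TR:
            return False
    # Condition 2d: the new components of the fragment's final state must be
    # related, via R_n, to the post-state t' of the child algorithm.
    final = fragment[-1][2] if fragment else start
    return (t2, final[1]) in R_n

# Minimal usage with hypothetical data.
TR = {((0, False), 'set', True)}
R_n = {(1, True)}
assert check_child_step((0, 'set', 1), (0, False),
                        [((0, False), 'set', (0, True))], TR, R_n)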
Note that we can state a specialized version of Theorem
4.1 for the case of three automata, A, S, and S', by
letting A' be the same as A. This version would be useful
when we know that algorithm A simulates specification
S, and we would like to prove that A can also simulate a
child S' of S. The statement and the proof of this specialized
version are the same as those of Theorem 4.1,
except that there is no child A' of A: A must be
substituted for A' and t for t|p. In fact, given this specialized
version, Theorem 4.1 then follows from it as a
corollary because the relation {⟨t, s⟩ : s ∈ R_p(t|p)} is
a simulation relation from A' to S, and the specialized
theorem applies to automata A', S, and S'.
Interface extension is a formal construct for altering the
interface of an automaton and for extending it with new
forms of interaction.
For technical reasons, it is convenient to assume that the
interface of every automaton contains an empty action
ε and that its state-transition relation contains empty
transitions: i.e., if A is an automaton, then ε ∈ sig(A)
and (s, ε, s) ∈ trans(A) for every s ∈ states(A).
An interface extension of an automaton is defined using
an interface mapping function that translates the new
(child) interface to the original (parent) interface. New
actions added by the child are mapped to the empty
action ε at the parent. The child's states and start
states are the same as those of the parent. The state-transition
relation of the child consists of all the parent's transitions,
renamed according to the interface mapping. In
particular, the state-transition relation includes steps that do
not change state but involve the new actions (those that
map to ε).
Definition 5.1 Automaton A' is an interface-extension
of an automaton A if states(A') = states(A) and
start(A') = start(A), and if there exists
a function f, called interface-mapping 1 , such that:
1. f is a function from sig(A') to sig(A) with f(ε) = ε. Note
that f can map non-ε actions of A' to ε (these are
the new actions added by A') and is also allowed to
be many-to-one.
2. f preserves the classification of actions as "input",
"output", and "internal". That is, if π ∈ sig(A')
is an input action, and f(π) ≠ ε, then f(π) is also
an input action; likewise, for output and internal
actions.
3. trans(A') = { (s, π, s') : π ∈ sig(A') and (s, f(π), s') ∈ trans(A) }.
Notation 5.2 Let A' be an interface-extension of A
with an interface-mapping f.
If α is an execution sequence of A', then α|f denotes a
sequence obtained by replacing each action π in α
with f(π), and then collapsing every transition of the
form (s, ε, s) to s.
Likewise, if β is a trace of A', then β|f denotes a
sequence obtained by replacing each action π in β with
f(π), and by subsequently removing all the occurrences
of ε.
The following theorem formalizes the intuition that the
sets of executions and traces of an interface-extended
automaton are equivalent to the respective sets of
the parent automaton, modulo the interface-mapping.
The proof is straightforward by induction using Definition
5.1 and Notation 5.2.
Theorem 5.1 Let automaton A' be an interface extension
of A with an interface-mapping f.
Let α be a sequence of alternating states and actions of
A' and let β be a sequence of external actions of A'.
Then:
1. α ∈ execs(A') if and only if α|f ∈ execs(A).
2. β ∈ traces(A') if and only if β|f ∈ traces(A).
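An interface extension can likewise be computed from the parent automaton and the interface-mapping f. The Python sketch below uses the plain dict representation of the earlier sketches, models the empty action ε as None, and ignores the input/output/internal classification for brevity; all of these are our simplifications for illustration.

EPSILON = None  # stands for the empty action

def extend_interface(A, child_actions, f):
    # child_actions is the child's action set; f maps each child action to a
    # parent action or to EPSILON (the new actions).  States and start states
    # are unchanged; child steps are parent steps renamed through f, plus
    # stuttering steps for the actions mapped to EPSILON.
    states = A['start'] | {s for (s, _, _) in A['trans']} | {s2 for (_, _, s2) in A['trans']}
    trans = set()
    for a in child_actions:
        if f(a) is EPSILON:
            trans |= {(s, a, s) for s in states}
        else:
            trans |= {(s, a, s2) for (s, b, s2) in A['trans'] if b == f(a)}
    return {'start': set(A['start']), 'external': set(child_actions), 'trans': trans}

# Example: add a 'safe' notification mapped to EPSILON; 'out' is kept as is.
A = {'start': {0}, 'external': {'out'}, 'trans': {(0, 'out', 0)}}
child = extend_interface(A, {'out', 'safe'}, lambda a: EPSILON if a == 'safe' else a)
assert (0, 'safe', 0) in child['trans'] and (0, 'out', 0) in child['trans']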
When interface extension is followed by the specialization
modification, the resulting combination corresponds
to the notion of modification by subclassing for
extension [8]. The resulting child specializes the parent's
behavior and introduces new functionality. Specifically,
a specialization of an interface-extended automaton may
add transitions involving new state components and new
interface. The generalized definition of the parent-child
relationship is then as follows:
1 Interface-mapping is similar to strong correspondence of [38].
Definition 5.3 Automaton A' is a child of an automaton
A if A' is a specialization of an interface extension
of A.
Theorem 5.1 enables the use of the proof extension theorem
(Theorem 4.1) for this parent-child definition, once
the child's actions are translated to the parent's actions
using the interface mapping of Definition 5.1.
6 PRACTICAL EXPERIENCE WITH INCREMENTAL
PROOFS
In this section we describe our experience designing
and modeling a complex group communications service
(see [26]), and how the framework presented in this paper
was exploited. We then describe an interesting modeling
methodology that has evolved with our experience
in this project.
Group communication systems (GCSs) [3, 37] are powerful
building blocks that facilitate the development of
fault-tolerant distributed applications. GCSs typically
provide reliable multicast and group membership ser-
vices. The task of the membership service is to maintain
a listing of the currently active and connected processes
and to deliver this information to the application whenever
it changes. The output of the membership service
is called a view. The reliable multicast services deliver
messages to the current view members.
Traditionally, GCS developers have concentrated primarily
on making their systems useful for real-world
distributed applications such as data replication
(e.g., [16]), highly available servers (e.g., [5]) and collaborative
computing (e.g., [7]). Formal specifications
and correctness proofs were seldom provided. Many
suggested specifications were complicated and difficult
to understand, and some were shown to be ambiguous
in [4]. Only recently, the challenging task of specifying
the semantics and services of GCSs has become an
active research area.
The I/O automaton formalism has been recently exploited
for specifying and reasoning about GCSs (e.g.,
in [9, 11, 12, 15, 24, 28]). However, all of these suggested
I/O automaton-style specifications of GCSs used
a single abstract automaton to represent multiple properties
of the same system component and presented a
single algorithm automaton that implements all of these
properties. Thus, no means were provided for reasoning
about a subset of the properties, and it was often difficult
to follow which part of the algorithm implements
which part of the specification. Each of these papers
dealt with proving correctness of an individual service
layer and not with a full-fledged system.
In [26], we modeled a full-fledged example spanning
the entire virtually synchronous reliable group multi-cast
service. We provided specifications, formal algorithm
descriptions corresponding to our actual C++
implementation, and also simulation proofs from the algorithms
to the specifications. We employed a client-server
approach: we presented a virtually synchronous
group multicast client that interacts with an external
membership server. Our virtually synchronous group
multicast client was implemented using approximately
6000 lines of C++ code. The server [27] was developed
by another development team also using roughly 6000
lines of C++ code. Our group multicast service also
exploits a reliable multicast engine which was implemented
by a third team [34] using 2500 lines of C++
code.
We sought to model the new group multicast service in
a manner that would match the actual implementation
on one hand, and would allow us to verify that the algorithms
meet their specifications on the other hand. In
order to manage the complexity of the project at hand
we found a need for employing an object-oriented approach
that would allow for reuse of models and proofs,
and would also correspond to the implementation, which
in turn, would reuse code and data structures.
In [26], we used the I/O automaton formalism with the
inheritance-based incremental modification constructs
presented in this paper to specify the safety properties
of our group communication service. We specified four
abstract specification automata which capture different
GCS properties: We began by specifying a simple GCS
that provides reliable fifo multicast within views. We
next used the new inheritance-based modification construct
to specialize the specification to require also that
processes moving together from one view to another deliver
the same set of messages in the former. We then
specialized the specification again to also capture the
Self Delivery property which requires processes to deliver
their own messages. The fourth automaton specified
a stand-alone property (without inheritance) which
augments each view delivery with special information
called transitional set [37].
We then proceeded to formalize the algorithms implementing
these specifications. We first presented an algorithm
for within-view reliable fifo multicast and provided
a five page long formal simulation proof showing
that the algorithm implements the first specification.
Next, we presented a second algorithm as an extension
and a specialization of the first one. In the second al-
gorithm, we restricted the parent's behavior according
to the second specification, i.e., we added the restriction
that processes moving together from one view to
another deliver the same set of messages in the former.
Additionally, in the second algorithm, we extended the
service interface to convey transitional sets, and added
the new functionality for providing clients with transitional
sets as per the fourth specification. By exploiting
Theorem 4.1, we were able to prove that the second algorithm
implements the second specification (and therefore
also the first one) in under two pages without needing
to repeat the arguments made in the previous five
page proof. We separately proved that the algorithm
meets the fourth specification. Finally, we extended and
specialized the second algorithm to support the third
property. Again, we exploited Theorem 4.1 in order to
prove that the final algorithm meets the third specification
(and hence all four specifications) in a merely two
and a half page long proof.
We are currently continuing our work on group commu-
nication. We are incrementally extending the system
described in [26] with new services and semantics using
the same techniques.
A Modeling Methodology
Specialization does not allow children to introduce behaviors
that are not permitted by their parents and does
not allow them to change state variables of their par-
ents. However, when we modeled the algorithms in [26],
in one case we saw the need for a child algorithm to
modify a parent's variable. We dealt with this case by
introducing a certain level of non-determinism at the
parent, thereby allowing the child to resolve (specialize)
this nondeterminism later.
In particular, the algorithm that implemented the second
specification described above sometimes needed to
forward messages to other processes, although such forwarding
was not needed at the parent. The forwarded
messages would have to be stored at the same buffers
as other messages. However, these message buffers were
variables of the parent, so the child was not allowed
to modify them. We solved this problem by adding a
forwarding action which would forward arbitrary messages
to the parent automaton; the parent stored the
forwarded messages in the appropriate message buffers.
The child then restricted this arbitrary message forwarding
according to its algorithm.
We liken this methodology to the use of abstract methods
or pure virtual methods in object-oriented methodol-
ogy, since the non-determinism is left at the parent as a
"hook" for prospective children to specify any forwarding
policy they might need. In our experience, using
this methodology did not make the proofs more complicated.
7 DISCUSSION
We described a formal approach to incrementally defining
specifications and algorithms, and incorporated an
inheritance-based methodology for incrementally constructing
simulation proofs between algorithms and
specifications. This technique eliminates the need to
repeat arguments about the original system while proving
correctness of a new system.
We have successfully used our methodology in specifying
and proving correct a complex group communication
service [26]. We are planning to experiment with our
methodology in order to prove other complex systems.
We have presented the technique mathematically, in
terms of I/O automata. Furthermore, the formalism
presented in this paper and the syntax of incremental
modification is consistent with the continued evolution
of the IOA programming and modeling language. Since
IOA is being developed as a practical programming
framework for distributed systems, one of our goals is
to incorporate our inheritance-based modification technique
and approach to proof reuse into the IOA programming
language toolset [17, 18].
Future plans also include extending our proof-reuse
methodology to a construct that allows a child to modify
the state variables of its parent. Other future plans
include adding the ability to deal with multiple inher-
itance. In all of our work, we aim to formulate and
extend formal specification techniques that would be
useful for practical software development.
ACKNOWLEDGMENTS
We thank Paul Attie, Steve Garland, Victor Luchangco
and Jens Palsberg for their helpful comments and suggestions
--R
A Theory of Objects.
The existence of refinement mappings.
ACM 39(4)
On the formal specification of group membership services
Fault tolerant video-on-demand services
An object-oriented approach to verifying group communication systems
Middleware support for distributed multimedia and collaborative computing.
An Introduction to Object-Oriented Programming
An Adaptive Totally Ordered Multicast Protocol that Tolerates Partitions
A denotational semantics of inheritance and its correctness.
A dynamic primary configuration group communication service.
Data Refinement Model-Oriented Proof Methods and their Comparison
Specifying and using a partitionable group communication service
Fast replicated state machines over partitionable networks.
Foundations of Component Based Systems
IOA: A Language for Specifying
Optimizing Layered Communication Protocols.
Formal methods for developing high assurance computer systems: Working group report.
The generalized railroad crossing: A case study in formal verification of real-time systems
On the need for 'practical' formal methods.
Wrapper semantics of an object-oriented programming language with state
Specifications and proofs for ensemble layers.
Inheritance in Smalltalk-80: A denotational definition
A client-server approach to virtually synchronous group multicast: Specifications
A Client-Server Oriented Algorithm for Virtually Synchronous Group Membership in WANs
Multicast group communication as a base for a load-balancing replicated data service
Generalizing Abstraction Functions.
Distributed Algorithms.
Robust emulation of shared memory using dynamic quorum-acknowledged broadcasts
An introduction to Input/Output Automata
Objects as closures: Abstract semantics of object-oriented languages
Implementation of Reliable Datagram Service in the LAN environment.
Proving correctness with respect to nondeterministic safety specifications.
Modular reasoning in the presence of subclassing.
Group Communication Specifications: A Comprehensive Study.
I/O automaton model of operating system primitives.
--TR
Objects as closures: abstract semantics of object-oriented languages
Inheritance in smalltalk-80: a denotational definition
The existence of refinement mappings
Proving correctness with respect to nondeterministic safety specifications
A denotational semantics of inheritance and its correctness
Modular reasoning in the presence of subclassing
An introduction to object-oriented programming (2nd ed.)
Specifying and using a partitionable group communication service
A dynamic view-oriented group communication service
Eventually-serializable data services
Distributed Algorithms
A Theory of Objects
Data Refinement
Wrapper Semantics of an Object-Oriented Programming Language with State
Multicast Group Communication as a Base for a Load-Balancing Replicated Data Service
A Dynamic Primary Configuration Group Communication Service
Specifications and Proofs for Ensemble Layers
On the Need for Practical Formal Methods
Robust emulation of shared memory using dynamic quorum-acknowledged broadcasts
Fast Replicated State Machines Over Partitionable Networks
Formal Methods For Developing High Assurance Computer Systems
A Client-Server Approach to Virtually Synchronous Group Multicast
Optimizing Layered Communication Protocols
Fault Tolerant Video on Demand Services
A Client-Server Oriented Algorithm for Virtually Synchronous Group Membership in WANs
--CTR
Sarfraz Khurshid , Darko Marinov , Daniel Jackson, An analyzable annotation language, ACM SIGPLAN Notices, v.37 n.11, November 2002
Idit Keidar, Roger I. Khazan, Nancy Lynch, Alex Shvartsman, An inheritance-based technique for building simulation proofs incrementally, ACM Transactions on Software Engineering and Methodology (TOSEM), v.11 n.1, p.63-91, January 2002 | system modeling/verification;specialization by inheritance;simulation;refinement;interface extension
337371 | Towards the principled design of software engineering diagrams. | Diagrammatic specification, modelling and programming languages are increasingly prevalent in software engineering and, it is often claimed, provide natural representations which permit of intuitive reasoning. A desirable goal of software engineering is the rigorous justification of such reasoning, yet many formal accounts of diagrammatic languages confuse or destroy any natural reading of the diagrams. Hence they cannot be said to be intuitive. The answer, we feel, is to examine seriously the meaning and accuracy of the terms natural and intuitive in this context. This paper highlights, and illustrates by means of examples taken from industrial practice, an ongoing research theme of the authors. We take a deeper and more cognitively informed consideration of diagrams which leads us to a more natural formal underpinning that permits (i) the formal justification of informal intuitive arguments, without placing the onus of formality upon the engineer constructing the argument; and (ii) a principled approach to the identification of intuitive (and counter-intuitive) features of diagrammatic languages. | INTRODUCTION
Diagrammatic representations - and attempts to formalise
them - are an area of increasing attention
in modern software engineering. Visual specification
and modelling languages, most notably UML [21], and
domain-specific programming languages [15, 24] typically
have a strong diagrammatic flavour; software
architecture description languages (ADLs) and diagrams
[1, 2, 25] further add claims of being "natural"
and "intuitive" ways of thinking about (software) sys-
tems. Many such grand claims are made that diagrams
are "how we naturally think about systems". Yet, even
if we accept this as being at least partly true (and it is
clearly a debatable issue) it begs the questions of: what
kinds of diagrams are intuitive; why are they the natural
way to think about systems; and - most importantly for
the purposes of this paper - do the formal accounts typically
provided for such diagrammatic languages succeed
in accurately capturing whatever it is that is natural
and intuitive about such representations?
Taking seriously the assertion that diagrams are useful
because they are intuitive (well matched to meaning)
and natural (directly capture this well matched mean-
ing), then for any formal underpinning of diagrams in
software engineering to be truly useful it must therefore
reflect these intuitive and natural aspects of diagrams.
In this paper we demonstrate that such an approach is
feasible as well as desirable.
We commence in the next section with a review of the
critical issues in a theory of diagrammatic representa-
tion. In Section 3 we present a typical diagrammatic
software language, taken from industrial embedded control
software, to motivate and illustrate a more detailed
exploration of our concepts of natural and intuitive.
Section 4 then presents a simple formalisation of this
language. The use of formal methods is often advised
or required in the provision of evidence of desirable system
properties, yet in practice the application of such
methods is undeniably difficult. In Section 5 we illustrate
how our approach provides a formal underpinning
of natural (previously informal) reasoning, in which the
onus of formality lies not with the engineer constructing
the argument, but with the designer of the original no-
tation. Section 6 illustrates that our general approach
also naturally permits the identification of questionable
or counter-intuitive features in diagrams. We conclude
with a summary of the major issues raised and an indication
of directions for future research.
Studies such as [5, 30, 33] have indicated that the
most effective representations are those which are well
matched to what they represent, in the context of particular
reasoning tasks. For the purposes of this paper
we assert that an "intuitive" representation is one which
is well matched. Furthermore, we assert that whether
a representation is "natural" concerns how it achieves
its intuitive matching; and (certain classes of) diagrammatic
representations are particularly good at naturally
matching their intuitive interpretations. Clearly, these
two assertions beg the questions of how such natural
matching are achieved and what are the intuitive meanings
which an effective representation matches.
Natural Representation in Diagrams
Previous studies have typically examined two differing
dimensions through which diagrammatic representations
"naturally" embody semantic information. Firstly
logical analyses such as [3, 12, 26, 27, 28] have examined
the inherent constraints of diagrams (topological, geo-
metric, spatial and so forth) to explicate their computational
benefits. The second dimension which has been
studied (particularly from an HCI perspective) concerns
features and properties which impact upon the cognition
of the user [4, 7, 16, 19, 29].
Recently a careful examination [9] of analogies (and dis-
analogies) between typical text-based languages and diagrammatic
languages has sought to unify the above
two dimensions of "naturalness". The examination was
quite revealing about both similarities and di#erences
in the textual and diagrammatic cases. One primary
di#erence with diagrams is that they may capture semantic
information in a very direct way. That is to
say, intrinsic features in the diagram, such as spatial
layout, directly capture aspects of the meaning of the
diagram. An understanding of how diagrams naturally
capture such aspects permits us next to consider what
specific information should be captured for a diagram
to be truly intuitive.
Intuitive Meaning in Diagrams
The decomposition in [9] of issues in how diagrams capture
information permitted the identification, in a subsequent
study [11], of the fundamental issues relating
to the e#ectiveness of visual and diagrammatic representations
for communication and reasoning tasks. As
indicated above, an e#ective representation is one which
is well matched to what it represents. That is, an in-
tuitive, or well matched, representation is one which
clearly captures the key features of the represented artifact
and furthermore simplifies various desired reasoning
tasks.
It has been demonstrated in [7, 20, 22] that pragmatic
features of diagrammatic representations (termed "sec-
ondary notations" by Green [6]) significantly influence
their interpretation. A particular concern of the exploration
of [11] was the importance of accounting for such
pragmatic aspects of diagrams in considering when they
are well matched.
Figure 1: Example SFC diagram (lift controller). (Element
labels in the figure include Start, Wait, Move, Brake, Halt,
Alarm, Checks, ChkFault, Ready, level=FloorCall, Stopped,
Fault and true.)
A concrete application of our work is to the formalisation
of diagrammatic (graph-based) languages for industrial
embedded control software. This commonly occurring
class of systems spans many different domains
(e.g. automotive, process control, ASIC design, mobile
telephony) and is a very common component of critical
systems. We have formalised various sub-languages
of Programmable Logic Controllers (PLC's [14, 17]) and
concentrate here on the PLC sub-language of Sequential
Function Charts (SFCs), which present a control-flow
view of an embedded controller.
The diagrammatic representation of SFCs is illustrated
in Figure 1 and consists of elements of two distinct
kinds: rectangular boxes called steps and thick horizontal
lines called transitions. In such diagrams, no elements
of the same kind may be linked directly. Each
step is labelled by an identifier and may have an associated
action, and each transition carries a boolean
condition. The SFC of Figure 1 is a simplified lift controller
(an "elevator" controller, for American readers), adapted
from a teaching example of "good" SFC design from [17].
SFCs exhibit a rich control-flow behaviour. At any given
time, each step can be either active or inactive and the
set of all active steps defines the current state of the
system. A step remains active until one of its successor
transition conditions evaluates to "true", thereby passing
control to the step(s) targeted by the links emanating
from that transition. The double horizontal lines
introduce and conclude sections of the diagram which
execute concurrently with each other.
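To make this execution rule concrete, the following Python sketch represents the transitions of an SFC by their source steps, target steps and condition, and advances the set of active steps by firing every transition whose sources are all active and whose condition holds. The encoding, the policy of firing all enabled transitions at once, the reading of conditions as predicates over signals, and the wiring of the lift example are all our assumptions for illustration, not the PLC standard's exact semantics.

from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

@dataclass(frozen=True)
class Transition:
    sources: FrozenSet[str]             # steps immediately above the bar
    targets: FrozenSet[str]             # steps immediately below the bar
    condition: Callable[[Dict], bool]   # boolean condition over external signals

def step(active: FrozenSet[str], transitions: Tuple[Transition, ...],
         signals: Dict) -> FrozenSet[str]:
    # One execution step: every enabled transition fires, deactivating its
    # source steps and activating its target steps.
    fired = [t for t in transitions
             if t.sources <= active and t.condition(signals)]
    deactivated = frozenset().union(*(t.sources for t in fired))
    activated = frozenset().union(*(t.targets for t in fired))
    return (active - deactivated) | activated

# A fragment of the lift controller's main loop, as we read it off Figure 1.
T1 = Transition(frozenset({"Move"}), frozenset({"Brake"}),
                lambda sig: sig["level"] == sig["FloorCall"])
T2 = Transition(frozenset({"Brake"}), frozenset({"Wait"}),
                lambda sig: sig["Stopped"])

active = frozenset({"Move"})
active = step(active, (T1, T2), {"level": 3, "FloorCall": 3, "Stopped": False})
assert active == frozenset({"Brake"})
active = step(active, (T1, T2), {"level": 3, "FloorCall": 3, "Stopped": True})
assert active == frozenset({"Wait"})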
Diagrams: Direct Representations
The term "visual representation" has, at times, been
taken to be synonymous with "diagram". However, consider
a textual representation, for example a propositional
logic sentence such as "p and (not q or r)". This,
if we are to be precise, is also an example of a visual
representation. After all, the symbols in this sentence
are expressed to the reader's visual sense (as ink marks
on a page), not to their senses of touch or hearing, as
would be the case with braille or speech for example.
There is, however, a significant di#erence between certain
of the visual symbols of Figure 1 and those of this
propositional sentence. The di#erence is that certain of
the symbols in Figure 1 exhibit intrinsic properties, and
these properties directly correspond to properties in the
represented domain.
Consider that the reader of an SFC diagram can instantly
distinguish step and transition elements. They
are represented by quite distinct visual tokens (rectan-
gles and lines respectively) which clearly belong to different
categories, in a manner obviously not matched by
the textual tokens of the above propositional logic sen-
tence. Furthermore, the SFC of Figure 1 follows a convention
common to many such graph-based notations
by laying out sequences of steps in a top-down fashion
(leaving aside the issue of loops for the moment). Thus
a reader can also instantly see that the step "Brake" will
be preceded by "Wait" and "Move" steps, as these steps
appear above it in the diagram. These aspects of SFC
diagrams are both examples of the direct representation
of semantic information.
Thus, in general certain diagrammatic notations and relations
may be directly semantically interpreted. This
directness may be exploited by the semantics of diagrams
in a systematic way. Such "systematicity" is
not exclusively the preserve of diagrammatic representa-
tions, but - with their potential for direct interpretation
diagrams have a head start over sentential representations
in the systematicity stakes. However, to understand
what makes diagrams effective, we must consider
their interpretation by humans more generally. Studies
have shown that what we describe next as pragmatic aspects
of diagrams, play a significant role in typical uses
of successful diagrammatic languages.
Pragmatics in Diagrams
In linguistic theories of human communication, developed
initially for written text or spoken dialogues, theories
of "pragmatics" seek to explain how conventions
and patterns of language use carry information over and
above the literal truth value of sentences. For example,
in the discourse:
1. (a) The lone ranger jumped on his horse and rode
into the sunset.
(b) The lone ranger rode into the sunset and
jumped on his horse.
(1a)'s implicature is that the jump happened first, followed
by the riding. By contrast, (1b)'s implicature
is that riding preceded jumping. In both (1a) and
(1b), implicatures go beyond the literal truth conditional
meaning. For instance, all that matters for the
truth of a complex sentence of the form P and Q is
that both P and Q be true; the order of mention of
the components is irrelevant. Pragmatics, thus, helps
to bridge the gap between truth conditions and "real"
meaning. This concept applies equally well to the use
of diagrammatic languages in practice. Indeed, there is
a recent history of work which draws parallels between
pragmatic phenomena which occur in natural language,
and for which there are established theories, and phenomena
occurring in diagrammatic languages [9, 18, 20].
Studies of digital electronics engineers using CAD systems
for designing the layout of computer circuits
demonstrated that the most significant di#erence between
novices and experts is in the use of layout to
capture domain information [23]. In such circuit diagrams
the layout of components is not specified as being
semantically significant. Nevertheless, experienced
designers exploit layout to carry important information
by grouping together components which are functionally
related. By contrast, certain diagrams produced by
novices were considered poor because they either failed
to use layout or, in particularly "awful" examples, were
especially confusing through their mis-use of the common
layout conventions adopted by the experienced en-
gineers. The correct use of such conventions is thus seen
as a significant characteristic distinguishing expert from
novice users. These conventions, termed "secondary no-
tations" in [23], are shown in [20] to correspond directly
with the graphical pragmatics of [18].
A straightforward example of the use of such pragmatic
features may be readily observed in the SFC diagram
of Figure 1. While this diagram is a simplified version
of the SFC from [17], nevertheless we have retained the
layout of that original SFC, and note that it carries important
information concerning the application being
represented. The main body of the SFC of Figure 1
is conceptually partitioned into the three regions illustrated
in the following outline:
(Outline showing three regions labelled N, A and F.)
Region N is concerned with normal operation, A is an
alarm-raising component and F performs fault detection
(e.g. action "Checks" monitors the state of the lift
and raises the boolean signal "Fault" whenever a fault
occurs).
More recent studies of the users of various other diagrammatic
languages, notably visual programming lan-
guages, have highlighted similar usage of graphical pragmatics
[22]. A major conclusion of this collection of
studies is that the correct use of pragmatic features,
such as layout in graph-based notations, is a significant
contributory factor in the comprehensibility, and hence
usability, of these representations.
Diagrammatic Reasoning: SFC Example
One of the major tasks that SFC notations intend to
support is the inference (by system designers) of which
sequences of states may the system exhibit. Desirable
such sequences are formulated in terms of system prop-
erties, prominent among which are safety properties.
For instance, appropriate for our lift controller is property
Safe expressed as: "Assuming no faults, the lift
always stops before the next call is attended."
Example 1 The diagram of Figure 1 exhibits property
Safe because: (1) Once the main loop is entered, the
assumption of fault-freeness implies that control is retained
within the loop; and (2) the loop forms a single
path from step "Move" to itself that includes condition
"Stopped".
Crucial to part (2) of this argument is the observation
that paths in the diagram correspond semantically to
temporal orderings of events: if only a single path exists
from any current step A to step B and condition t
appears along the path, the next activation of B must
be preceded by an occurrence of t.
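The structural fact appealed to here can be checked directly on the graph underlying the diagram. The Python sketch below asks, by a simple search, whether a step can become active again without a given condition occurring on the way; for part (2) of Example 1 one asks this of step Move and condition Stopped within the loop. The graph encoding (conditions as edge labels between steps) and the loop's wiring are our own reading of Figure 1, not data taken from the paper.

def can_return_without(graph, step_name, forbidden):
    # True if, starting from step_name, that step can become active again
    # without the condition `forbidden` ever occurring along the way.
    # graph maps each step to a list of (condition, next_step) pairs.
    frontier = [nxt for (cond, nxt) in graph.get(step_name, []) if cond != forbidden]
    seen = set(frontier)
    while frontier:
        node = frontier.pop()
        if node == step_name:
            return True
        for cond, nxt in graph.get(node, []):
            if cond != forbidden and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# The main loop of the lift controller, as we read it off Figure 1.
LOOP = {
    "Wait":  [("FloorCall", "Move")],
    "Move":  [("level=FloorCall", "Brake")],
    "Brake": [("Stopped", "Wait")],
}
# Part (2) of Example 1: within the loop, Move cannot be re-activated
# without Stopped occurring first.
assert not can_return_without(LOOP, "Move", "Stopped")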
A desirable software engineering goal is to support the
formalisation of informal arguments, such as the above.
We argue that for a formalisation to be effective, as with
an effective diagrammatic representation, it should accurately
structure those aspects of the represented artifact
which are pertinent to the required reasoning tasks.
For example, essential to the informal argument in the
preceding example is the ability to focus precisely on
the part of the diagram which is responsible for property
Safe. That is, focusing on the loop and excluding
the alarm-raising and fault-checking parts of the dia-
gram. A formalisation which does not readily permit
a similar structuring will clearly be less effective than
one which does.
Consider that there are numerous, less direct ways of
capturing the semantics of SFCs; a common one being to
enumerate all possible states of the SFC in a transition
system [31].
Example 2 A transition system modelling the behaviour
of the lift controller has as states all reachable
sets of steps. Its transitions are possible combinations
(i.e. sets) of conditions. A small fragment of this transition
system (with step names truncated to their initials) contains
states such as {S}, {W,CF}, {M,C}, {M,CF}, {B,CF} and
{B,C}, linked by transitions labelled with sets of conditions
such as {Ready}, {Fault}, {Stopped} and {level=FloorCall, Fault}.
This resulting transition system is typical of a formalisation
which obscures the structure necessary to the
informal argument of Example 1. This is because paths
in the transition system result from the interweaving
of events belonging to several concurrent components
in the SFC. In contrast to the informal argument of
Example 1, when arguing properties such as Safe on
the model of Example 2 it is generally hard to exclude
behaviour originating in parts of the system which are
otherwise unrelated to the property in question. While
it could be argued that this is hardly a problem for small
SFCs, the size of a transition system grows rapidly with
the number of concurrent components in the SFC.
Supporting Intuitive Reasoning
A significant determiner of what makes a particular representation
e#ective is that it should simplify various
reasoning tasks. Any formalisation of the semantics of
such a representation must therefore ensure that such
reasoning tasks are equally easy in it. One benefit that
certain diagrammatic representations o#er to support
this is the potential to directly capture pertinent aspects
of the represented artifact (whether this be a concrete
artifact or some abstract concept). As we have seen,
this "directness" is typically an intrinsic feature of di-
agrams; determining which aspects of the represented
artifact are "pertinent" requires that we consider pragmatic
as well as syntactic aspects of representations.
For example, the operation of the lift controller in our
example is conceptually partitioned into three modes:
of normal, alarm-raising and fault checking behaviour.
The layout of the SFC diagram of Figure 1 is such that,
for any given step or transition, membership of one these
modes is represented as membership of an identifiable
region of the graph (one of the regions N , A or F above).
Thus, a conceptual aspect of the artifact (membership
of some behavioural mode) is directly captured by a representing
relation in the diagram with matching logical
properties (membership of a spatial region in the plane).
Note that, in this case as with the expert designers of
CAD diagrams and visual programs studied in [23, 22],
it is the pragmatic features of the diagram (layout being
chosen so as to suggest conceptual regions) which
are exploited to carry this information. Indeed, in the
original SFC diagram from [17], of which our example
is a simplification, these regions were strikingly well de-
lineated. We outline next a formal description of SFC
diagrams which reflects both their direct and pragmatic
features, leading to a formal underpinning of intuitive
arguments such as the one for the Safe property illustrated
above.
Let S and T be the sets of all step identifiers and transition
conditions respectively. One way of describing the
structure (and layout) of SFC diagrams is to construct
an algebra of "diagram expressions", in which each expression
denotes a particular way of decomposing a diagram
[10].
To begin with, let the atomic diagrams for a single step
and for a single transition be denoted s and t respectively
(where s ∈ S and t ∈ T). Also, write two further symbols
for the diverging and converging branching elements.
In introducing the operations of the algebra, it is facilitative
to visualise each diagram expression D as an
"abstract" diagram with connections entering at its top
and emanating from its bottom. The composition D ; D′ is
defined whenever the number of connections emanating
from the bottom of D equals that of connections entering
D′, while D # D′ juxtaposes D and D′ with D
on the left. Two further operations are defined: a loop
construct on D, and the enclosure [D].
In writing SFC expressions, we shall assume the loop
construct to have higher precedence than ; and # to
have the least precedence.
The main body of our example may now be expressed
as LiftBody = [(N ; A) # F], where:
N = … level. ; Brake; …
A = Fault ; Alarm …
F = …
and 1 denotes a vertical line. Examining the original
SFC, one can now see how the expressions N, A and F
correspond to the regions identified at the end of Section
3. Finally, the expression for the entire controller
is: …
Algebraic methods of diagram description, such as the
one just outlined, may be called analytic (a term originally
due to McCarthy) as they emphasise a decompositional
view of diagrams. When carefully designed,
such descriptions are generally simpler (more abstract)
than synthetic ones, e.g. those based on graph gram-
mars, and naturally lend themselves to the analysis of
diagrams from a semantic or logical standpoint.
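As an illustration of the analytic view (a sketch only; the operator set is simplified and the names are ours, not those of the formalisation above), such diagram expressions can be represented as a small abstract syntax over step and transition atoms, which is then straightforward to decompose and analyse programmatically.

from dataclasses import dataclass

@dataclass(frozen=True)
class Step:                 # atomic step diagram
    name: str

@dataclass(frozen=True)
class Trans:                # atomic transition diagram
    cond: str

@dataclass(frozen=True)
class Seq:                  # D ; D' (vertical composition)
    top: object
    bottom: object

@dataclass(frozen=True)
class Par:                  # D # D' (horizontal juxtaposition)
    left: object
    right: object

def atoms(d):
    # Decompose an expression into its step and transition atoms.
    if isinstance(d, (Step, Trans)):
        return {d}
    if isinstance(d, Seq):
        return atoms(d.top) | atoms(d.bottom)
    if isinstance(d, Par):
        return atoms(d.left) | atoms(d.right)
    raise TypeError(f"unknown diagram expression: {d!r}")

# An expression in the style of the lift-controller example:
alarm = Seq(Trans("Fault"), Step("Alarm"))
body = Par(Seq(Step("NormalLoop"), alarm), Step("FaultCheck"))
print(len(atoms(body)))     # -> 4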
Almost every non-trivial diagram may be decomposed
in a variety of ways, thus allowing multiple semantic
interpretations and also multiple routes for constructing
reasoning arguments based on the diagram. Some
semantic and logical analysis of software diagrams is
therefore required, not only to ensure consistency of
interpretations, but also to validate many commonly
occurring reasoning arguments. Viewing, as we do,
formalisation as a means to the validation of informal
(or semi-formal) reasoning practices emphasises formal
analysis as a tool of the notation designer, not as an
imposition on the user.
The provision of rigorous evidence of desirable properties
in software systems is a required, but highly costly
activity in many domains; especially those in which
products are subject to regulation or certification (e.g.
safety-critical systems). While the use of formal methods
in supplying such evidence is often recommended or
mandated (e.g. SEMSPLC guidelines [13], MOD guidelines
[32]), their practical application remains undeniably
difficult. This is largely because most formal methods
rely on intimate knowledge and explicit manipulation
of some underlying, generic model and are typically
less concerned with user-oriented representations. Thus,
given a system expressed as diagram d and a property
p, traditional approaches typically consist in: (1) obtaining
some behavioural model M(d) of d, such as a
function, a transition system, etc.; (2) formalising p as
a formula #(p) in some suitable logic; and (3) verifying
whether M(d) |= #(p), i.e. whether the model satisfies
the formula.
By contrast, many informal or semi-formal arguments
appeal directly to domain-specific features of a system's
representation to gain remarkable simplicity. Neverthe-
less, the elevation of such arguments to a level admissible
as rigorous evidence requires their formal underpinning
and justification by logical means.
Roughly speaking, our approach attempts to substitute
structural (i.e. algebraic) models of diagrams for behavioural
models in step (1) above. If, for instance,
D(d) is an algebraic expression denoting an (actual)
SFC diagram d, and formula φ(p) expresses a property,
the reasoning problem (step (3) above) is reformulated
as D(d) |= φ(p). In terms of our example property Safe
and the main body of our lift controller:
LiftBody |= φ(Safe) .    (1)
Continuing with our illustration, property Safe may be
expressed as a temporal formula [31]. Let, for instance,
φ(Safe) = A(¬Fault) ⇒ A(Stopped B …),
where A(φ) is interpreted as "always φ", φ1 B φ2 as "φ1
before φ2", and ⇒ as "implies". What is important here
is not the precise definitions of A and B in our crude
formalisation. Rather, we aim to illustrate the overall
structure of the formula, ψ ⇒ φ, where ψ expresses an
assumption about the computation (here "always not
Fault") and φ expresses a commitment of the system.
The implicit inferential "short-cuts" in the informal argument
of Example 1 may now be formalised, and thus
justified, by means of inference rules. For instance, one
rule views concurrency as the conjunction of the com-
ponents' respective properties:
Rule 1: If D |= φ and D′ |= φ′, then
[D # D′] |= φ ∧ φ′.
Another rule eliminates part of a diagram which is inaccessible
under given assumptions:
Rule 2: If D |= A(¬t) ⇒ φ and D′ is entered only via t, then D ; (t ; D′) |= A(¬t) ⇒ φ.
Starting with goal (1), equivalently written as
[(N ; A) # F] |= φ(Safe) ∧ true,
and applying Rule 1 yields sub-goals N ; A |= φ(Safe)
and F |= true, the second of which is trivial. By Rule
2, applied to the definitions of A and φ(Safe), we now
discard the alarm-raising component to concentrate on
the part precisely responsible for our property: N |=
φ(Safe).
The soundness of rules such as those above must eventually
be established wrt. some behavioural model. This
obligation, however, lies not with the user, but once-
and-for-all with the developers of the diagrammatic notation.
6 DISTURBING DIAGRAM FEATURES
A further benefit of our approach relates to the over-all
design of diagrammatic languages. The rigorous examination
of how the concepts of "natural" and "intu-
itive" relate to diagrams, paves the way to the principled
identification of specific features which contravene
these concepts. Thus far, we have concentrated on the
core features of SFC diagrams. In pursuit of our goal
to demonstrate formal analysis as a tool in the design
of software engineering diagrams, we shall briefly introduce
an extra feature, subject it to semantic analysis
and examine the insights resulting from this analysis.
The semantic model associated with SFCs is that of
(labelled) Petri nets, a concept widely used and well
understood in the domain, and the basic SFC notation
is highly suggestive of this association. Unfortunately,
the definition of SFCs in [14] abounds with extensions
which forcefully violate this analogy. One such exten-
sion, called action qualification, permits certain actions
to be "set active" by some step and continue to be invoked
following the step's deactivation. Such actions
will remain active either indefinitely or until they are
explicitly "reset" by a step elsewhere in the diagram.
This mechanism of action-step association is visualised
by attaching an oblong box, labelled with a
qualifier Q, to the step. Of the many qualifiers permit-
ted, here we look at "N" (usually omitted and standing
for "normal") and "S", "R" (standing for "set" and "re-
set"). An example of an SFC diagram making use of this
feature is given in Figure 2.
To describe both SFC diagrams and nets in a uniform
way we shall, following [8], think of each as a collection
of "objects" and relations among them. (Notation:
Given relation R we write (x, y, z) ∈ R if objects x, y
and z are related via R.)
Under this view, one abstraction of the SFC diagram
in Fig. 2 has objects s1, …, s5, A, B, C, … (corresponding
to the steps, actions and transitions). The
relations in this abstraction capture links and action-
step associations. Relation L is such that (x, y) ∈ L
Figure 2: SFC diagram and its corresponding net.
iff a directed link exists from x to y in the diagram.
For each type (S, R or N) of qualifier there is a binary
relation Q such that (x, a) ∈ Q iff x is a step associated
with a via Q. Let us call this abstraction D,
as it captures all that we regard as essential about the
diagram.
The labelled-net semantics of our example diagram is
given (also diagrammatically!) in Fig. 2. Each place in
the net is labelled with zero or more actions and the
abstraction P associated with the net has objects for
the places, transitions and labels. Its relations are F,
corresponding to the "flow" of the net (i.e. (x, y) ∈ F
iff a single directed link exists from object x to object
y), and a labelling relation recording that
place p is labelled with action a.
We now proceed to evaluate the degree of correspondence
between the abstractions modelling the diagram
and its semantics. A mapping m from abstraction W to
abstraction W′ maps the objects of W to those of W′
and also the relations in W to relations in W′. Such
a mapping is called a homomorphism if for every relation
R which holds between objects x, y, … in W, the
corresponding relation m(R) in W′ holds between the
corresponding objects m(x), m(y), …
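To illustrate the definition (a sketch under simplified assumptions; the relation and object names are hypothetical), an abstraction can be coded as a map from relation names to sets of tuples, and a candidate mapping checked for the homomorphism property as follows.

def is_homomorphism(obj_map, rel_map, w, w_prime):
    # w and w_prime map relation names to sets of tuples of objects.
    # The mapping is a homomorphism from W to W' if the image of every
    # tuple of every relation R in W lies in the corresponding relation
    # rel_map[R] of W'.
    for rel_name, tuples in w.items():
        target = w_prime.get(rel_map[rel_name], set())
        for tup in tuples:
            if tuple(obj_map[x] for x in tup) not in target:
                return False
    return True

# Hypothetical fragments: a link relation L in W, a flow relation F in W'.
W  = {"L": {("s1", "t1"), ("t1", "s2")}}
Wp = {"F": {("p1", "tr1"), ("tr1", "p2")}}
print(is_homomorphism({"s1": "p1", "t1": "tr1", "s2": "p2"},
                      {"L": "F"}, W, Wp))        # -> True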
Consider now a situation where an abstraction W adequately
captures everything which is deemed relevant in
a diagram, whereas W′ captures the relevant aspects of
the artifact represented by the diagram. What does the
existence of a homomorphism h from W′ to W signify?
In part, it tells us that every relation in the artifact
which we regard as important has a corresponding relation
in the diagram. Moreover, it tells us that every
(first-order) logical statement about the relations in W′
translates to a statement about W which holds if the
original does in W′.
Returning to our running example, let us derive from D
a slightly higher abstraction, having the same objects
and relation L as D but only one additional relation,
G. This abstraction, which we call U, provides
partial information about a user's interpretation of the
SFC diagram. In particular, G contains exactly those
action-step associations which are explicitly guaranteed
in the diagram, and thus hold in all semantic interpretations
which respect the meaning of qualifiers. Thus,
for example, (s4, A) ∉ G, as this association depends on
the history of the computation leading to s4.
One now observes that there can be no homomorphism
from P to U, as every candidate should map both p4
and p5 to s4, yet the labelling of p5 with A has no counterpart (s4, A) in G. We
are forced to conclude that an important semantic re-
lation, that of which actions are invoked in each mode
of the system, is not systematically visualised. This introduces
complications in reasoning and suggests that
the introduction of the "S" and "R" qualifiers poorly
integrates with the core notation.
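The failure of the candidate mappings can be replayed with the homomorphism-checking sketch given earlier (again with hypothetical relation names, since the original ones are not fixed here): once both p4 and p5 must map to s4, the net's labelling of p5 with A has no counterpart in G, so the check fails.

P = {"M": {("p5", "A")}}          # place p5 is labelled with action A
U = {"G": set()}                  # no guaranteed association for s4
obj_map = {"p5": "s4", "A": "A"}
print(is_homomorphism(obj_map, {"M": "G"}, P, U))  # -> False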
Using an elementary semantic analysis, we have thus
shown how some questionably convenient features of
IEC SFCs can introduce a seriously dangerous mismatch
between a user's intuitive interpretation of the graphical
representation and its actual semantics. In the presence
of such features, knowledge of the global structure of
the SFC may be required before overall behaviour can
be inferred from the behaviours of the currently active
steps. Such knowledge may be extremely hard to establish
accurately about large diagrams.
7 CONCLUSIONS AND FUTURE WORK
This paper has presented an overview of an ongoing re-search
theme of the authors. The use of diagrams and
diagrammatic languages has become increasingly prevalent
in software engineering, with claims of their natural
"intuitiveness" being typical. We have summarised the
primary findings of a deeper and more cognitively informed
examination of these proclaimed benefits. These
findings indicate that a more considered approach than
is common is required, if the formalisation of such diagrammatic
languages is to successfully provide equally
natural and intuitive support for their practical use.
The benefits of such a more considered approach have
been illustrated through examination of the industry
standard language of SFCs (Sequential Function
Charts) for embedded controllers. Consideration of
which features of this language are natural and intuitive
guided the design of a simple formalisation which,
as we have shown, supports the validation of typical informal
arguments concerning desirable system proper-
ties. Furthermore, we have argued that such validation,
and its formal details, is best seen as the responsibility
of the designer of the language, rather than of the user
(engineer) who constructs the informal argument.
At present we are exploring and extending our approach
in the following areas:
. the development of a more concrete framework for
exploring the connection between semantics of diagrammatic
languages and their form: both the
natural encoding of semantic properties by features
of the language, and the implications that features
carry of intuitive semantic meaning;
. the broader application of our approach: to other
PLC languages, to other domain specific languages
and - more generally - to specification and modelling
languages;
. the exploration of tool support: both specifically
for the design of diagrams in specific languages
(such as SFCs) and more generally for the design
and exploration of novel diagrammatic languages.
A further benefit of our approach is that it paves the
way towards a principled exploration of specific features
of diagrammatic languages, permitting ready identification
of potentially dangerous, misleading, or otherwise
counter-intuitive features. As diagrammatic languages
become more commonplace in software engineering -
whether for design, modelling, or programming - the
need for a sound basis for the design of such languages
becomes ever more pressing. A basis which will serve
to guide the design of diagrammatic languages which
benefit from rigorous formality, without sacrificing their
intrinsic intuitiveness and appeal. The work reviewed
in this paper provides a foundation upon which such a
basis may be developed.
--R
Formalizing style to understand descriptions of software architecture
A formal basis for architectural connection.
logic.
VPLs and novice program comprehension: How do different languages compare? In 15th IEEE Symposium on Visual Languages (VL'99)
Cognitive dimensions of notations.
Usability analysis of visual programming environments: a 'cognitive dimensions' framework.
On the isomorphism
Theories of diagrammatic reasoning: distinguishing component problems.
Formalising pragmatic features of graph-based notations
Towards a model theory of Venn diagrams.
Institute of Electrical Engineers.
IEC 1131-3: Programmable Controllers - Part3: Programming Languages
Why a diagram is (sometimes) worth ten thousand words.
Programming Industrial Control Systems Using IEC 1131-3
Avoiding unwanted conversational implicature in text and graphics.
Visual language theory: Towards a human-computer interaction perspective
Grice for graphics: pragmatic implicature in network diagrams.
OMG ad/99-06-08 (Part
Why looking isn't always seeing: Readership skills and graphical programming.
Requirements of graphical notations for professional users: electronics CAD systems as a case study.
Formulations and formalisms in software architecture.
Operational constraints in diagrammatic reasoning.
Derivative meaning in graphical representations.
A cognitive theory of graphical and linguistic reasoning: logic and implementation
Image and language in human reasoning: a syllogistic illustration.
UK Ministry of Defence.
Representations in distributed cognitive tasks.
--TR
Cognitive dimensions of notations
Modal and temporal logics
Why looking isn't always seeing
Formalizing style to understand descriptions of software architecture
A formal basis for architectural connection
Operational constraints in diagrammatic reasoning
Situation-theoretic account of valid reasoning with Venn diagrams
Towards a model theory of Venn diagrams
Visual language theory
On the isomorphism, or lack of it, of representations
Diagrammatic Reasoning
Theories of Diagrammatic Reasoning
Derivative Meaning in Graphical Representations
Formalizing Pragmatic Features of Graph-Based Notations
VPLs and Novice Program Comprehension
--CTR
Helen C. Purchase , Ray Welland , Matthew McGill , Linda Colpoys, Comprehension of diagram syntax: an empirical study of entity relationship notations, International Journal of Human-Computer Studies, v.61 n.2, p.187-203, August 2004
Helen C. Purchase , Linda Colpoys , Matthew McGill , David Carrington , Carol Britton, UML class diagram syntax: an empirical study of comprehension, Proceedings of the 2001 Asia-Pacific symposium on Information visualisation, p.113-120, December 01, 2001, Sydney, Australia | software diagrams;programmable logic controllers;diagrammatic languages |
337455 | An approach to architectural analysis of product lines. | This paper addresses the issue of how to perform architectural analysis on an existing product line architecture. The contribution of the paper is to identify and demonstrate a repeatable product line architecture analysis process. The approach defines a good product line architecture in terms of those quality attributes required by the particular product line under development. It then analyzes the architecture against these criteria by both manual and tool-supported methods. The phased approach described in this paper provides a structured analysis of an existing product line architecture using (1) formal specification of the high-level architecture, (2) manual analysis of scenarios to exercise the architecture's support for required variabilities, and (3) model checking of critical behaviors at the architectural level that are required for all systems in the product line. Results of an application to a software product line of spaceborne telescopes are used to explain and evaluate the approach. | INTRODUCTION
A software product line is a collection of systems that
share a managed set of properties that are derived from
a common set of software assets [4]. A product line
approach to software development is attractive to most
organizations due to the focus on reuse of both intellec-
This research was performed while this author was a visiting
researcher at the Jet Propulsion Laboratory.
y This author supported in part by a NASA/ASEE Summer
Faculty Fellowship.
z Contact Author.
x Mailing address: Dept. of Computer Science, Iowa State Uni-
versity, 226 Atanasoff Hall, Ames, IA 50011-1041.
tual effort and existing tangible artifacts. The systems
or "derivatives" in a software product line (e.g., software
based on the product line) usually share a common ar-
chitecture. For a new product line, many alternative
architectures are designed according to the requirement
specifications and one is selected as the "baseline" or
"core" for future systems. For a product line that leverages
existing systems, an architecture may already be in
place with organizational commitment to its continued
use.
This paper addresses the issue of how to perform architectural
analysis on an existing product line archi-
tecture. The contribution of the paper is to identify
and demonstrate a repeatable product line architecture
analysis process. Throughout the paper, application to
a software product line of spaceborne telescopes is used
to explain and evaluate the approach. The approach
defines a "good" product line architecture in terms of
those quality attributes required by the particular product
line under development. It then analyzes the architecture
against these criteria by both manual and tool-supported
methods.
This paper demonstrates the analytical value of specifying
an existing architecture with an Architectural Description
Language (ADL), both in terms of identifying
architectural mismatches with the product line and in
terms of providing a baseline for subsequent automated
analyses. The ADL model is then used to manually exercise
the architecture in order to measure how each of
a set of selected scenarios that capture the required attributes
(e.g., modiability, fault tolerance) impacts the
architecture. We found that this technique was particularly
eective at verifying whether or not the architecture
supported planned variabilities within the product
line.
Further verification of the architecture involves automated
tool support to analyze key, common behaviors.
We were particularly interested in the adequacy of the
fault-tolerant behavior of a critical data interface common
to all systems. Model checking of the targeted
behaviors allows demonstration of the consequences of
some architectural decisions for the product line.
The phased approach described in this paper provides a
structured analysis of an existing product line architecture
using: (1) architectural recovery and specication,
(2) manual analysis of scenarios to exercise architectural
support for required variabilities, and (3) model checking
of critical behaviors at the architectural level that
are required for all systems in the product line.
The rest of the paper is organized as follows. Section 2
provides background relating to software architecture,
product lines, and the interferometer application. Section
3 describes the three-step approach outlined above
in greater detail. Section 4 presents and discusses the
results from the manual and tool-supported analyses.
Section 5 briefly describes related work. Section 6 offers
concluding remarks and indicates some directions
for future research.
This section describes background material in the areas
of software architectures, software product lines, and
interferometry.
2.1 Software Architectures
A software architecture describes the overall organization
of a software system in terms of its constituent ele-
ments, including computational units and their interrelationships
[21]. In general, an architecture is defined
as a configuration of components and connectors. A
component is an encapsulation of a computational unit
and has an interface that specifies the capabilities that
the component can provide. Connectors, on the other
hand, encapsulate the ways that components interact.
A configuration of components interconnected with connectors
determines the topology of the architecture and
provides both a structural and semantic view of a sys-
tem, where the semantics are provided by the individual
specifications of the components and connectors.
Another important concept in the area of software architectures
is the concept of an architectural style. An
architectural style defines patterns and semantic constraints
on a configuration of components and connec-
tors. As such, a style can define a set or family of systems
that share common architectural semantics [18].
For instance, a pipe and filter style refers to a pipelined
set of components whereas a layered style refers to a
set of components that communicate via hierarchies of
interfaces. The distinction between architectural style
and architecture is an important concept throughout
the work described here. As one would expect, all the
systems in our example product line share a base architectural
style and a set of shared software components
that are organized and communicate in certain
prescribed manners. However, there are architectural
variations among the systems regarding the number of
components and connectors, with some systems replicating
portions of the baseline reference architecture in
their individual architectures.
2.2 Product Lines
Bass, Clements, and Kazman define a software product
line as "a collection of systems sharing a managed set of
features constructed from a common set of core software
assets" [4]. These assets typically include a base architecture
and a set of shared software components. The
software architecture for the product line displays the
commonality that the systems share and provides the
mechanisms for variability among the products. The
systems in the product line are referred to as members
or derivatives of the baseline architecture or architectural
style.
2.3 Interferometers
The product line of interest in this work is a set of interferometer
projects under development by NASA's Jet
Propulsion Laboratory. An interferometer, in this con-
text, is a collection of telescopes that act together as a
single, very powerful instrument. Interferometers will
be used to explore the origins of stars and galaxies and
to search for Earth-like planets around distant stars.
An interferometer combines the starlight it collects from
telescopes in such a way that the light \interferes" or
interacts to increase the intensity and increase the precision
of the observation.
Three spaceborne interferometers are either under development
or planned for launch in the next eleven
years, with additional formation-
ying interferometers
envisioned for subsequent years [13, 19]. Two ground-based
interferometers in the product line are currently
operational, with at least two more planned.
Among the components shared by the interferometer
systems and discussed in this paper are the Delay Line,
the Fringe Tracker, and the Internal Metrology. The
Delay Line component compensates for the difference
in time between the arrival of starlight at the separate
mirrors. The Fringe Tracker component provides constant
feedback to the Delay Line regarding needed adjustments
to maintain peak intensity of the fringe (pat-
terns of light and dark bands produced by interference
of the light). The Internal Metrology component provides
input to the Delay Line regarding small changes in
distances among parts of the interferometer that must
be included in its calculations.
In previous work, we analyzed commonalities and variabilities
of the JPL interferometry software project [17].
The software in these interferometers has a high degree
of commonality with a managed set of shared features
built from core software components [3]. A group of
developers at JPL with a strong background in interferometer
software provides reusable, generic software
components to the interferometer projects.
Extensive documentation of the requirements and design
for these software components, as well as C code
for the component prototypes, were available for our
analyses. In addition, we used whatever project-unique
documentation was available. Predictably, more documentation
exists for projects farther along in their de-
velopment. System descriptions are available for all the
missions; software requirements and design documents
are still high-level and informal for later missions; and
code is not yet available for any of the spaceborne interferometers
In this section we describe the approach that was used
to analyze the interferometer software product line. Section
3.1 summarizes the overall process used during the
project and introduces the architectural recovery, dis-
covery, and specication of the existing product line;
Section 3.2 describes the manual analysis process used
to measure quality attributes related to product lines;
and Section 3.3 describes the behavioral analysis performed
using automated tool support.
3.1 Process
A software architecture is one key required element that
should be present in order to analyze software for product
line "fitness" since it is the architecture, above any
other artifact, that is being reused. One of the properties
of this particular product line is that although
an architecturally-based product line approach was not
used in the construction of the software, the artifacts
(both conceptual and physical) were being used in a
manner indicative of a product line approach. As such,
several software products had been developed or were
in the process of being developed based on the core architecture
For the interferometer software, we performed three
architecture-centered steps: 1) architecture recovery,
discovery, and specification, 2) manual architectural
analysis, and 3) tool-assisted architectural analysis.
The first step, architecture recovery, discovery, and
specification, was used in order to facilitate two goals:
1) to familiarize the analysts with the problem domain
and implemented solution, and 2) to support construction
of a software architectural representation that was
consistent with current standards and vocabulary. For
this step, documentation, source code, and developer
communication were used to assist in the construction
of a reasonable specification of the software architec-
ture. The resulting architecture specification, shown
graphically in Figure 1, formed the basis for all subsequent
analyses, both manual and automated. In the
diagram, hardware components are shown as shaded
and round rectangles while the software components are
shown as sharp rectangles. The connectors, represented
by lines between components, depict the relationships
between components in the architecture. This particular
diagram represents the software that exists within
an \arm" of an interferometer, where a standard interferometer
has two arms.
The software architecture recovered in the first step
formed the baseline or core architecture for the interferometer
product line. The assumption in this step
(later confirmed by the analysis described below) was
that, although changes in software code are frequent,
significant modifications to the software architecture are
infrequent. As such, a reasonable, initial view of the
software architecture can be derived from existing design
documents and later modified as new information
is recovered.
To aid in the validation of the models constructed in the
first step, we consulted with the project engineers to determine
the accuracy of the architecture as documented
in comparison with how the project engineers viewed
the architecture. This information was instrumental in
constructing a more accurate view of the interferometer
architecture.
To further validate the accuracy of the core architecture
and its scalability to the existing and planned products
in the product line, we compared the core to the individual
product line derivatives. To facilitate the com-
parison, we developed a table (excerpted in Table 1) as
a medium for communication with several developers.
In the table, each row represents a different component
that could be potentially present in an interferometer
system. The columns represent the different derivatives
that are currently either being developed or are planned
for deployment over the next several years. This table
served as a simple way to represent features of the architecture
that are common in behavior to each potential
derivative, but can potentially vary in multiplicity
based on the number of potential starlight collectors or
\arms". For each derivative, we consulted with developers
to verify that the number of components listed in
the table was consistent with individual mission plans.
Components Core D1 D2 D3
Baselines
Arms
Delay Line 2 2 6-8 4
Fringe Tracker 2 1 3-4 2
Instrument
User Interface
Table 1: Comparison Matrix
The next phase of the approach was to perform a number
of analyses in order to help determine whether the
architecture was amenable to a product line development
approach. The primary goal was to determine
Figure 1: Interferometer Software Architecture
if certain, desirable quality attributes present in most
product line architectures were also present in the interferometer
architecture. In addition, we were interested
in performing behavioral analysis in order to study how
behavioral interactions in the core architecture might
potentially impact derivatives.
The remainder of this section is divided into Manual
Architectural Analysis and Analysis Using Automated
Support Tools. One of the interesting aspects of this
bifurcation of the analysis along manual and automated
analysis lines is that the quality attributes that fall into
the class of variabilities seem to be supported only by
manual analysis techniques whereas the commonalities
seem to be supported in some manner by automated
tools. As the work described here is only a single point
of data, we do not attempt to explain the observation,
although we do find it interesting and recognize the need
for further investigation along these lines.
3.2 Manual Architectural Analysis
Bass, Clements, and Kazman divide quality attributes
into those that can be discerned by observing the system
at runtime and those that cannot [4]. Of the ones that
cannot be observed at runtime, modifiability is the key
property required by the interferometer product line.
Modifiability, according to Bass et al., "may be the quality
attribute most closely aligned to the architecture of
a system," and, as such, is a good way to evaluate the
architecture. Bass et al. identify four categories of mod-
ifiability: Extensibility or changing capabilities, Deleting
capabilities, Portability (adapting to new operating
environments), and Restructuring.
To evaluate the modifiability of the interferometry product
line architecture, we extracted examples of each of
the four categories of modifiability from the requirements
specifications of four systems currently planned
or under development. As such, we are using a product-oriented
view of a product line, which is consistent with
other product line approaches such as the PuLSE technique
[6]. We then manually analyzed the effect of each
change on the specified architecture. This interferometer
system was chosen because its requirements were
well documented and individual product line derivatives
had requirements that facilitated the study of the mod-
ifiability of the baseline architecture.
The approach used is very similar to SAAM [14], a
scenario-based method for analyzing architectures. A
scenario is a description of an expected use of a spe-
cific product line. SAAM also tests modifiability, e.g.,
by proposing specific changes to be made to the sys-
tem. The advantage of the scenario-based approach is
that it moves the discussion from a rather amorphous,
high level of generality ("modifiability") to a concrete,
context-based level of detail particular to the product
line ("adds pathlength feedforward capability").
The interferometer product line has significant requirements
that fall under each of the four categories of mod-
ifiability as follows.
Extensibility. Potential extensibility variations include
new algorithms (e.g., a different fringe-search algo-
rithm) and added features (e.g., pathlength feedforward,
internal metrology).
Deletions. Deletions involve changes required to support
the incremental capabilities of the various testbeds
and prototypes. For example, testbeds use pseudostar
input rather than actual starlight, whereas
the science interferometers use direct starlight as input.
Attribute Scenario Type Example Scenario Effect on Architecture
Extensibility Change algorithm Algorithm for fringe search changed No change required
Extensibility Add feature Pathlength feedforward capability No style change; additional connectors
Extensibility Add feature Internal metrology added No style change; additional components
and connectors
Deletion Delete input Use pseudostar rather than actual No change required
Portability Change HCI device Shift handheld paddle to remote device Connector unchanged
Portability Change sensor Starlight detector hardware changed Interface intact; component
implementation changes
Portability Add input units More starlight collectors No style change; "duplicate"
existing pieces; see discussion
Portability Add processors Distribute targeting computation No style change;
change within components
Restructuring Optimize for reuse Proposed switch to CORBA Might change style and connectors
Table 2: Analyzing the Architecture's Modifiability via Scenarios
Portability. Portability changes are widespread, since
different interferometers in the product line will have
different numbers of starlight collectors, mirrors, tele-
scopes, etc. In addition, different systems will use different
starlight detector hardware and different operator
interfaces (e.g., a handheld paddle for the testbeds,
remote commandability for the flight units). The interferometer
software will run on multiple processors, with
the number of processors being a variability among the
systems.
Restructuring. Restructuring changes that are not included
in the other categories are limited. A proposed
change to optimize for reuse is the only scenario used in
the architectural evaluation.
As shown in Table 2, nine representative changes were
selected to evaluate the modifiability of the architecture:
three extensibility changes, one deletion, four portability
changes, and one restructuring. All these changes
are variabilities in the product line specication, i.e.,
not common to all the interferometers. The approach
was to use these representative scenarios to exercise and
evaluate the baseline architecture. A discussion of the
results of the application to the baseline interferometer
architecture and, more generally, of the advantages
and disadvantages of this approach can be found in Section
4.
3.3 Analysis using Automated Support Tools
One of the goals of this project was to determine the
extent to which automated support tools could be used
to aid in the analysis of a product line software archi-
tecture. Specifically, it was our intent to identify tools
that could be adopted with little overhead, while still
satisfying the objective of formally analyzing the architectural
behavior. This meant that the selected tools
should have a reasonable level of support and documentation.
The following tasks were identified as the critical path
for achieving our automated analysis objectives: (1) Architecture
specification in an ADL, (2) Formal specification
of behavior, and (3) Analysis of behavior. The
approach used in the selection of notations and tools is
described here. The results of the tool-supported analysis
are described and discussed in Section 4.
The ACME [9] ADL and ACMEStudio [1] support tool
were chosen for the specification of the architecture.
ACME is an architecture description language that has
been used for high-level architectural specification and
interchange [9]. ACME contains constructs for embedding
specifications written in a wide variety of existing
ADLs, making it extensible to both existing and future
specification languages. ACME is supported by an architectural
specification tool, ACMEStudio, that supports
graphical construction and manipulation of software
architectures. Analysis of the design documents
yielded the software architecture depicted in Figure 1.
In addition to recovering and specifying the high-level
view of the interferometer architecture, behaviors of
component interactions were derived from existing design
documentation. Specifically, we used information
found in design documents to help construct a formal
specification of component interactions in the interferometer
software. The Wright ADL was used for the
formal specification of behavior. Wright [2] is an ADL
based on the CSP specification language [11]. The primary
focus of the Wright ADL is to facilitate the speci-
fication of connector, role, and port semantics. In addition
to being based on the well-established CSP se-
mantics, existing Wright tools support the ACME ADL,
thus providing a clean interface with the existing ACME
specification.
The final step involved using the formal specifications
to analyze behavior of various aspects of certain interactions
between components in the architecture. To
increase confidence in the validity of the formal anal-
ysis, source code from the interferometer components
planned for reuse was informally reverse engineered to
determine whether properties observed in the formal
specification were present in the implementation. The
Spin model checker was used to further analyze behaviors
of interest. Spin [12] is a symbolic model checker
that has been used for verifying the behavior of a wide
variety of hardware and software applications. Promela,
the input specication language for Spin, is based on Di-
jkstra's guarded command language as well as CSP.
The primary reason for choosing each of the notations
and tools listed above was a pragmatic one. The notations
are related either via direct tool interchange
support (as is the case between ACME and Wright) or
by some semantic foundation (e.g., CSP foundation for
Wright and Promela). As such, the ACME framework
(including Wright specifications) could be used for specifying
the interferometer architecture, and verification
using Spin could follow naturally with a small amount
of translation of the embedded Wright into Promela.
In this section we describe the results of applying the approach
described in Section 3. Specifically, Section 4.1
discusses the issues that were encountered during the recovery
and specification of the interferometer architec-
ture. Sections 4.2 and 4.3 describe our efforts to manually
and semi-automatically analyze the architecture,
respectively.
4.1 Architecture Specification
As shown in Figure 2, the original documentation for
the interferometry software depicts the architecture using
a layered style. However, during the analysis and
subsequent specification of the architecture, it was discovered
that the architecture, as documented, exhibited
"layer bridging" properties whereby non-adjacent layers
in the architecture communicated, thus "bridging" or bypassing
intermediate layers. In addition, sibling components
located in a layer were found to communicate,
contrary to the layered style. Consequently, the high-level
interferometer architecture was re-specified in a
style that was consistent with the services and behaviors
described in lower-level documentation. The resulting
architecture, shown in Figure 1, more accurately speci-
fied the architecture as a heterogeneous architecture
with a collection of communicating processes as well as
a constrained pipe and filter interaction between the
Instrument CDS and all of the other remaining components
4.2 Manual Analysis Results
The baseline architecture shows the commonality that
exists among the members of the product line. Each
member of the product line uses this architecture or an
adaptation of it. Thus, nothing in the architecture can
constrain the anticipated variabilities among the members
As mentioned earlier, one of the key quality attributes
for the interferometer product line is modifiability. It
[Figure 2: Original Core Architecture. The layered diagram is omitted; its labels include Gizmo Prototypes, Gizmo Design Pattern, Core Services, Configuration, Controller, Modulation Framework, Command Framework, Command & Telemetry Framework, Engine Framework, Gizmo, Inter-processor Communication, Periodic Task Scheduler, Hardware Framework, Pointer, Angle, Instrument CDS, Delay Line, Fringe Tracker, and Star Tracker.]
was with the goal of exercising the product line architecture
that we considered the effect on the architecture
of each of nine representative modifiability scenarios, all
drawn from the documentation.
Effect on architecture of scenarios
Table 2 summarizes the results of our manual analysis
of the product line architecture for modifiability via the
nine scenarios described in Section 3.2. Column 1 indicates
to which of the four categories of modifiability each
scenario belongs (Extensibility, Deletion, Portability, or
Restructuring). Column 2 is a high-level description of
the scenario (e.g., "Change algorithm", "Add feature",
"Change sensor", etc.). Column 3 briefly describes the
particular scenario. Column 4 indicates the effect of
that modifiability scenario on the baseline architecture.
Of the nine scenarios, four involved no change to the
baseline architecture. These scenarios were: change of
algorithm, deletion of input, change of human-computer
interface device, and change of sensor device. Two
other scenarios, related to extensibility, require additional
connectors and, in one case, an additional component
not in the original architecture. However, these extensions
are relatively straight-forward and their scope
is easy to anticipate.
The other three scenarios require significant changes to
the product line architecture, but the changes are not
visible at the level of the specified architecture. In one
case (add input units), implementation of the scenario
can involve adding "arms" (i.e., additional axes) to the
interferometer. This has no effect on the more detailed
core architecture (which represents a single axis), but
requires duplication/replication of connectors and components
on the baseline architecture, a significant architectural
consequence. The scenario that distributes
the targeting computation over more processors can be
accommodated without change to the baseline architec-
ture. At the level of the model, there was no commitment
to implementation details such as number of
processors. The sole restructuring scenario, a possible
switch to CORBA, might change both the style and the
implementation of the connectors, and would require
further investigation.
Discussion
Locality of change. Most modifiability scenarios
demonstrated good locality of change for the specified
architecture (i.e., involved changes that could be readily
scoped). The existence of an architectural specification
assisted in this effort. Most scenarios do not affect the
services required of other components.
Units of reuse. The units of reuse in the architecture
tended to be small. For example, a Delay Line is a
unit, but a Delay Line-Fringe Tracker-Star Tracker is
not. All Delay Lines have a high degree of common-
ality, and the interfaces between a single Delay Line
and a single Fringe Tracker are similar for all members
(the \portability layer"), but the number of Delay
Line-Fringe Tracker interfaces varies greatly among the
product line members. The architectural style was not
changed by the scenarios, but the number of connections
and, to a lesser degree, components, was changed.
There are many dierent cross-strappings possible and
a large amount of reconguration involved in meeting
the real-time constraints on the various missions. Having
small units of reuse may complicate verication and
integration of individual members (e.g., with regard to
contention, race conditions, starvation, etc.
Role of redundancy. Several of the scenarios involved
adding multiple, identical components or connectors.
However, these copies are not redundant, in the sense of
adding robustness, since they are all needed to achieve
the required performance. For example, if starlight
collectors are added, it is to increase the amount of
starlight that the interferometer can process in order
to meet requirements for detecting dim targets. Like-
wise, if processors are added, it is to meet requirements
for increasing the resolution capability of an interfer-
ometer. In this architecture, redundancy does not add
robustness for the most part; there are not spare units
or alternate data paths.
Performance. One of the unusual aspects of this application
is that the range and scope of the variabilities
tend to be non-negotiable. This is due to the very
tight performance and accuracy requirements on the interferometry
missions. For example, an upcoming in-
terferometer, the Space Interferometry Mission (SIM),
requires precision at the level of picometer metrology
and microarcsecond astrometry. To achieve this level
of precision, significant real-time constraints exist with
limited flexibility to accommodate reuse concerns. Performance
requirements on each mission also drive the
choice of hardware, algorithms, and added capabilities.
The consequence for reuse is that in trade-offs of modifi-
ability vs. performance, performance wins.
Architectural style. Despite the range of variations
that affect the architecture (e.g., varying the number of
ports on a component, varying the number of instances
of a component), the interferometry project is committed
to keeping the architectural style stable. Most im-
portantly, this demonstrates itself in their maintaining
the commonality of the interfaces. The number of interfaces
is not constant among product line members,
but the interfaces themselves are relatively stable. Recognizing
the long timeline over which the product line
will extend (proposed launches from to 2020) and
the primacy of performance (with continuous improvement
of hardware and algorithms), the project has done
a good job of designing for evolvability.
Repeatable process. The manual analysis of the architecture
is a repeatable process that can be applied to
product lines. The process is as follows:
1. Identify anticipated changes from available documentation
and project information. These anticipated
changes form product line variabilities that
the baseline architecture must accommodate.
2. Categorize the anticipated changes into modifia-
bility categories (extensibility, deletion, portability,
restructuring).
3. Select and develop scenarios for each category. The
choice of scenarios is made to broadly challenge the
goodness of the architecture with regard to the four
modifiability categories.
4. Evaluate the effect of each modifiability scenario on
the baseline architecture. This gives a measure of
the goodness of the architecture with respect to the
anticipated variabilities for this product line.
4.3 Analysis Using Automated Support Tools
While the manual analysis addressed issues related directly
to the use of the interferometer architecture as
a product line, the automated analysis was primarily
of use for analyzing behavior viewed as common across
product line members. As such, any behavioral properties
(both positive and negative) discovered at the architectural
level were likely to be common to all members
of the product line.
Verification
A key element of the interferometer architecture was
the use of the "Target Buffer" connector. This connec-
tor, both in the design and in the implementation, is a
non-locking buffer used to communicate star targets to
the Delay Line component by several other components.
The Target Buffer connector was viewed as a possible
concern, especially in light of the non-locking feature. It
was determined that behavior involving this connector
should be formally specified in order to study its impact
on the system.
There are several components that are either directly
or indirectly impacted by the non-locking nature of the
Target Buffer connector: Target Sources, a Command
Controller, and a Target Generator component. The
Target Generator uses the values written to the Target
Buffer by various Target Sources to compute a target
position for the interferometer. The Command Controller
provides control for the computation by enabling
or disabling the Target Sources. Target Sources write a
timestamped value to the Target Buffer, with the timestamp
determining a time that the target value becomes
valid.
The Target Generator uses the following four-step sequence
for calculating the target position:
1. Promote waiting targets to active status if the current
time is greater than or equal to the timestamp
2. Read new targets from enabled target sources
3. Pend (assign to wait status) or activate new targets
based on timestamps
4. Compute the total target
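As a simplified illustration of this sequence (a Python sketch of ours, not the project's C or Promela code; the class names and single-slot data layout are assumptions), a non-locking buffer and a generator following the four steps can be written as follows; the sketch is reused in the short driver given after the discussion of the second situation below.

class TargetBuffer:
    # Single-slot, non-locking buffer: a write simply overwrites the
    # previous (value, timestamp) pair, and a read never blocks.
    def __init__(self):
        self.slot = None

    def write(self, value, timestamp):
        self.slot = (value, timestamp)

    def read(self):
        return self.slot


class TargetGenerator:
    def __init__(self, buffers):
        self.buffers = buffers      # one buffer per target source
        self.pending = {}           # buffer -> (value, timestamp)
        self.active = {}            # buffer -> value

    def cycle(self, now, enabled):
        # 1. Promote waiting targets whose timestamp has arrived,
        #    regardless of whether their source is still enabled.
        for buf, (val, ts) in list(self.pending.items()):
            if ts <= now:
                self.active[buf] = val
                del self.pending[buf]
        # 2. Read new targets from enabled sources only.
        for buf in self.buffers:
            if enabled.get(buf, False) and buf.read() is not None:
                val, ts = buf.read()
                # 3. Pend or activate the new target based on its timestamp.
                if ts <= now:
                    self.active[buf] = val
                else:
                    self.pending[buf] = (val, ts)
        # 4. Compute the total target from the active contributions.
        return sum(self.active.values())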
The Wright specification of the interaction between the
Target Generator and the potential sources of data that
are written to the Target Buffer is shown in Figure 3.
The Source specification models the fact that a source
internally decides whether or not to write a new value to
the Target Buffer. Finally, the Target Generator specification
models the target-position algorithm described
above.
From the Wright specification, we constructed a
Promela specification, portions of which are found in
Figures
4 and 5, with the intention of determining
whether or not the following situations could occur.
Data From Disabled Sources. Is there a potential
for calculating the target position by using data from
sources that are currently disabled?
Best Data from Enabled Sources. Is there a potential
to calculate a target position by using data that is less
current than data currently in the target buffer?
In the first case, we were interested in determining
whether or not it was possible to generate a target position
by using data from inactive sources. In essence, a
target position input can be read by the Target Gener-
ator, pended due to the timestamp (e.g., the timestamp
indicates that the target value is not to be used until
some time in the future), and subsequently promoted
into use when the timestamp matches (or precedes) the
current time. The potential inconsistency occurs during
the time that the target is pended and is caused by the
fact that a source can be disabled during this waiting
period.
Style TargetComputation
Connector TargetBuffer
Role
Role
Reader.readtarget!x -> Glue [] Tick
Component Source
disable -> CDSCommand |~| Tick
Computation (CDSCommand.enable -> Generate) []
(CDSCommand.disable -> Computation) [] Tick
where { Generate = DLTarget.write!y -> Generate []
Generate [] Tick }
Component TargetGenerator
Input.read_target?x ->
_pend_or_activate ->
_compute -> Computation [] Tick )
Style
Configuration TargetComputationInstance
Instances
Attachments
src1.DLTarget as tb1.Writer
dl.Input as tb1.Reader
End Configuration
Figure 3: Subset of the Wright Specification
proctype source_1 (chan cds){
chan cmd;
chan ts = [1] of { int };
chan { int };
int active_or_inactive;
cds?cmd;
do :: (msgs_generated < max_msgs) &&
(active_or_inactive == true) ->
if :: run message(msg);
run timestamp(ts);
od
Figure 4: Promela Specification of Target Source
The second case involves the following situation. As be-
fore, a target from a source is read, potentially pended,
and eventually promoted. Because of the sequencing of
events, a new target value from the source can over-write
the recently promoted target and, based on the
timestamp, be valid for immediate use.
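A short driver over the earlier sketch (illustrative values only, not taken from the project) exhibits the first of these situations: a target pended while its source was enabled is still promoted, and still contributes to the total, after the source has been disabled.

buf = TargetBuffer()
gen = TargetGenerator([buf])

buf.write(value=5.0, timestamp=10)               # valid from t = 10
gen.cycle(now=0, enabled={buf: True})            # read while enabled -> pended
total = gen.cycle(now=10, enabled={buf: False})  # source disabled meanwhile
print(total)                                     # -> 5.0, from a disabled source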
Using the Spin model checker, it was verified that
these situations do in fact exist. In order to determine
whether these cases were also present in the code, we
examined source files and were able to verify that the
proctype target_generator (chan valid){
int sum;
int v;
do :: (msgs_generated < max_msgs) ->
/* "activation/promotion" of
pended targets achieved
by maintaining previous
value of s1 or s2 */
/* read new targets from active target sources */
if :: (v ==
(v ==
/* check if pended or not and compute target*/
if :: (v >
if :: ((s1_ap <= now) && (s2_ap <= now)) ->
reset sum */
(msgs_generated >= max_msgs) -> break;
Figure 5: Promela Specification of Target Generator
situations, as documented and as specified with Wright,
did in fact exist in an early, pre-flight version of the
source code.
In each of these cases, the use of a non-locking buffer
coupled with the target-generator algorithm provided
the potential for intermittent values that are inconsistent
with the desired and current target. The interferometry
project engineers confirmed that the Spin
model checker accurately modeled the software behavior
in both situations. In the rst case, a target from a
currently disabled target source may still be activated.
In the second case, a newly received target with a less-
current timestamp can overwrite an active target. How-
ever, in neither case is the software behavior contrary
to intent, given the underlying assumptions about the
operational use of the software.
Discussion
The automated analysis of the interferometer architecture
using the Spin model checker was greatly facilitated
by the availability and use of the Wright and ACME
ADLs. In effect, by using this combination of tools, we
were able to use model checking in a manner that was directed
by the structure and behavior of a software archi-
tecture. That is, the software architecture specification
was used to direct the model checking activity by facilitating
identification of potentially interesting points
of interaction in the interferometer architecture. Given
the fact that any behavior observed in the architecture is
potentially replicated among all product line members,
we found that the approach was a good complement to
the manual analysis activities.
There is an extensive body of related work on product
lines, described briefly in Section 2.2 and in more detail
in [17]. Our work builds on product family techniques
such as Commonality Analysis [3] and the FAST process
[22], which systematically model the required similarities
and differences among family members. The architectural
implications of product line models have been
analyzed by Perry [20], by Gomaa and Farrukh [10], and
by researchers at SEI, among others [5]. To date, the
emphasis has been on developing architectures for new
product lines rather than on evaluating the architecture
of an existing product line, as is done here.
As described in Section 3.2, the Software Architecture
Analysis Method (SAAM) is a scenario-based method
for architectural assessment. A related architectural
analysis method is the Architecture Tradeoff Analysis
Method (ATAM) [15]. This iterative method is based
on identifying a set of quality attributes and associated
analysis techniques that measure an architecture along
the dimensions of the attributes. Sensitive points in
an architecture are determined by assessing the degree
to which an attribute analysis varies with variations in
the architecture. In our approach, we focus on quality
attributes that are specific to product line architec-
tures. As such, the approach can be applied in either
the SAAM or the ATAM context.
Rapide [16] is a suite of techniques and tools that
support the use of executable architectural design languages
(EADLs). The toolset supports analysis of time-sensitive
systems from the early construction phase
(e.g., architecture definition) to analysis of correctness
and performance. In our work, the motivation for choosing
a particular technique was based on a desire to eventually
transfer the technology to the project engineers.
In addition, we were interested in interoperability with
other tools. As such, we found that the ACME ADL
and associated ACMEStudio tool presented the least
amount of educational overhead. ACME also had the
advantage of being able to embed other ADLs in its
specification. However, we recognize that several alternatives
such as Rapide exist and are investigating
the possibility of performing similar analyses with those
tools.
6 CONCLUSION
The work described here identifies and demonstrates a
process for analysis of an existing product line archi-
tecture. The results of the architectural recovery and
discovery are captured in an ADL model to support
subsequent inquiries. The architecture is manually analyzed
against a set of representative scenarios that have
the required quality attributes. Further analysis of critical
behaviors at the architectural level uses automated
tools and model checking to evaluate the consequences
of architectural decisions for the product line. The application
of this combined approach to the interferometer
product line architecture resulted in some measurements
of both the flexibility and limits of its architectural
style that could assist the project.
Further work is planned in several areas. In previous
work we have used formal techniques for the reverse engineering
of program code [7, 8]. We plan to investigate
how reverse engineering can also be used to assist in the
recovery of product line assets from existing repositories
or collections of programs. This may involve consideration
of different analysis frameworks (e.g., Rapide)
that offer fully integrated environments and investigation
of Wright/Spin translations. We also plan to pursue
the relationship between product line commonali-
ties/variabilities and analysis techniques. The observation
here that quality attributes relating to variabilities
(e.g., modifiability) seem best supported by manual
analysis techniques whereas commonality attributes are
best analyzed with automated tool support (e.g., model
checking) merits further study. Finally, we would like
to make more precise the role of architectural issues in
product line decision models.
Acknowledgments
We thank Dr. John C. Kelly for his continued support of this
work. We thank Dr. Braden E. Hines, Dr. Charles E. Bell,
and Thomas G. Lockhart for helpful discussions and explanations
regarding the reuse of interferometry software. Part
of the work described in this paper was carried out at the Jet
Propulsion Laboratory, California Institute of Technology,
under a contract with the National Aeronautics and Space
Administration. Funding was provided under NASA's Code
--R
Acmestudio: A graphical design environment for acme.
A Formal Basis for Architectural Connection.
Software Architecture in Practice.
A framework for software product line practice.
A systematic approach to derive the scope of software product lines.
Strongest Post-condition as the Formal Basis for Reverse Engineering
A Specification Matching Based Approach to Reverse Engineering
ACME: An Architecture Description Interchange Language.
A reusable architecture for federated client/server systems.
Communicating Sequential Processes.
The Model Checker Spin.
An Event-Based Architecture Definition Language
Extending the product family approach to support safe reuse.
Exploiting architectural style to develop a family of applications.
Generic architecture descriptions for product lines.
Software Architectures: Perspectives on an Emerging Discipline.
Software Product-Line Engineering
--TR
Communicating sequential processes
Software architecture
Defining families
A formal basis for architectural connection
The Model Checker SPIN
Strongest postcondition semantics as the formal basis for reverse engineering
Software architecture in practice
A systematic approach to derive the scope of software product lines
A specification matching based approach to reverse engineering
A reusable architecture for federated client/server systems
Software product-line engineering
Extending the product family approach to support safe reuse
Scenario-Based Analysis of Software Architecture
An Event-Based Architecture Definition Language
Generic Architecture Descriptions for Product Lines
Acme
--CTR
Robyn R. Lutz , Gerald C. Gannod, Analysis of a software product line architecture: an experience report, Journal of Systems and Software, v.66 n.3, p.253-267, 15 June
H. Conrad Cunningham , Yi Liu , Cuihua Zhang, Using classic problems to teach Java framework design, Science of Computer Programming, v.59 n.1-2, p.147-169, January 2006
Femi G. Olumofin , Vojislav B. Mišić, A holistic architecture assessment method for software product lines, Information and Software Technology, v.49 n.4, p.309-323, April, 2007 | software architecture analysis;software architecture;product lines;interferometry software
337790 | A Faster and Simpler Algorithm for Sorting Signed Permutations by Reversals. | We give a quadratic time algorithm for finding the minimum number of reversals needed to sort a signed permutation. Our algorithm is faster than the previous algorithm of Hannenhalli and Pevzner and its faster implementation by Berman and Hannenhalli. The algorithm is conceptually simple and does not require special data structures. Our study also considerably simplifies the combinatorial structures used by the analysis. | Introduction
1. Introduction. In this paper we study the problem of sorting signed permutations
by reversals. A signed permutation is a permutation π = (π_1, ..., π_n) on the
integers 1, ..., n, where each number is also assigned a sign of plus or minus. A
reversal ρ(i, j) on π transforms π to π' = (π_1, ..., π_{i−1}, −π_j, −π_{j−1}, ..., −π_i, π_{j+1}, ..., π_n).
The minimum number of reversals needed to transform one permutation to another
is called the reversal distance between them. The problem of sorting signed permutations
by reversals is to find, for a given signed permutation π, a sequence of reversals
of minimum length that transforms π to the identity permutation (+1, +2, ..., +n).
The motivation for studying the problem arises in molecular biology: Concurrent
with the fast progress of the Human Genome Project, genetic and DNA data on many
model organisms is accumulating rapidly, and consequently the ability to compare
genomes of different species has grown dramatically. One of the best ways of checking
similarity between genomes on a large scale is to compare the order of appearance
of identical genes in the two species. In the Thirties, Dobzhansky and Sturtevant [7]
had already studied the notion of inversions in chromosomes of drosophila. In the
late Eighties, Jeffrey Palmer demonstrated that different species may have essentially
the same genes, but the gene orders may differ between species. Taking an abstract
perspective, the genes along a chromosome can be thought of as points along a line.
Numbers identify the particular genes; and, as genes have directionality, signs correspond
to their direction. Palmer and others have shown that the difference in order
may be explained by a small number of reversals [17, 18, 19, 20, 12]. These reversals
correspond to evolutionary changes during the history of the two genomes, so the number
of reversals reflects the evolutionary distance between the species. Hence, given
two such permutations, their reversal distance measures their evolutionary distance.
A preliminary version of this paper was presented at the Eighth ACM-SIAM Symposium on
Discrete Algorithms [13].
y AT&T-labs research, 180 Park Ave, Florham Park, NJ 07932 USA. hkl@research.att.com
z Department of Computer Science, Sackler Faculty of Exact Sciences, Tel Aviv University,
Research supported in part by a grant from the Ministry of Science
and the Arts, Israel, and by US Department of Energy, grant No. DE-FG03-94ER61913/A000.
shamir@math.tau.ac.il
x Department of Computer Science, Princeton University, Princeton, NJ 08544 USA and InterTrust
Technologies Corporation, Sunnyvale, CA 94086 USA. Research at Princeton University partially
supported by the NSF, Grants CCR-8920505 and CCR-9626862, and the Office of Naval Research,
Contract No. N00014-91-J-1463. ret@cs.princeton.edu
Mathematical analysis of genome rearrangement problems was initiated by Sankoff
[22, 21]. Kececioglu and Sankoff [16] gave the first constant-factor polynomial approximation
algorithm for the problem and conjectured that the problem is NP-hard.
Bafna and Pevzner [3], and more recently Christie [6] improved the approximation
factor, and additional studies have revealed the rich combinatorial structure of re-arrangement
problems [15, 14, 2, 9, 10]. Quite recently, Caprara [5] has established
that sorting unsigned permutations is NP-hard, using some of the combinatorial tools
developed by Bafna and Pevzner [3].
In 1995, Hannenhalli and Pevzner [11] showed that the problem of sorting a signed
permutation by reversals is polynomial. They proved a duality theorem that equates
the reversal distance with the sum of three combinatorial parameters (see Theorem 2.3
below). Based on this theorem, they proved that sorting signed permutations by
reversals can be done in O(n 4 ) time. More recently, Berman and Hannenhalli [4]
described a faster implementation that finds a minimum sequence of reversals in
O(n^2 α(n)) time, where α is the inverse of Ackermann's function [1] (see also [23]).
In this study we give an O(n 2 ) algorithm for sorting a signed permutation of n
elements, thereby improving upon the previous best known bound [4]. In fact, if the
reversal distance is r, our algorithm requires O(r n + n α(n)) time. In addition to
giving a better time bound, our work considerably simplifies both the algorithm and
combinatorial structure needed for the analysis, as follows:
The basic object we work with is an implicit representation of the overlap graph,
to be defined later, in contrast with the interleaving graph in [11] and [4]. The
overlap graph is combinatorially simpler than the interleaving graph. As a result,
it is easier to produce a representation for the overlap graph from the input, and
to maintain it while searching for reversals.
As a consequence of our ability to work with the overlap graph we need not perform
any "padding transformations", nor do we have to work with "simple permutations"
as in [11] and [4].
We deal with the unoriented and oriented parts of the permutation separately,
which makes the algorithm much simpler.
The notion of a hurdle, one of the combinatorial entities defined by [11] for the
duality theorem, is simplified and is handled in a more symmetric manner.
The search for the next reversal is much simpler, and requires no special data
structures. Our algorithm computes connected components only once, and any
simple implementation of it suffices to obtain the quadratic time bound. In con-
trast, in [4] a logarithmic number of connected component computations may be
performed per reversal, using the union-find data structure.
The paper is organized as follows: Section 2 gives the necessary preliminaries. Section
3 gives an overview of our algorithm. Sections 4 and 5 give the details of our algorithm.
We summarize our results and suggest some further research in Section 6.
2. Preliminaries. This section gives the basic background, primarily the theory
of Hannenhalli and Pevzner, on which we base our algorithm. The reader may find
it helpful to refer to Figure 2.1, in which the main definitions are illustrated. We
start with some definitions for unsigned permutations. Let π = (π_1, ..., π_n) be a
permutation of {1, ..., n}. Augment π to a permutation on n + 2 elements by adding
π_0 = 0 and π_{n+1} = n + 1 to it. A pair (π_i, π_{i+1}), 0 ≤ i ≤ n, is called a gap. Gaps
are classified into two types: (π_i, π_{i+1}) is a breakpoint of π if and only if |π_{i+1} − π_i| ≠ 1;
otherwise, it is an adjacency of π. We denote by b(π) the number of
breakpoints in π.
A reversal ρ(i, j), 1 ≤ i ≤ j ≤ n, on a permutation π transforms π to
π' = (π_1, ..., π_{i−1}, π_j, π_{j−1}, ..., π_i, π_{j+1}, ..., π_n).
We say that a reversal ρ(i, j) acts on the gaps (π_{i−1}, π_i) and (π_j, π_{j+1}).
Fig. 2.1. a) The breakpoint graph, B(), of the permutation
edges are solid; gray edges are dashed; oriented edges are bold. b) B() decomposes into two disjoint
alternating cycles. c) The overlap graph, OV (). Black vertices correspond to oriented edges.
2.1. The breakpoint graph. The breakpoint graph B(π) of a permutation π
is an edge-colored graph on the vertex set {0, 1, ..., n + 1}. We join vertices π_i and
π_{i+1} by a black edge if (π_i, π_{i+1}) is a breakpoint in π, and vertices i and i + 1 by a
gray edge if (i, i + 1) is a breakpoint in π^{-1}.
We define a one-to-one mapping u from the set of signed permutations of order
n into the set of unsigned permutations of order 2n as follows. Let π be a signed
permutation. To obtain u(π), replace each positive element +x in π by the pair 2x − 1, 2x
and each negative element −x by 2x, 2x − 1. For any signed permutation π, let
B(π) = B(u(π)). Note that in B(π) every vertex is either isolated or incident to
exactly one black edge and one gray edge. Therefore, there is a unique decomposition
of B(π) into cycles. The edges of each cycle alternate between gray and black. Call a
reversal ρ(i, j) such that i is odd and j even an even reversal. The reversal ρ(2i + 1, 2j)
on u(π) mimics the reversal ρ(i + 1, j) on π. Thus, sorting π by reversals is equivalent to
sorting the unsigned permutation u(π) by even reversals. Henceforth we will consider
the latter problem, and by a reversal we will always mean an even reversal. Let
c(π) be the number of cycles in B(π).
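To make the mapping concrete, the following small Python sketch (ours, not from the paper) computes u(·) and the black and gray edges of the augmented permutation under the breakpoint definitions given above; the function names are illustrative only.

def u(signed_perm):
    # +x becomes the pair 2x-1, 2x; -x becomes 2x, 2x-1.
    out = []
    for x in signed_perm:
        if x > 0:
            out += [2 * x - 1, 2 * x]
        else:
            out += [-2 * x, -2 * x - 1]
    return out

def breakpoint_graph(perm):
    # Augment with 0 and m+1, then list black edges (breakpoints of the permutation)
    # and gray edges (breakpoints of its inverse).
    m = len(perm)
    p = [0] + list(perm) + [m + 1]
    pos = {v: i for i, v in enumerate(p)}
    black = [(p[i], p[i + 1]) for i in range(m + 1) if abs(p[i + 1] - p[i]) != 1]
    gray = [(v, v + 1) for v in range(m + 1) if abs(pos[v + 1] - pos[v]) != 1]
    return black, gray

# Example: black, gray = breakpoint_graph(u([+3, -1, +2]))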
Figure
2.1(a) shows the breakpoint graph of the permutation
It has eight breakpoints and decomposes into two alternating cycles, i.e.
2. The two cycles are shown in Figure 2.1(b). Figure 2.2(a) shows the break-point
graph of which has seven breakpoints and decomposes
into two cycles.
For an arbitrary reversal ρ on a permutation π, define Δb(π, ρ) = b(π · ρ) − b(π)
and Δc(π, ρ) = c(π · ρ) − c(π). When the reversal and the permutation are clear
from the context, we will abbreviate Δb(π, ρ) by Δb and Δc(π, ρ) by Δc. As Bafna
and Pevzner [3] observed, the following values are taken by Δb and Δc depending on
the types of the gaps that ρ(i, j) acts on:
1. Two adjacencies: 2.
2. A breakpoint and an adjacency:
3. Two breakpoints each belonging to a different cycle:
4. Two breakpoints of the same cycle C:
a. are gray edges: 2.
b. Exactly one of
c. Neither gray edge, and when breaking C at i and
in the same
d. Neither gray edge, and when breaking C at i and
different paths:
Call a reversal proper if Δb − Δc = −1, i.e. it is either of type 4a, 4b, or 4d.
We say that a reversal acts on a gray edge e if it acts on the breakpoints which
correspond to the black edges incident with e. A gray edge is oriented if a reversal
acting on it is proper, otherwise it is unoriented. Notice that a gray edge (π_k, π_l) is
oriented if and only if k + l is even. For example, the gray edge (0, 1) in the graph of
Figure
2.1(a) is unoriented, while the gray edge (7;
2.2. The overlap graph. Two intervals on the real line overlap if their intersection
is nonempty but neither properly contains the other. A graph G is an interval
overlap graph if one can assign an interval to each vertex such that two vertices are
adjacent if and only if the corresponding intervals overlap (see, e.g., [8]). For a permutation
, we associate with a gray edge the interval [i; j]. The overlap graph
of a permutation , denoted OV (), is the interval overlap graph of the gray edges
of B(). Namely, the vertex set of OV () is the set of gray edges in B(), and two
vertices are connected if the intervals associated with their gray edges overlap. We
shall identify a vertex in OV () with the edge it represents and with its interval in the
representation. Thus, the endpoints of a gray edge are actually the endpoints of the
interval representing the corresponding vertex in OV (). Note that all the endpoints
of intervals in this representation are distinct integers. A connected component of
OV () that contains an oriented edge is called an oriented component; otherwise, it
is called an unoriented component.
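As an illustration of the interval representation, the following Python sketch (ours) builds the overlap graph from the gray edges computed earlier, using position intervals and the parity test for orientation quoted above; the indexing convention over the augmented array is our assumption.

def overlap_graph(perm, gray_edges):
    # Each gray edge becomes an interval of positions plus an orientation flag;
    # two vertices are adjacent when their intervals properly overlap (cross).
    m = len(perm)
    p = [0] + list(perm) + [m + 1]
    pos = {v: i for i, v in enumerate(p)}
    ivs = []
    for a, b in gray_edges:
        k, l = sorted((pos[a], pos[b]))
        ivs.append((k, l, (k + l) % 2 == 0))   # oriented iff k + l is even
    adj = {i: set() for i in range(len(ivs))}
    for i in range(len(ivs)):
        for j in range(i + 1, len(ivs)):
            k1, l1, _ = ivs[i]
            k2, l2, _ = ivs[j]
            if k1 < k2 < l1 < l2 or k2 < k1 < l2 < l1:
                adj[i].add(j)
                adj[j].add(i)
    return ivs, adj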
Figure
2.1(c) shows the interval overlap graph for It
has only one oriented component. Figure 2.2(b) shows the overlap graph of the permutation
which has two connected components, one oriented
and the other unoriented.
Fig. 2.2. a) The breakpoint graph of π′, obtained from the permutation of Figure 2.1 by the
reversal (7, 10); or, equivalently, by the reversal defined by the gray edge (2, 3).
b) The overlap graph of π′.
2.3. The connected components of the overlap graph. Let X be a set of
gray edges in B(). Define
Xg and Equivalently, one can look at the interval overlap
representation of OV () mentioned above and define the span of a set of vertices
X as the minimum interval which contains all the intervals of vertices in X .
The major object our algorithm will work with is OV (), though for e-ciency
considerations we will avoid generating it explicitly. In contrast, Pevzner and Han-
nenhalli worked with the interleaving graph H , whose vertices are the alternating
cycles of B(). Two cycles C 1 and C 2 are connected by an edge in H i there exists
a gray edge e 1 2 C 1 and a gray edge e 2 2 C 2 that overlap.
The following lemma and its corollary imply that the partition imposed by the
connected components of OV () on the set of gray edges is identical to the one
imposed by the connected components of H :
Lemma 2.1. If M is a set of gray edges in B() that corresponds to a connected
component in OV () then min(M) is even and max(M) is odd.
Proof. Assume min(M) is odd. Then must both
be in span(M) (i.e. there exist l 1 span(M) such that l 1
1). Thus min(M) is neither the maximum nor the minimum element
in the set f i span(M)g. Hence, either the maximum element or the minimum
element in span(M) is j for some min(M) < j < max(M ). By the definition of B()
there must be a gray edge contradicting the fact that
M is a connected component in OV (). The proof that max(M) is odd is similar.
As an illustration of Lemma 2.1, consider Figure 2.2(a). Let M
and
[10; 15].
Corollary 2.2. Every connected component of OV () corresponds to the set of
gray edges of a union of cycles.
Proof. Assume by contradiction that C is a cycle whose gray edges belong to
at least two connected components in OV (). Assume M 1 and M 2 are two of these
components such that there are two consecutive gray edges
along C. Since the spans of different connected components in OV () cannot overlap
there are two different cases to consider.
1.
e 1 and e 2 are in different components they cannot overlap. Thus, either the right
endpoint of e 2 is even and equals max(M 2 ) or the left endpoint of e 2 is odd and
In both cases we have a contradiction to Lemma 2.1.
2. are disjoint intervals. W.l.o.g. assume that max(M 1 ) <
The right endpoint of e 1 is even and equals max(M 1 ), which contradicts
Lemma 2.1.
Note that in particular Corollary 2.2 implies that an overlap graph cannot contain
isolated vertices.
2.4. Hurdles. Let i 1
be the subsequence of 0; consisting
of those elements incident with gray edges that occur in unoriented components
of OV (). Order i 1
on a circle CR such that i j
for
. Let M be an unoriented connected component in
g be the set of endpoints of the edges in M . An
unoriented component M is a hurdle if the elements of E(M) occur consecutively on
CR.
This definition of a hurdle is different from the one given by Hannenhalli and
Pevzner [11]. It is simpler in the sense that minimal hurdles and the maximal one do
not have to be treated in different ways. Using Corollary 2.2 above, one can prove that
the hurdles as we have defined them are identical to the ones defined by Hannenhalli
and Pevzner. Let h() denote the number of hurdles in a permutation .
A hurdle is simple if when one deletes it from OV () no other unoriented component
becomes a hurdle, and it is a super hurdle otherwise. A fortress is a permutation
with an odd number of hurdles all of which are super hurdles.
The following theorem was proved by Hannenhalli and Pevzner.
Theorem 2.3. [11] The minimum number of reversals required to sort a permutation
π is b(π) − c(π) + h(π), unless π is a fortress, in which case exactly one
additional reversal is necessary and sufficient.
3. Overview of our algorithm. Denote by d(π) the reversal distance of π, i.e.,
d(π) = b(π) − c(π) + h(π) + 1 if π is a fortress and d(π) = b(π) − c(π) + h(π) otherwise.
Following the theory developed in [11], it turns out that given a permutation π
with h(π) > 0 one can perform t = ⌈h(π)/2⌉ reversals and obtain a
permutation π′ such that h(π′) = 0 and d(π′) = d(π) − t. If OV(π) has unoriented
components then our algorithm first finds t such reversals that transform π into a π′
which has only oriented components.
Our method of "clearing the hurdles" uses the theory developed by Hannenhalli
and Pevzner. In Section 5 we describe an efficient implementation of this process
which uses the implicit representation of the overlap graph OV (). Our implementation
runs in O(n) time assuming OV () is already partitioned into its connected
components. Recently, Berman and Hannenhalli [4] gave an O(n α(n)) algorithm for
computing the connected components of an interval overlap graph given implicitly by
its representation. Using their algorithm we can clear the hurdles from a permutation
in O(n α(n)) time.
The overlap graph of 0 , OV ( 0 ), has only oriented components. In Section 4 we
prove that in the neighborhood of any oriented gray edge e there is an oriented gray
edge e_1 (e_1 could be the same as e) such that a reversal acting on e_1 does not create
new hurdles. Call such a reversal a safe reversal. We develop an efficient algorithm
to locate a safe reversal in a permutation with at least one oriented gray edge. Our
algorithm uses only an implicit representation of the overlap graph and runs in O(n)
time.
The second stage of our algorithm repeatedly finds a safe reversal and performs
it as long as OV(π) is not empty. Clearly the overall complexity is O(r · n),
where r is the number of reversals required to sort π′.
3.1. Representing the overlap graph. We assume that the input is given as
a sequence of n signed integers representing 0 . First the permutation
constructed as described in Section 2.1 and stored in an array. We also construct an
array representing 1 . It is straightforward to verify that with these two arrays we
can determine for each element in whether it is a left or a right endpoint of a gray
edge in constant time. In case the element is an endpoint of a gray edge we can also
find the other endpoint and check whether the edge is oriented in constant time.
Thus the arrays π and π^{-1} comprise a representation of OV(π). Our algorithm
will maintain these two arrays while carrying out the reversals that it finds. The time
to update the arrays is proportional to the length of the interval being reversed, which
is O(n). We shall give a high-level presentation of our algorithm and use primitives
like "Scan the oriented gray edges in increasing left endpoint order". It is easy to
see how to implement these primitives using the arrays and 1 ; we shall omit the
details.
It is easy to produce a list of the intervals in the representation of OV () sorted
by either left or right endpoint from the arrays and 1 . It is also possible to
maintain them without increasing the asymptotic time bound of the algorithm. In
practice it may be faster to maintain such lists instead of, or in addition to and
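A minimal Python sketch of this implicit representation (ours; the class and method names are not from the paper) keeps only the augmented permutation array and its inverse, and updates both under a reversal in time proportional to the reversed segment.

class OverlapRepresentation:
    def __init__(self, perm):
        m = len(perm)
        self.p = [0] + list(perm) + [m + 1]      # augmented permutation
        self.pos = [0] * (m + 2)                 # its inverse
        for i, v in enumerate(self.p):
            self.pos[v] = i

    def oriented(self, i, j):
        # Parity test for the gray edge whose endpoints sit at positions i and j.
        return (i + j) % 2 == 0

    def reverse(self, i, j):
        # Reverse positions i..j and refresh the inverse; O(j - i) work.
        self.p[i:j + 1] = self.p[i:j + 1][::-1]
        for k in range(i, j + 1):
            self.pos[self.p[k]] = k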
4. Eliminating oriented components. First we introduce some notation. Recall
that the vertices of OV () are the gray edges of B(). In order to avoid confusion
we will usually refer to them as vertices of OV (). Hence a vertex of OV () is oriented
if the corresponding gray edge is oriented and it is unoriented otherwise. Let e be a
vertex in OV (). Denote by r(e) the reversal acting on the gray edge corresponding
to e. Denote by N(e) the set of neighbors of e in OV () including e itself. Denote by
ON(e) the subset of N(e) containing the oriented vertices and by UN(e) the subset
of N(e) containing the unoriented vertices.
In this section we prove that if an oriented vertex e exists in OV () then there
exists an oriented vertex f 2 ON(e) such that r(f) is proper and safe. We also
describe an algorithm that finds a proper safe reversal in a permutation that contains
at least one oriented edge.
We start with the following useful observation:
Observation 4.1. Let e be a vertex in OV () and let
be obtained from OV () by the following operations. 1) Complement the graph induced
by OV () on N(e) feg, and
flip the orientation of every vertex in N(e) feg. 2)
If e is oriented in OV () then remove it from OV (). 3) If there exists an oriented
edge e 0 in OV () with
Note that if e is an oriented vertex in a component M of OV (), M feg may
split into several components in OV ( 0 ). (Compare figures 2.1(c) and 2.2(b).) Denote
these components by M 0
k (e), where k 1. We will refer to M 0
simply
as
whenever e is clear from the context.
Let C be a clique of oriented vertices in OV (). We say that C is happy if for
every oriented vertex e 62 C and every vertex f 2 C such that (e; f) 2 E(OV ()) there
exists an oriented vertex g 62 C such that (g; e) 2 E(OV ()) and (g; f) 62 E(OV ()).
For example, in the overlap graph shown in Figure 2.1(c) f(2; 3); (10; 11)g and f(6; 7)g
are happy cliques, but f(2; 3); (10; 11); (8; 9)g is not. Our first theorem claims that one
of the vertices in any happy clique defines a safe proper reversal.
Theorem 4.1. Let C be a happy clique and let e be a vertex in C such that
for every e 0 2 C. Then the reversal r(e) is safe.
Proof. Let assume by contradiction that M 0
i (e) is unoriented for
Assume there exists y 2 N(e) \ M 0
i such that y 62 C. Clearly y must be oriented
in OV () and since C is happy it must also have an oriented neighbor y 0 such that
not adjacent to e in OV () it stays oriented and
adjacent to y in OV ( 0 ), in contradiction with the assumption that M 0
i is unoriented.
Hence we may assume that N(e) \ M 0
i and let z 2 UN(e). Vertex z is oriented in OV ( 0 ) and if it is
adjacent to y in OV ( 0 ) we obtain a contradiction. Hence, z and y are not adjacent in
must be adjacent in OV (). Hence we obtain that UN(e) UN(y)
in OV (). Corollary 2.2 implies that component M 0
cannot contain y alone. Thus y
must have a neighbor x in M 0
x is not adjacent to e
in OV (). Thus we obtain that (x; y) 2 OV (), (x; e) 62 OV (), and x is unoriented
in OV (). Since we have already proved that UN(e) UN(y), this implies that
UN(e) UN(y), in contradiction with the choice of e.
For example, Theorem 4.1 implies that the reversal defined by the gray edge
(10, 11) is a safe proper reversal for the permutation of Figure 2.1(a), since it
corresponds to the vertex with maximum unoriented degree in the happy clique
{(2, 3), (10, 11)}. On the other hand, the reversal defined by (2, 3) creates a new
unoriented component, as it yields the permutation shown in Figure 2.2.
The following theorem proves that a happy clique exists in the neighborhood of
any oriented edge.
Theorem 4.2. Let e be an oriented vertex in OV (). There exists an oriented
vertex f 2 ON(e) such that for all the components in OV ( 0 ) are oriented
Proof. By Theorem 4.1 it suffices to show that there exists a happy clique C in
ON(e).
there exists y 2 ON(x) such that y 62 ON(e)g.
That is, Ext(e) contains all oriented neighbors of e which have oriented neighbors
outside of ON(e).
Case 1:
Case 2: Ext(e) ON(e) feg. Let D
not a clique let K j be a maximal clique in D j and dene D
be the nal clique and set
It is straightforward to verify that in each of the two cases C is indeed a happy clique.
In the next section we describe an algorithm that will find an oriented edge e
such that r(e) is safe given the representation of OV () described in Section 3.1. The
algorithm first finds a happy clique C and then searches for the vertex with maximum
unoriented degree in C. According to Theorem 4.1 this vertex defines a safe reversal.
Even though Theorem 4.2 guarantees the existence of a happy clique in the neighborhood
of any fixed oriented vertex, our algorithm does not search in one particular
such neighborhood. We will prove that the algorithm is guaranteed to find a happy
clique assuming that there exists at least one oriented edge. Therefore the algorithm
provides an alternative proof to a weaker version of Theorem 4.2 that only claims the
existence of a happy clique somewhere in the graph.
4.1. Finding a happy clique. In this section we give an algorithm that
locates a happy clique in OV (). Let e_1, ..., e_m be the oriented vertices in OV () in
increasing left endpoint order. The algorithm traverses the oriented vertices in OV ()
according to this order. Let L(e) and R(e) be the left and right endpoints, respectively,
of vertex e in the realization of OV (). After traversing e the
algorithm maintains a happy clique C i in the subgraph of OV () induced by these
vertices. Assume jC
be the vertices in C i where
. The vertices of C i are maintained in a linked list ordered in
increasing left endpoint order. If there exists an interval that contains all the intervals
in C i then the algorithm maintains a minimal such interval t i . The clique C i and the
vertex t i (if exists) satisfy the following invariant.
Invariant 4.1.
Every vertex e l 62 C i , l i, such that L(e i 1
must be adjacent to t i , i.e.,
) that is adjacent to a vertex in C i is either
adjacent to an interval e p such that R(e p ) < L(e
or adjacent to t i .
The fact that C i is happy in the subgraph induced by e
this invariant. We initialize the algorithm by setting C_1 = {e_1}. Initially, t_1 is not
defined. Let the current interval be e i+1 . If R(e i j
guaranteed
to be happy in OV () since all remaining oriented vertices are not adjacent to C i .
Hence the algorithm stops and returns C i as the answer. See Figure 4.1(a).
We now assume that L(e i+1
show how to obtain C i+1 and t i+1 .
We have to consider the following cases.
Case 1. The interval t i is defined and
Figure 4.1(b).
Case 2. The interval t i is not defined or R(e i+1
a)
obtained by adding e i+1 to C i and
Figure 4.1(c).
). The clique C i+1 consists of e i+1 alone and
Figure 4.1(d).
c)
). As in the previous case C g. In this case t i+1 is set
to e i j
, the last interval in C i . See Figure 4.1(e).
The following theorem proves that the algorithm above produces a happy clique.
Theorem 4.3. Let C l be the current clique when the algorithm stops. Then C l
is a happy clique in OV ().
Proof. A straightforward induction on the number of oriented vertices traversed
by the algorithm proves that C l and t l satisfy Invariant 4.1.
The algorithm stops either when R(e
when l is equal to the
number of oriented vertices. In either case since C l is happy in the subgraph induced
by e must be happy in OV ().
The running time of the algorithm is proportional to the number of oriented
vertices traversed since a constant amount of work is performed for each such vertex.
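The linear-time scan above is delicate; for testing it on small inputs, a brute-force checker (our Python code, directly transcribing the definition of happiness given earlier in this section) can serve as a reference. It is exponential and only meant for validation.

from itertools import combinations

def is_happy(C, adj, oriented):
    # C: candidate set of oriented vertices; adj: adjacency sets of OV;
    # oriented: the set of all oriented vertices.
    C = set(C)
    if not all(v in adj[u] for u, v in combinations(C, 2)):
        return False                              # not a clique
    for e in oriented - C:
        for f in C & adj[e]:
            if not any(g in (oriented - C) and e in adj[g] and f not in adj[g]
                       for g in adj):
                return False                      # no witness g for the pair (e, f)
    return True

def brute_force_happy_clique(adj, oriented):
    verts = sorted(oriented)
    for r in range(len(verts), 0, -1):            # try larger cliques first
        for C in combinations(verts, r):
            if is_happy(C, adj, oriented):
                return set(C)
    return None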
4.2. Searching the happy clique. After locating a happy clique C in OV ()
we need to search it for a vertex with a maximum number of unoriented neighbors.
In this section we give an algorithm that performs this task.
Fig. 4.1. The various cases of the algorithm for finding a happy clique. The topmost interval
is always t i . The three thick intervals comprise C i . The dotted interval corresponds to e i+1 .
Let e_1, ..., e_j be the intervals in C ordered in increasing left endpoint order.
Clearly, L(1) < L(2) < · · · < L(j) < R(1) < · · · < R(j). Thus the endpoints
of the j vertices in C partition the line into 2j + 1 intervals. The algorithm consists of the
following three stages.
Stage 1: Let e be an unoriented vertex that has a non-empty intersection with the
interval [L(1); R(j)]. Mark each of e's endpoints with the index of the interval that
contains it.
Stage 2: Let o be an array of j counters, each corresponding to a vertex in C. The
intention is to assign values to o such that the prefix sum o[1] + o[2] + · · · + o[l] is the unoriented degree
of the vertex e_l ∈ C. The counters are initialized to zero. For each unoriented vertex
e that overlaps with the interval [L(1); R(j)] we change at most four of the counters
as follows. Let I l and I r be the intervals in which L(e) and R(e) occur, respectively.
We may assume l < r as otherwise e is not adjacent to any vertex in C and we can
ignore it. We continue according to one of the following cases.
Case 1: r ≤ j. All the vertices from e_{l+1} to e_r are adjacent to e: we increment o[l + 1]
and decrement o[r + 1].
Case 2: j ≤ l. All the vertices from e_{l−j+1} to e_{r−j} are adjacent to e: we increment
o[l − j + 1] and decrement o[r − j + 1].
Case 3: l < j and j < r. Let all the vertices from
e 1 to e m are adjacent to e: we increment o[1] and decrement o[m
then the vertices from e l+1 to e j are adjacent to e: we
increment the counter o[l
Stage 3: Compute the prefix sums o[1], o[1] + o[2], ..., o[1] + · · · + o[j], and let f be an index
with maximum prefix sum. Return e_f.
The following theorem summarizes the result of this section. We omit the proof,
which is straightforward.
Theorem 4.4. Given a clique C, the vertex e f 2 C computed by the algorithm
above has maximum unoriented degree among the vertices in C.
The complexity of the algorithm is proportional to the size of C plus the number
of unoriented vertices in OV (), and hence is O(n).
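For comparison, a direct (quadratic) Python computation of the same quantity — our sketch, not the four-counter scheme above — is:

def best_vertex_in_clique(C, unoriented_intervals):
    # C and unoriented_intervals are lists of (L, R) pairs; return the index in C of a
    # member crossing the largest number of unoriented intervals.
    def cross(a, b):
        return a[0] < b[0] < a[1] < b[1] or b[0] < a[0] < b[1] < a[1]
    degree = [sum(cross(c, e) for e in unoriented_intervals) for c in C]
    return max(range(len(C)), key=degree.__getitem__)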
5. Clearing the hurdles. In case there are unoriented components in OV (),
there exists a sequence ρ_1, ..., ρ_t of t reversals that transform π into π′ such that
h(π′) = 0 and d(π′) = d(π) − t, where t = ⌈h(π)/2⌉. In this section we summarize the characterization given
by Hannenhalli and Pevzner for these t reversals and outline how to find them using
our implicit representation of OV ().
We will use the following definitions. A reversal merges hurdles H 1 and H 2 if it
acts on two breakpoints, one incident with a gray edge in H 1 and the other incident
with a gray edge in H 2 . Recall the circle CR dened in Section 2, in which the
endpoints of the edges in the unoriented components of OV () are ordered consistently
with their order in . Two hurdles H 1 and H 2 are consecutive if their sets of endpoints
occur consecutively on CR, i.e., there is no hurdle H such that
E(H) separates E(H 1 ) and E(H 2 ) on CR.
The following lemmas were essentially proved by Hannenhalli and Pevzner though
stated differently in their paper.
Lemma 5.1 ([11]). Let be a permutation with an even number, say 2k, of
hurdles. Any sequence of k − 1 reversals each of which merges two non-consecutive
hurdles followed by a reversal merging the remaining two hurdles will transform into
0 such that has only oriented components.
Lemma 5.2 ([11]). Let π be a permutation with an odd number, say 2k + 1, of
hurdles. If at least one hurdle H is simple then a reversal acting on two breakpoints
incident with edges in H transforms π into π′ with 2k hurdles such that
d(π′) = d(π) − 1. If π is a fortress then a sequence of k − 1 reversals merging pairs of non-consecutive
hurdles followed by two additional merges of pairs of consecutive hurdles
(one merges two original hurdles and the next merges a hurdle created by the first and
the last original hurdle) will transform into 0 such that
0 has only oriented components.
We now outline how to turn these lemmas into an algorithm that nds a particular
sequence of reversals r with the properties described above. First OV () is
decomposed into connected components as described in [4]. One then has to identify
those unoriented components that are hurdles. This task can be done by traversing the
endpoints of the circle CR, counting the number of elements in each run of consecutive
endpoints belonging to the same component. If a run contains all endpoints of a
particular unoriented component M then M is a hurdle.
In a similar fashion one can check for each hurdle whether it is a simple hurdle
or a super hurdle. While traversing the cycle, a list of the hurdles in the order they
occur on CR is created. At the next stage this list is used to identify correct hurdles
to merge.
We assume that given an endpoint one can locate its connected component in
constant time. It is easy to verify that the data can be maintained so that this is
possible.
Theorem 5.3. Given OV () decomposed into its connected components, the
algorithm outlined above finds t reversals such that when we apply them to π we obtain
a π′ which is hurdle-free and has d(π′) = d(π) − t. The algorithm can be implemented
to run in O(n) time.
Proof. Correctness follows from Lemma 5.1 and 5.2. The time bound is achieved
if we always merge hurdles that are separated by a single hurdle. If the ith merge
merged hurdles H 1 and H 2 that are separated by H , then H should be merged in the
(i + 1)-st merge. Carrying out the merges this way guarantees that the span of each
hurdle H overlaps at most two merging reversals, the second of which eliminates H .
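The hurdle test itself reduces to run-counting on the circle CR. A Python sketch (ours) that takes the component label of each endpoint, listed in circular order, is given below; per the construction in Section 2.4, only endpoints of unoriented components appear on CR.

def hurdles(labels):
    # A component is a hurdle iff its endpoints occupy one contiguous cyclic run.
    n = len(labels)
    runs = {}
    for i in range(n):
        if labels[i] != labels[(i + 1) % n]:      # position i ends a run of labels[i]
            runs[labels[i]] = runs.get(labels[i], 0) + 1
    if not runs:                                  # every endpoint has the same label
        return set(labels[:1])
    return {lab for lab, r in runs.items() if r == 1}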
6. Summary. Figure 6.1 gives a schematic description of the algorithm.
algorithm Signed Reversals();
/* is a signed permutation */
1. Compute the connected components of OV ().
2. Clear the hurdles.
3. while is not sorted do :
iteration */
begin
a. find a happy clique C in OV ().
b. find a vertex e f 2 C with maximum unoriented
degree, and perform a safe reversal on e f ;
c. update and the representation of OV ().
4. output the sequence of reversals.
Fig. 6.1. An algorithm for sorting signed permutations
Theorem 6.1. Algorithm Signed Reversals finds the reversal distance r in
O(r n + n α(n)) time, and in particular in O(n 2 ) time.
Proof. The correctness of the algorithm follows from Theorem 2.3, Theorem 4.1
and Lemmas 5.1 and 5.2.
Step 1 takes O(n α(n)) time by the algorithm of Berman and Hannenhalli [4]. Step 2
takes O(n) time by Theorem 5.3. Step 3 takes O(n) time per reversal, by the
discussion in Section 4.
It is an intriguing open question whether a faster algorithm for sorting signed
permutations by reversals exists. It certainly might be the case that one can find an
optimal sequence of reversals faster. To date, no nontrivial lower bound is known for
this problem.
Acknowledgments
. We thank Donald Knuth, Sridhar Hannenhalli, Pavel Pevzner,
and Itsik Pe'er for their comments on a preliminary version of this paper.
--R
Zum Hilbertschen Aufbau der reellen Zahlen
Sorting permutations by transpositions
SIAM Journal on Computing
Fast
Sorting by reversals is difficult
Inversions in the chromosomes of drosophila pseudoobscura
Algorithmic Graph Theory and Perfect Graphs
Polynomial algorithm for computing translocation distance between genomes
Transforming men into mice (polynomial algorithm for genomic distance problems
Transforming cabbage into turnip (polynomial algorithm for sorting signed permutations by reversals)
including parallel inversions
Faster and simpler algorithm for sorting signed permutations by reversals
Physical mapping of chromosomes using unique probes
Tricircular mitochondrial genomes of Brassica and Raphanus: reversal of repeat con
Evolutionary signi
Edit distance for genome comparison based on non-local operations
Genomic divergence through gene rearrangement.
--TR
--CTR
Tannier , Anne Bergeron , Marie-France Sagot, Advances on sorting by reversals, Discrete Applied Mathematics, v.155 n.6-7, p.881-888, April, 2007
Anne Bergeron, A very elementary presentation of the Hannenhalli-Pevzner theory, Discrete Applied Mathematics, v.146 n.2, p.134-145, 1 March 2005
Glenn Tesler, Efficient algorithms for multichromosomal genome rearrangements, Journal of Computer and System Sciences, v.65 n.3, p.587-609, November 2002
Adam C. Siepel, An algorithm to enumerate all sorting reversals, Proceedings of the sixth annual international conference on Computational biology, p.281-290, April 18-21, 2002, Washington, DC, USA
Max A. Alekseyev , Pavel A. Pevzner, Colored de Bruijn Graphs and the Genome Halving Problem, IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), v.4 n.1, p.98-107, January 2007
Haim Kaplan , Elad Verbin, Sorting signed permutations by reversals, revisited, Journal of Computer and System Sciences, v.70 n.3, p.321-341, May 2005
Isaac Elias , Tzvika Hartman, A 1.375-Approximation Algorithm for Sorting by Transpositions, IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), v.3 n.4, p.369-379, October 2006
Severine Berard , Anne Bergeron , Cedric Chauve , Christophe Paul, Perfect Sorting by Reversals Is Not Always Difficult, IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), v.4 n.1, p.4-16, January 2007 | reversal distance;computational molecular biology;sorting permutations |
338410 | Strategies in Combined Learning via Logic Programs. | We discuss the adoption of a three-valued setting for inductive concept learning. Distinguishing between what is true, what is false and what is unknown can be useful in situations where decisions have to be taken on the basis of scarce, ambiguous, or downright contradictory information. In a three-valued setting, we learn a definition for both the target concept and its opposite, considering positive and negative examples as instances of two disjoint classes. To this purpose, we adopt Extended Logic Programs (ELP) under a Well-Founded Semantics with explicit negation (WFSX) as the representation formalism for learning, and show how ELPs can be used to specify combinations of strategies in a declarative way also coping with contradiction and exceptions.Explicit negation is used to represent the opposite concept, while default negation is used to ensure consistency and to handle exceptions to general rules. Exceptions are represented by examples covered by the definition for a concept that belong to the training set for the opposite concept.Standard Inductive Logic Programming techniques are employed to learn the concept and its opposite. Depending on the adopted technique, we can learn the most general or the least general definition. Thus, four epistemological varieties occur, resulting from the combination of most general and least general solutions for the positive and negative concept. We discuss the factors that should be taken into account when choosing and strategically combining the generality levels for positive and negative concepts.In the paper, we also handle the issue of strategic combination of possibly contradictory learnt definitions of a predicate and its explicit negation.All in all, we show that extended logic programs under well-founded semantics with explicit negation add expressivity to learning tasks, and allow the tackling of a number of representation and strategic issues in a principled way.Our techniques have been implemented and examples run on a state-of-the-art logic programming system with tabling which implements WFSX. | Introduction
Most work on inductive concept learning considers a two-valued setting. In such a
setting, what is not entailed by the learned theory is considered false, on the basis
of the Closed World Assumption (CWA) [44]. However, in practice, it is more often
the case that we are confident about the truth or falsity of only a limited number of
facts, and are not able to draw any conclusion about the remaining ones, because
the available information is too scarce. As has been pointed out in [13, 10], this
is typically the case of an autonomous agent that, in an incremental way, gathers
information from its surrounding world. Such an agent needs to distinguish between
what is true, what is false and what is unknown, and therefore needs to learn within
a richer three-valued setting.
To this purpose, we adopt the class of Extended Logic Programs (ELP, for short,
in the sequel) as the representation language for learning in a three-valued setting.
ELP contains two kinds of negation: default negation plus a second form of nega-
tion, called explicit, whose combination has been recognized elsewhere as a very
expressive means of knowledge representation. The adoption of ELP allows one to
deal directly in the language with incomplete and contradictory knowledge, with
exceptions through default negation, as well as with truly negative information by
means of explicit negation [39, 2, 3]. For instance, in [2, 5, 16, 9, 32] it is shown
how ELP is applicable to such diverse domains of knowledge representation as concept
hierarchies, reasoning about actions, belief revision, counterfactuals, diagnosis,
updates and debugging.
In this work, we show, for the first time in a journal, that various approaches
and strategies can be adopted in Inductive Logic Programming (ILP, henceforth)
for learning with ELP under an extension of well-founded semantics. As in [25,
24], where answer-sets semantics is used, the learning process starts from a set of
positive and negative examples plus some background knowledge in the form of
an extended logic program. Positive and negative information in the training set
are treated equally, by learning a definition for both a positive concept p and its
(explicitly) negated concept :p. Coverage of examples is tested by adopting the
SLX interpreter for ELP under the Well-Founded Semantics with explicit negation
defined in [2, 16], and valid for its paraconsistent version [9].
Default negation is used in the learning process to handle exceptions to general
rules. Exceptions to a positive concept are identified from negative examples,
whereas exceptions to a negative concept are identified from positive examples.
In this work, we adopt standard ILP techniques to learn some concept and its
opposite. Depending on the technique adopted, one can learn the most general
or the least general definition for each concept. Accordingly, four epistemological
varieties occur, resulting from the mutual combination of most general and least
general solutions for the positive and negative concept. These possibilities are
expressed via ELP, and we discuss some of the factors that should be taken into
account when choosing the level of generality of each, and their combination, to
define a specific learning strategy, and how to cope with contradictions. (In the
paper, we concentrate on single predicate learning, for the sake of simplicity.)
Indeed, separately learned positive and negative concepts may conflict and, in
order to handle possible contradiction, contradictory learned rules are made defeasible
by making the learned definition for a positive concept p depend on the
default negation of the negative concept :p, and vice-versa. I.e., each definition
is introduced as an exception to the other. This way of coping with contradiction
can be even generalized for learning n disjoint classes, or modified in order to take
into account preferences among multiple learning agents or information sources (see
[28]).
The paper is organized as follows. We first motivate the use of ELP as target and
background language in section 2, and introduce the new ILP framework in section
3. We then examine, in section 4, factors to be taken into account when choosing
the level of generality of learned theories. Section 5 proposes how to combine the
learned definitions within ELP in order to avoid inconsistencies on unseen atoms
and their opposites, through the use of mutually defeating ("non-deterministic")
rules, and how to incorporate exceptions through negation by default. A description
of our algorithm for learning ELP follows next, in section 6, and the overall
system implementation in section 7. Section 8 evaluates the obtained classification
accuracy. Finally, we examine related works in section 9, and conclude.
2. Logic Programming and Epistemic Preliminaries
In this section, we first discuss the motivation for three-valuedness and two types
of negation in knowledge representation and provide basic notions of extended logic
programs and W FSX.
2.1. Three-valuedness, default and explicit negation
Artificial Intelligence (AI) needs to deal with knowledge in flux, and less than
perfect conditions, by means of more dynamic forms of logic than classical logic.
Much of this has been the focus of research in Logic Programming (LP), a field
of AI which uses logic directly as a programming language 1 , and provides specific
implementation methods and efficient working systems to do so 2 .
Horn clause notation is used to express that conclusions must be supported,
"caused", by some premises. Implication is unidirectional, i.e., not contrapositive:
"causes" do not run backwards.
Various extensions of LP have been introduced to cope with knowledge representation
issues. For instance, default negation of an atom P , "not P ", was introduced
by AI to deal with lack of information, a common situation in the real world. It
introduces non-monotonicity into knowledge representation. Indeed, conclusions
might not be solid because the rules leading to them may be defeasible. For in-
stance, we don't normally have explicit information about who is or is not the
lover of whom, though that kind of information may arrive unexpectedly. Thus we
not lover(H; L)
I.e., if we have no evidence to conclude lover(H; L) for some L given H , we can
assume it false for all L given H .
Mark that not should grant positive and negative information equal standing.
That is, we should equally be able to write:
to model instead a world where people are unfaithful by default or custom, and
where it is required to explicitly prove that someone does not take any lover before
concluding that person not unfaithful.
Since information is normally expressed positively, by dint of mental and linguistic
economics, through Closed World Assumption (CWA), the absent, non explicitly
obtainable information, is usually the negation of positive information. Which
means, when no information is available about lovers, that :lover(H; L) is true by
CWA, whereas lover(H; L) is not. Indeed, whereas the CWA is indispensable in
some contexts, viz. at airports flights not listed are assumed non-existent, in others
that is not so: though one's residence might not be listed in the phone book, it may
not be ruled out that it exists and has a phone.
These epistemologic requisites can be reconciled by reading ':' above not as
classical negation, which complies with the excluded middle provision, but as yet
a new form of negation, dubbed in Logic Programming "explicit negation" [39]
(which ignores that provision), and adopted in ELP.
This requires the need for revising assumptions and for introducing a third truth-
value, named "undefined", into the framework. In fact, when we combine, for
instance, the viewpoints of the two above worlds about faithfulness we become
confused: assuming married(H; K) for some H and K; it now appears that both
faithful(H; K) and :faithful(H; K) are contradictorily true. Indeed, since we
have no evidence for lover(H; L) nor :lover(H; L) because there simply is no information
about them, we make two simultaneous assumptions about their falsity.
But when any assumption leads to contradiction one should retract it, which in a
three-valued setting means making it undefined.
The imposition of undefinedness for lover(H; L) and :lover(H; L) can be achieved
simply, by adding to our knowledge the clauses:
:lover(H; L) / not lover(H; L)
lover(H; L) / not :lover(H; L)
thereby making faithful(H; K) and :faithful(H; K) undefined too. Given no
other information, we can now prove neither of lover(H; L) nor :lover(H; L) true,
or false. Any attempt to do it runs into a self-referential circle involving default
negation, and so the safest, skeptical, third option is to take no side in this marital
dispute, and abstain from believing either.
Even in presence of self-referential loops involving default negations, the well-founded
semantics of logic programs (WFS) assigns to the literals in the above two
clauses the truth value undefined, in its knowledge skeptical well-founded model,
but allows also for the other two, incompatible non truth-minimal, more credulous
models.
2.2. Extended Logic Programs
An extended logic program is a finite set of rules of the form:
L_0 ← L_1, ..., L_n
with n ≥ 0, where L_0 is an objective literal, L_1, ..., L_n are literals and each rule
stands for the set of its ground instances. Objective literals are of the form A
or :A, where A is an atom, while a literal is either an objective literal L or its
default negation not L. :A is said the opposite literal of A (and vice versa),
not A the complementary literal of A (and vice versa).
By not {A_1, ..., A_n} we mean {not A_1, ..., not A_n} where the A_i are literals. By
:{A_1, ..., A_n} we mean {:A_1, ..., :A_n}. The set of all objective literals of a
program P is called its extended Herbrand base and is represented as H E (P ). An
interpretation I of an extended program P is denoted by T ∪ not F, where T and
F are disjoint subsets of H E (P ). Objective literals in T are said to be true in I,
objective literals in F are said to be false in I and those in H E (P ) \ (T ∪ F) are said to be
undefined in I . We introduce in the language the proposition u that is undefined
in every interpretation I .
WFSX extends the well founded semantics (WFS ) [46] for normal logic programs
to the case of extended logic programs. WFSX is obtained from WFS by adding
the coherence principle relating the two forms of negation: "if L is an objective
literal and :L belongs to the model of a program, then also not L belongs to the
model ", i.e., :L ! not L.
Notice that, thanks to this principle, any interpretation I = T ∪ not F of an
extended logic program P considered by WFSX semantics is non-contradictory,
i.e., there is no pair of objective literals A and :A of program P such that A
belongs to T and :A belongs to T [2]. The definition of WFSX is reported in
Appendix. If an objective literal A is true in the WFSX of an ELP P we write
P ⊨ A.
Let us now show an example of WFSX in the case of a simple program.
Example: Consider the following extended logic program:
a / b:
A WFSX model of this program is not :b; not ag: :a is true, a is false,
:b is false (there are no rules for :b) and b is undefined. Notice that not a is in the
model since it is implied by :a via the coherence principle.
One of the most important characteristic of WFSX is that it provides a semantics
for an important class of extended logic programs: the set of non-stratified pro-
grams, i.e., the set of programs that contain recursion through default negation.
An extended logic program is stratified if its dependency graph does not contain
any cycle with an arc labelled with −. The dependency graph of a program P is
a labelled graph with a node for each predicate of P and an arc from a predicate
p to a predicate q if q appears in the body of clauses with p in the head. The arc
is labelled with + if q appears in an objective literal in the body and with − if it
appears in a default literal.
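As a concrete rendering of this definition, here is a small Python sketch (ours) that builds the labelled dependency graph from rules given as (head, [(body_predicate, negated?)]) pairs and tests stratification; treating an explicitly negated predicate as a separate node name is our simplifying assumption.

def dependency_arcs(rules):
    # One labelled arc (p, q, sign) per body literal; '-' marks default negation.
    return {(head, q, '-' if neg else '+') for head, body in rules for q, neg in body}

def is_stratified(rules):
    arcs = dependency_arcs(rules)
    succ = {}
    for p, q, _ in arcs:
        succ.setdefault(p, set()).add(q)
    def reaches(src, dst):
        seen, stack = set(), [src]
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            if v not in seen:
                seen.add(v)
                stack.extend(succ.get(v, ()))
        return False
    # A '-' arc p -> q lies on a cycle exactly when q can reach p back.
    return not any(sign == '-' and reaches(q, p) for p, q, sign in arcs)

# The two mutually defeating clauses of Section 2.1 give a cycle through negation:
# is_stratified([("lover", [("neg_lover", True)]),
#                ("neg_lover", [("lover", True)])])  evaluates to False.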
Non-stratified programs are very useful for knowledge representation because
the WFSX semantics assigns the truth value undefined to the literals involved
in the recursive cycle through negation, as shown in section 2.1 for lover(H; L)
and :lover(H; L). In section 5 we will employ non stratified programs in order to
resolve possible contradictions.
WFSX was chosen among the other semantics for extended logic programs,
answer-sets [21] and three-valued strong negation [3], because none of the others
enjoy the property of relevance [2, 3] for non-stratified programs, i.e., they cannot
have top-down querying procedures for non-stratified programs. Instead, for WFSX
there exists a top-down proof procedure SLX [2], which is correct with respect to
the semantics.
Cumulativity is also enjoyed by WFSX, i.e., if you add a lemma then the semantics
does not change (see [2]). This property is important for speeding-up the implemen-
tation. By memorizing intermediate lemmas through tabling, the implementation
of SLX greatly improves. Answer-set semantics, however, is not cumulative for
non-stratified programs and thus cannot use tabling.
The SLX top-down procedure for WFSX relies on two independent kinds of
derivations: T-derivations, proving truth, and TU-derivations proving non-falsity,
i.e., truth or undefinedness. Shifting from one to the other is required for proving a
default literal not L: the T-derivation of not L succeeds if the TU-derivation of L
fails; the TU-derivation of not L succeeds if the T-derivation of L fails. Moreover,
the T-derivation of not L also succeeds if the T-derivation of :L succeeds, and
the TU-derivation of L fails if the T-derivation of :L succeeds (thus taking into
account the coherence principle). Given a goal G that is a conjunction of literals,
if G can be derived by SLX from an ELP P , we write P 'SLX G
The SLX procedure is amenable to a simple pre-processing implementation, by
mapping WFSX programs into WFS programs through the T-TU transformation
[8]. This transformation is linear and essentially doubles the number of program
clauses. Then, the transformed program can be executed in XSB, an efficient
logic programming system which implements (with polynomial complexity) the
WFS with tabling, and subsumes Prolog. Tabling in XSB consists in memoizing
intermediate lemmas, and in properly dealing with non-stratification according to
WFS. Tabling is important in learning, where computations are often repeated for
testing the coverage or otherwise of examples, and allows computing the WFS with
simple polynomial complexity on program size.
3. Learning in a Three-valued Setting
In real-world problems, complete information about the world is impossible to
achieve and it is necessary to reason and act on the basis of the available partial
information. In situations of incomplete knowledge, it is important to distinguish
between what is true, what is false, and what is unknown or undefined.
Such a situation occurs, for example, when an agent incrementally gathers information
from the surrounding world and has to select its own actions on the
basis of such acquired knowledge. If the agent learns in a two-valued setting, it
can encounter the problems that have been highlighted in [13]. When learning in
a specific to general way, it will learn a cautious definition for the target concept
and it will not be able to distinguish what is false from what is not yet known (see
figure 1a). Supposing the target predicate represents the allowed actions, then the
agent will not distinguish forbidden actions from actions with an unknown outcome, and this
can restrict the agent's acting power. If the agent learns in a general to specific way,
instead, it will not know the difference between what is true and what is unknown
(figure 1b) and, therefore, it can try actions with an unknown outcome. Rather,
by learning in a three-valued setting, it will be able to distinguish between allowed
actions, forbidden actions, and actions with an unknown outcome (figure 1c). In
this way, the agent will know which part of the domain needs to be further explored
and will not try actions with an unknown outcome unless it is trying to expand its
knowledge.
Figure 1. (taken from [13]) (a,b): two-valued setting, (c): three-valued setting
We therefore consider a new learning problem where we want to learn an ELP from
a background knowledge that is itself an ELP and from a set of positive and a set of
negative examples in the form of ground facts for the target predicates. A learning
problem for ELP's was first introduced in [25] where the notion of coverage was
defined by means of truth in the answer-set semantics. Here the problem definition
is modified to consider coverage as truth in the preferred WFSX semantics.
Definition 1. [Learning Extended Logic Programs]
Given:
ffl a set P of possible (extended logic) programs
ffl a set E + of positive examples (ground facts)
ffl a set E \Gamma of negative examples (ground facts)
ffl a non-contradictory extended logic program B (background knowledge 4 )
Find:
ffl an extended logic program P 2 P such that B [ P j= e for every e 2 E + and
B [ P j= :e for every e 2 E \Gamma (completeness), and B [ P 6j= :e for every e 2 E +
and B [ P 6j= e for every e 2 E \Gamma (consistency).
We suppose that the training sets E + and E \Gamma are disjoint. However, the system is
also able to work with overlapping training sets.
The learned theory will contain rules of the form p( ~X) / Body p ( ~X) and
:p( ~X) / Body :p ( ~X) for every target predicate p, where ~X stands for a tuple of
arguments. In order to
satisfy the completeness requirement, the rules for p will entail all positive examples
while the rules for :p will entail all (explicitly negated) negative examples. The
consistency requirement is satisfied by ensuring that both sets of rules do not entail
instances of the opposite element in either of the training sets.
Note that, in the case of extended logic programs, the consistency with respect
to the training set is equivalent to the requirement that the program is non-contradictory
on the examples. This requirement is enlarged to require that the
program be non-contradictory also for unseen atoms, i.e., B [ P 6j= L ^ :L for every
atom L of the target predicates.
We say that an example e is covered by program P if P j= e. Since the
SLX procedure is correct with respect to WFSX, even for contradictory programs,
coverage of examples is tested by verifying whether P 'SLX e.
Our approach to learning with extended logic programs consists in initially applying
conventional ILP techniques to learn a positive definition from E + and B, and a
negative definition from E \Gamma and B. In these techniques, the SLX procedure
substitutes the standard Logic Programming proof procedure to test the coverage
of examples.
The ILP techniques to be used depend on the level of generality that we want to
have for the two definitions: we can look for the Least General Solution (LGS) or
the Most General Solution (MGS) of the problem of learning each concept and its
complement. In practice, LGS and MGS are not unique and real systems usually
learn theories that are not the least nor most general, but closely approximate one
of the two. In the following, these concepts will be used to signify approximations
to the theoretical concepts.
LGSs can be found by adopting one of the bottom-up methods such as relative
least general generalization (rlgg) [40] and the GOLEM system [37], inverse resolution
[36] or inverse entailment [30]. Conversely, MGSs can be found by adopting a
top-down refining method (cf. [31]) and a system such as FOIL [43] or Progol [35].
4. Strategies for Combining Different Generalizations
The generality of concepts to be learned is an important issue when learning in a
three-valued setting. In a two-valued setting, once the generality of the definition
is chosen, the extension (i.e., the generality) of the set of false atoms is undesirably
and automatically decided, because it is the complement of the true atoms set.
In a three-valued setting, rather, the extension of the set of false atoms depends
on the generality of the definition learned for the negative concept. Therefore,
the corresponding level of generality may be chosen independently for the two
definitions, thus affording four epistemological cases. The adoption of ELP allows
to express case combination in a declarative and smooth way.
Furthermore, the generality of the solutions learned for the positive and negative
concepts clearly influences the interaction between the definitions. If we learn the
MGS for both a concept and its opposite, the probability that their intersection is
non-empty is higher than if we learn the LGS for both. Accordingly, the decision
as to which type of solution to learn should take into account the possibility of
interaction as well: if we want to reduce this possibility, we have to learn two LGS,
if we do not care about interaction, we can learn two MGS. In general, we may learn
different generalizations and combine them in distinct ways for different strategic
purposes within the same application problem.
The choice of the level of generality should be made on the basis of available
knowledge about the domain. Two of the criteria that can be taken into account
are the damage or risk that may arise from an erroneous classification of an unseen
object, and the confidence we have in the training set as to its correctness and
representativeness.
When classifying an as yet unseen object as belonging to a concept, we may later
discover that the object belongs to the opposite concept. The more we generalize
a concept, the higher is the number of unseen atoms covered by the definition and
the higher is the risk of an erroneous classification. Depending on the damage that
may derive from such a mistake, we may decide to take a more cautious or a more
confident approach. If the possible damage from an over extensive concept is high,
then one should learn the LGS for that concept, if the possible damage is low then
one can generalize the most and learn the MGS. The overall risk will depend too
on the use of the learned concepts within other rules.
The problem of selecting a solution of an inductive problem according to the cost
of misclassifying examples has been studied in a number of works. PREDICTOR
[22] is able to select the cautiousness of its learning operators by means of meta-
heuristics. These metaheuristics make the selection based on a user-input penalty
for prediction error. [41] provides a method to select classifiers given the cost of
misclassifications and the prior distribution of positive and negative instances. The
method is based on the Receiver Operating Characteristic (ROC) graph from signal
theory that depicts classifiers as points in a graph with the number of false positive
on the X axis and the number of true positive on the Y axis. In [38] it is discussed
how the different costs of misclassifying examples can be taken into account into a
number of algorithms: decision tree learners, Bayesian classifiers and decision lists
learners. The Reduced Cost Algorithm is presented that selects and order rules
after they have been learned in order to minimize misclassification costs. More-
over, an algorithm for pruning decision lists is presented that attempts to minimize
costs while avoiding overfitting. In [23] it is discussed how the penalty incurred if
a learner outputs the wrong classification is considered in order to decide whether
to acquire additional information in an active learner.
As regards the confidence in the training set, we can prefer to learn the MGS for
a concept if we are confident that examples for the opposite concept are correct and
representative of the concept. In fact, in top-down methods, negative examples are
used in order to delimit the generality of the solution. Otherwise, if we think that
examples for the opposite concept are not reliable, then we should learn the LGS.
In the following, we present a realistic example of the kind of reasoning that can
be used to choose and specify the preferred level of generality, and discuss how to
strategically combine the different levels by employing ELP tools to learning.
Example: Consider a person living in a bad neighbourhood in Los Angeles. He
is an honest man and to survive he needs two concepts, one about who is likely to
attack him, on the basis of appearance, gang membership, age, past dealings, etc.
Since he wants to take a cautious approach, he maximizes attacker and minimizes
:attacker, so that his attacker1 concept allows him to avoid dangerous situations.
Another concept he needs is the type of beggars he should give money to (he is
a good man) that actually seem to deserve it, on the basis of appearance, health,
age, etc. Since he is not rich and does not like to be tricked, he learns a beggar1
concept by minimizing beggar and maximizing :beggar, so that his beggar1 concept
allows him to give money strictly to those appearing to need it without faking.
However rejected beggars, especially malicious ones, may turn into attackers, in
this very bad neighbourhood. Consequently, if he thinks a beggar might attack
him he had better be more permissive about who is a beggar and placate him with
money. In other words, he should maximize beggar and minimize :beggar in a
beggar2 concept.
These concepts can be used in order to minimize his risk taking when he carries, by
his standards, a lot of money and meets someone who is likely to be an attacker,
with the following kind of reasoning:
lot of money(X);
lot of money(X); give money(X; Y )
give
give
If he does not have a lot of money on him, he may prefer not to run as he risks
being beaten up. In this case he has to relax his attacker concept into attacker2,
but not relax it so much that he would use :attackerMGS .
The various notions of attacker and beggar are then learnt on the basis of previous
experience the man has had (see [29]).
5. Strategies for Eliminating Learned Contradictions
The learnt definitions of the positive and negative concepts may overlap. In this
case, we have a contradictory classification for the objective literals in the intersec-
tion. In order to resolve the conflict, we must distinguish two types of literals in
the intersection: those that belong to the training set and those that do not, also
dubbed unseen atoms (see figure 2).
In the following, we discuss how to resolve the conflict in the case of unseen
literals and of literals in the training set. We first consider the case in which the
training sets are disjoint, and we later extend the scope to the case where there is a
non-empty intersection of the training sets, when they are less than perfect. From
now onwards, ~X stands for a tuple of arguments.
For unseen literals, the conflict is resolved by classifying them as undefined, since
the arguments supporting the two classifications are equally strong. Instead, for
literals in the training set, the conflict is resolved by giving priority to the classification
stipulated by the training set. In other words, literals in a training set that
are covered by the opposite definition are made as exceptions to that definition.
Figure 2. Interaction of the positive and negative definitions on exceptions.
Contradiction on Unseen Literals For unseen literals in the intersection, the
undefined classification is obtained by making opposite rules mutually defeasible,
or "non-deterministic" (see [5, 2]). The target theory is consequently expressed in
the following way:
p( ~X) / p + ( ~X); not :p( ~X)
:p( ~X) / p \Gamma ( ~X); not p( ~X)
where p + ( ~X) and p \Gamma ( ~X) are, respectively, the definitions learned for the positive
and the negative concept, obtained by renaming the positive predicate by p + and its
explicit negation by p \Gamma . From now onwards, we will indicate with these superscripts
the definitions learned separately for the positive and negative concepts.
We want p( ~X) and :p( ~X) each to act as an exception to the other. In case
of contradiction, this will introduce mutual circularity, and hence undefinedness
according to WFSX. For each literal in the intersection of p + and p \Gamma there are
two stable models, one containing the literal, the other containing the opposite
literal. According to WFSX, there is a third (partial) stable model where both
literals are undefined, i.e., no literal p( ~X), :p( ~X), not p( ~X) or not :p( ~X) belongs
to the well-founded (or least partial stable) model. The resulting program contains
a recursion through negation (i.e., it is non-stratified) but the top-down SLX procedure
does not go into a loop because it comprises mechanisms for loop detection
and treatment, which are implemented by XSB through tabling.
Example: Let us consider the Example of section 4. In order to avoid contradictions
on unseen atoms, the learned definitions must be:
attacker1(X) / attacker MGS (X); not :attacker1(X)
:attacker1(X) / :attacker LGS (X); not attacker1(X)
beggar1(X) / beggar LGS (X); not :beggar1(X)
:beggar1(X) / :beggar MGS (X); not beggar1(X)
beggar2(X) / beggar MGS (X); not :beggar2(X)
:beggar2(X) / :beggar LGS (X); not beggar2(X)
attacker2(X) / attacker LGS (X); not :attacker2(X)
:attacker2(X) / :attacker LGS (X); not attacker2(X)
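As an illustration of how these clauses interact (our own reading, not an example from the paper): if for some individual x0 both attacker MGS (x0) and :attacker LGS (x0) hold, the first two clauses defeat each other and both attacker1(x0) and :attacker1(x0) come out undefined, which is the cautious behaviour intended for unseen contradictory cases.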
Note that p + ( ~X) and p \Gamma ( ~X) can display as well the undefined truth value, either
because the original background is non-stratified or because they rely on some
definition learned for another target predicate, which is of the form above and
therefore non-stratified. In this case, three-valued semantics can produce literals
with the value "undefined", and one or both of p + ( ~X) and p \Gamma ( ~X) may be undefined.
If one is undefined and the other is true, then the rules above make both p and :p
undefined, since the negation by default of an undefined literal is still undefined.
However, this is counter-intuitive: a defined value should prevail over an undefined
one.
In order to handle this case, we suppose that a system predicate undefined(X)
is available 5 , that succeeds if and only if the literal X is undefined. So we add the
following two rules to the definitions for p and :p:
p( ~X) / p + ( ~X); undefined(p \Gamma ( ~X))
:p( ~X) / p \Gamma ( ~X); undefined(p + ( ~X))
According to these clauses, p( ~X) is true when p + ( ~X) is true and p \Gamma ( ~X) is undefined,
and conversely.
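A small schematic program (ours, purely for illustration) shows the effect of these clauses. Suppose the learned definitions and the background give, for a constant t:
p + (t)
p \Gamma (t) / r(t)
r(t) / not s(t)
s(t) / not r(t)
Here p + (t) is true while p \Gamma (t) is undefined, because r(t) is caught in a loop through default negation. The non-deterministic clauses alone would leave p(t) undefined, but the clause p( ~X) / p + ( ~X); undefined(p \Gamma ( ~X)) derives p(t) as true, letting the defined value prevail as argued above.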
Contradiction on Examples Theories are tested for consistency on all the literals
of the training set, so we should not have a conflict on them. However, in
some cases, it is useful to relax the consistency requirement and learn clauses that
cover a small amount of counter examples. This is advantageous when it would
be otherwise impossible to learn a definition for the concept, because no clause is
contained in the language bias that is consistent, or when an overspecific definition
would be learned, composed of very many specific clauses instead of a few general
ones. In such cases, the definitions of the positive and negative concepts may cover
examples of the opposite training set. These must then be considered exceptions,
which are then treated as due to abnormalities in the opposite concept.
Let us start with the case where some literals covered by a definition belong
to the opposite training set. We want of course to classify these according to
the classification given by the training set, by making such literals exceptions. To
handle exceptions to classification rules, we add a negative default literal of the form
not abnorm p ( ~X) (resp. not abnorm :p ( ~X)) to the rule for p( ~X) (resp. :p( ~X)), to
express possible abnormalities arising from exceptions. Then, for every exception
p( ~ t), an individual fact of the form abnorm p ( ~ t) (resp. abnorm:p ( ~ t)) is asserted
so that the rule for p( ~X) (resp. :p( ~X)) does not cover the exception, while the
opposite definition still covers it. In this way, exceptions will figure in the model of
the theory with the correct truth value. The learned theory thus takes the form:
p( ~X) / p + ( ~X); not abnorm p ( ~X); not :p( ~X)
:p( ~X) / p \Gamma ( ~X); not abnorm :p ( ~X); not p( ~X)
Abnormality literals have not been added to the rules for the undefined case because
a literal which is an exception is also an example, and so must be covered by its
respective definition; therefore it cannot be undefined.
Notice that if E + and E \Gamma overlap on an example p( ~ t), then p( ~ t) is classified
false by the learned theory. A different behaviour would obtain by slightly changing
the form of learned rules in order to adopt, for atoms of the training set, one
classification as default and thus give preference to false (negative training set) or
true (positive training set).
Individual facts of the form abnorm p ( ~ t) and abnorm :p ( ~ t) can be
used as examples for learning
a definition for abnorm p and abnorm:p , as in [25, 19]. In turn, exceptions to the
definitions of abnorm p and abnorm:p might be found and so on, thus leading to a
hierarchy of exceptions (for our hierarchical learning of exceptions, see [27]).
Example: Consider a domain containing entities a; b; c; d; e; f and suppose the
target concept is f lies. Let the background knowledge be:
bird(a) has wings(a)
jet(b) has wings(b)
angel(c) has wings(c) has limbs(c)
penguin(d) has wings(d) has limbs(d)
dog(e) has limbs(e)
cat(f) has limbs(f)
and let the training set be:
E + = ff lies(a)g E \Gamma = ff lies(d); f lies(e)g
A possible learned theory is:
f lies(X) / f lies + (X); not abnormal flies (X); not :f lies(X)
:f lies(X) / f lies \Gamma (X); not f lies(X)
abnormal flies (d)
where f lies + (X) / has wings(X) and f lies \Gamma (X) / has limbs(X).
Figure 3. Coverage of definitions for opposite concepts
The example above and figure 3 show all the various cases for a literal when learning
in a three-valued setting. a and e are examples that are consistently covered by the
definitions. b and f are unseen literals on which there is no contradiction. c and d
are literals where there is contradiction, but c is classified as undefined whereas d
is considered as an exception to the positive definition and is classified as negative.
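Spelling out the classifications that the learned theory assigns in this example (our summary of the case analysis): f lies(a) and f lies(b) are true, since only f lies + covers a and b; f lies(c) and :f lies(c) are both undefined, since both definitions cover the unseen c; :f lies(d) is true, because the exception fact abnormal flies (d) blocks the positive rule while the negative definition still covers d; and :f lies(e) and :f lies(f) are true, since only f lies \Gamma covers e and f.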
Identifying contradictions on unseen literals is useful in interactive theory revision,
where the system can ask an oracle to classify the literal(s) leading to contradiction,
and accordingly revise the least or most general solutions for p and for :p using a
theory revision system such as REVISE [7] or CLINT [12, 14]. Detecting uncovered
literals points to theory extension.
Extended logic programs can be used as well to represent and learn n disjoint
classes. When one has to learn n disjoint classes, the training set contains a
number of facts for a number of predicates p 1 ; : : : ; p n . Let H p i be a definition
learned by using, as positive examples, the literals in the training set classified as
belonging to p i and, as negative examples, all the literals for the other classes. Then
the following rules ensure consistency on unseen literals and on exceptions:
p 1 ( ~X) / p + 1 ( ~X); not abnormal p1 ( ~X); not p 2 ( ~X); : : : ; not p n ( ~X)
p 2 ( ~X) / p + 2 ( ~X); not abnormal p2 ( ~X); not p 1 ( ~X); not p 3 ( ~X); : : : ; not p n ( ~X)
: : :
p n ( ~X) / p + n ( ~X); not abnormal pn ( ~X); not p 1 ( ~X); : : : ; not p n\Gamma1 ( ~X)
This scheme can be used regardless of the algorithm used for learning the
definitions H p i .
6. An Algorithm for Learning Extended Logic Programs
The algorithm LIVE (Learning In a 3-Valued Environment) learns ELPs containing
non-deterministic rules for a concept and its opposite. The main procedure of the
algorithm is given below:
1. algorithm LIVE( inputs E + ; E \Gamma : training sets,
2. B: background theory, outputs H: learned theory)
3. H p := LearnDefinition(E + ; E \Gamma ; B)
4. H :p := LearnDefinition(E \Gamma ; E + ; B)
5. Obtain H by:
6. transforming H p , H :p into "non-deterministic" rules,
7. adding the clauses for the undefined case
8. output H
The algorithm calls a procedure LearnDefinition that, given a set of positive,
a set of negative examples and a background knowledge, returns a definition for
the positive concept, consisting of default rules, together with definitions for abnormality
literals if any. The procedure LearnDefinition is called twice, once for
the positive concept and once for the negative concept. When it is called for the
negative concept, E \Gamma is used as the positive training set and E + as the negative
one.
LearnDefinition first calls a procedure Learn(E + ; E \Gamma ; B) that learns a definition
H p for the target concept p. Learn consists of an ordinary ILP algorithm,
either bottom-up or top-down, modified to adopt the SLX interpreter for testing
the coverage of examples and to relax the consistency requirement of the solution.
The procedure thus returns a theory that may cover some opposite examples. These
opposite examples are then treated as exceptions, by adding a default literal to the
inconsistent rules and adding proper facts for the abnormality predicate. In partic-
ular, for each rule r of H p
covering some negative examples,
a new non-abnormality literal not abnormal r ( ~X) is added to r and some facts for
abnormal r ( ~ t) are added to the theory. Examples for abnormal r are obtained from
examples for p by observing that, in order to cover an example p( ~ t) for p, the atom
abnormal r ( ~ t) must be false. Therefore, facts for abnormal r are obtained from the
set E \Gamma r of opposite examples covered by the rule.
1. procedure LearnDefinition( inputs E + : positive examples,
2. E \Gamma : negative examples, B: background theory,
3. outputs H: learned theory)
4. H p := Learn(E + ; E \Gamma ; B)
5. H := H p
6. for each rule r in H p do
7. Find the sets E + r and E \Gamma r of positive and negative examples covered by r
8. if E \Gamma r is not empty then
9. Add the literal not abnormal r ( ~X) to r
10. Obtain the facts fabnormal r ( ~ t)g from facts in E \Gamma r by
11. transforming each p( ~ t) 2 E \Gamma r into abnormal r ( ~ t)
12. Add the facts fabnormal r ( ~ t)g to H
13. endif
14. endfor
15. output H
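On the flies example of section 5, this procedure behaves as follows (our tracing of the pseudo-code, consistent with the theory shown there): Learn returns the rule f lies(X) / has wings(X), which covers the negative example f lies(d); the literal not abnormal r (X) is therefore added to the rule and the fact abnormal r (d) is asserted, which, after the renaming of section 5, corresponds to the clauses with abnormal flies shown in that example.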
Let us now discuss in more detail the algorithm that implements the Learn pro-
cedure. Depending on the generality of solution that we want to learn, different
algorithms must be employed: a top-down algorithm for learning the MGS, a
bottom-up algorithm for the LGS. In both cases, the algorithm must be such that,
if a consistent solution cannot be found, it returns a theory that covers the least
number of negative examples.
When learning with a top-down algorithm, the consistency necessity stopping criterion
must be relaxed to allow clauses that are inconsistent with a small number
of negative examples, e.g., by adopting one of the heuristic necessity stopping criteria
proposed in ILP to handle noise, such as the encoding length restriction [43]
of FOIL [43] or the significancy test of mFOIL [18]. In this way, we are able to
learn definitions of concepts with exceptions: when a clause must be specialized too
much in order to make it consistent, we prefer to transform it into a default rule and
consider the covered negative examples as exceptions. The simplest criterion that
can be adopted is to stop specializing the clause when no literal from the language
bias can be added that reduces the coverage of negative examples.
When learning with a bottom-up algorithm, we can learn using positive examples
only by using the rlgg operator: since the clause is not tested on negative examples,
it may cover some of them. This approach is realized by using the system GOLEM,
as in [25].
7. Implementation
In order to learn the most general solutions, a top-down ILP algorithm (cf. [31]) has
been integrated with the procedure SLX for testing the coverage. The specialization
loop of the top-down system consists of a beam search in the space of possible
clauses. At each step of the loop, the system removes the best clause from the
beam and generates refinements. They are then evaluated according to an accuracy
heuristic function, and their refinements covering at least one positive example are
added to the beam. The best clause found so far is also separately stored: this
clause is compared with each refinement and is replaced if the refinement is better.
The specialization loop stops when either the best clause in the beam is consistent
or the beam becomes empty. Then the system returns the best clause found so far.
The beam may become empty before a consistent clause is found and in this case
the system will return an inconsistent clause.
In order to find least general solutions, the GOLEM [37] system is employed. The
finite well-founded model is computed, through SLX, and it is transformed by replacing
literals of the form :A with new predicate symbols of the form neg A. Then
GOLEM is called with the computed model as background knowledge. The output
of GOLEM is then parsed in order to extract the clauses generated by rlgg before
they are post-processed by dropping literals. Thus, the clauses that are extracted
belong to the least general solution. In fact, they are obtained by randomly picking
couples of examples, computing their rlgg and choosing the consistent one that
covers the biggest number of positive examples. This clause is further generalized
by choosing randomly new positive examples and computing the rlgg of the previously
generated clause and each of the examples. The consistent generalization
that covers more examples is chosen and further generalized until the clause starts
covering some negative examples. An inverse model transformation is then applied
to the rules thus obtained by substituting each literal of the form neg A with the
literal :A.
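As a concrete illustration of this pre- and post-processing (our own example, using the vocabulary of section 5): if the well-founded model computed through SLX contains :f lies(d), it is handed to GOLEM as the atom neg f lies(d); if GOLEM then produces a clause such as neg f lies(X) / has limbs(X), the inverse model transformation turns it back into :f lies(X) / has limbs(X).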
LIVE was implemented in XSB Prolog [45] and the code of the system can be
found at :
http://www-lia.deis.unibo.it/Software/LIVE/.
8. Classification Accuracy
In this section, we compare the accuracy that can be obtained by means of a two-valued
definition of the target concept with the one that can be obtained by means
of a three-valued definition.
The accuracy of a two-valued definition over a set of testing examples is defined
as
(number of examples correctly classified by the theory) / (total number of testing examples)
The number of examples correctly classified is given by the number of positive
examples covered by the learned theory plus the number of negative examples not
covered by the theory. If Np is the number of positive examples covered by the
learned definition, Nn the number of negative examples covered by the definition,
Nptot the total number of positive examples and Nntot the total number of negative
examples, then the accuracy is given by:
Accuracy = (Np + (Nntot \Gamma Nn)) / (Nptot + Nntot)
When we learn in a three value setting a definition for a target concept and its
opposite, we have to consider a different notion of accuracy. In this case, some atoms
(positive or negative) in the testing set will be classified as undefined. Undefined
atoms are covered by both the definition learned for the positive concept and that
learned for the opposite one. Whichever is the right classification of the atom in
the test set, it is erroneously classified in the learned three-valued theory, but not
so erroneously as if it was covered by the opposite definition only. This explains
the weight assigned to undefined atoms (i.e., 0.5) in the new, generalized, notion of
accuracy:
(number of examples correctly classified by the theory) / (total number of testing examples)
+ 0.5 \Theta (number of examples classified as unknown) / (total number of testing examples)
In order to get a formula to calculate the accuracy, we first define a number of
figures that are illustrated in figure 4:
ffl Npp is the number of positive examples covered by the positive definition only,
ffl Npn is the number of positive examples covered by the negative definition only,
ffl Npu is the number of positive examples covered by both definitions (classified
as undefined),
ffl Nnn is the number of negative examples covered by the negative definition only,
ffl Nnp is the number of negative examples covered by the positive definition only,
ffl Nnu is the number of negative examples covered by both definitions (classified
as undefined).
The accuracy for the three-valued case can thus be defined as follows:
Accuracy 3 = (Npp + Nnn + 0.5 \Theta (Npu + Nnu)) / (Nptot + Nntot)
It is interesting to compare this notion of accuracy with that obtained by testing
the theory in a two-valued way. In that case the accuracy would be given by:
Accuracy 2 = (Npp + Npu + (Nntot \Gamma Nnp \Gamma Nnu)) / (Nptot + Nntot)
We are interested in situations where the accuracy for the three-valued case is higher
than the one for the two-valued case, i.e., those for which Accuracy 3 ? Accuracy 2 .
By rewriting this inequation in terms of the figures above, we get:
Npp + Nnn + 0.5 \Theta (Npu + Nnu) ? Npp + Npu + Nntot \Gamma Nnp \Gamma Nnu
This inequation can be rewritten as:
Nnu \Gamma 2 \Theta (Nntot \Gamma Nnn \Gamma Nnp \Gamma Nnu) ? Npu
Figure 4. Sets of examples for evaluating the accuracy of a three-valued hypothesis
where the expression Nntot \Gamma Nnn \Gamma Nnp \Gamma Nnu represents the number of negative examples
not covered by any definition (call it Nn not covered). Therefore, the accuracy
that results from testing the theory in a three-valued way improves the two-valued
one when most of the negative examples are covered by any of the two definitions,
the number of negative examples on which there is contradiction is particularly
high, and the number of positive examples on which there is contradiction is low.
When there is no overlap between the two definitions, and no undefinedness, the
accuracy is the same.
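A purely illustrative numerical check of these formulas (the figures are invented, not taken from an experiment): let Nptot = Nntot = 50, Npp = 40, Npu = 5, Npn = 5, Nnn = 30, Nnp = 5 and Nnu = 15, so that no negative example is left uncovered. Then Accuracy 3 = (40 + 30 + 0.5 \Theta 20) / 100 = 0.80, while Accuracy 2 = (45 + 30) / 100 = 0.75, and indeed Nnu \Gamma 2 \Theta Nn not covered = 15 ? Npu = 5, in agreement with the condition derived above.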
9. Related Work
The adoption of negation in learning has been investigated by many authors. Many
propositional learning systems learn a definition for both the concept and its op-
posite. For example, systems that learn decision trees, as c4.5 [42], or decision
rules, as the AQ family of systems [33], are able to solve the problem of learning a
definition for n classes, that generalizes the problem of learning a concept and its
opposite. However, in most cases the definitions learned are assumed to cover the
whole universe of discourse: no undefined classification is produced, any instance is
always classified as belonging to one of the classes. Instead, we classify as undefined
the instances for which the learned definitions do not give a unanimous response.
When learning multiple concepts, it may be the case that the descriptions learned
are overlapping. We have considered this case as non-desirable: this is reasonable
when learning a concept and its opposite but it may not be the case when learning
more than two concepts (see [17]). As it has been pointed out by [34], in some cases
it is useful to produce more than one classification for an instance: for example if
a patient has two diseases, his symptoms should satisfy the descriptions of both
diseases. A subject for future work will be to consider classes of paraconsistent logic
programs where the overlap of definitions for p and :p (and, in general, multiple
concepts) is allowed.
The problems raised by negation and uncertainty in concept-learning, and Inductive
Logic Programming in particular, were pointed out in some previous work
(e.g., [4, 13, 10]). For concept learning, the use of the CWA for target predicates
is no longer acceptable because it does not allow to distinguish between what is
false and what is undefined. De Raedt and Bruynooghe [13] proposed to use a
three-valued logic (later on formally defined in [10]) and an explicit definition of
the negated concept in concept learning. This technique has been integrated within
the CLINT system, an interactive concept-learner. In the resulting system, both a
positive and a negative definition are learned for a concept (predicate) p, stating,
respectively, the conditions under which p is true and those under which it is false.
The definitions are learned so that they do not produce an inconsistency on the
examples. Furthermore, CLINT does not produce inconsistencies also on unseen
examples because of its constraint handling mechanism, since it would assert the
constraint false / p( ~X); :p( ~X), and take care that it is never violated. Distinctly from
this system, we make sure that the two definitions do not produce inconsistency
on unseen atoms by making learned rules non-deterministic. This way, we are able
to learn definitions for exceptions to both concepts so that the information about
contradiction is still available. Another contradistinction is that we cope with and
employ simultaneously two kinds of negation, the explicit one, to state what is false,
and the default (defeasible) one, to state what can be assumed false.
The system LELP (Learning Extended Logic Programs) [25] learns ELPs under
answer-set semantics. LELP is able to learn non-deterministic default rules with a
hierarchy of exceptions. Hierarchical learning of exceptions can be easily introduced
in our system (see [27]). From the viewpoint of the learning problems that the
two algorithms can solve, they are equivalent when the background is a stratified
extended logic program, because then our and their semantics coincide. All the
examples shown in [25] are stratified and therefore they can be learned by our
algorithm and, vice versa, the example in section 5 can be learned by LELP. However,
when the background is a non-stratified extended logic program, the adoption of a
well-founded semantics gives a number of advantages with respect to the answer-set
semantics. For non-stratified background theories, answer-sets semantics does not
enjoy the structural property of relevance [15], like our WFSX does, and so they
cannot employ any top-down proof procedure. Furthermore, answer-set semantics
is not cumulative [15], i.e., if you add a lemma then the semantics can change, and
thus the improvement in efficiency given by tabling cannot be obtained. Moreover,
by means of WFSX, we have introduced a method to choose one concept when the
other is undefined which they cannot replicate because in the answer-set semantics
one has to compute eventually all answer-sets to find out if a literal is undefined.
The structure of the two algorithms is similar: LELP first generates candidate rules
from a concept using an ordinary ILP framework. Then exceptions are identified
(as covered examples of the opposite set) and rules specialized through negation as
default and abnormality literals, which are then assumed to prevent the coverage
of exceptions. These assumptions can be, in their turn, generalized to generate
hierarchical default rules. One difference between us and [25] is in the level of
generality of the definitions we can learn. LELP learns a definition for a concept
only from positive examples of that concept and therefore it can only employ a
bottom-up ILP technique and learn the LGS. Instead, we can choose whether to
adopt a bottom-up or a top-down algorithm, and we can learn theories of different
generality for different target concepts by integrating, in a declarative way, the
learned definitions into a single ELP. Another difference consists in that LELP
learns a definition only for the concept that has the highest number of examples
in the training set. It learns both positive and negative concepts only when the
number of positive examples is close to that of negative ones,
while we always learn both concepts.
Finally, many works have considered multi-strategy learners or multi-source learn-
ers. A multi-strategy learner combines learning strategies to produce effective hypotheses
(see [26]). A multi-source learner implements an algorithm for integrating
knowledge produced by the separate learners. Multi-strategy learning has been
adopted, for instance, for the improvement of classification accuracy [17], and to
equip an autonomous agent with capabilities to survive in an hostile environment
[11].
Our approach considers two separate concept-based learners, in order to learn
a definition for a concept and its opposite. Multiple (opposite) target concepts
constitute part of the learned knowledge base, and each learning element is able
to adopt a bottom-up or a top-down strategy in learning rules. This can be easily
generalized to learn definitions for n disjoint classes of concepts or for multiple agent
learning (see our [28]). Very often, the hypothesis can be more general than what is
required. The second step of our approach, devoted to the application of strategies
for eliminating learned contradictions, can be seen as a multi-source learner [26] or
a meta-level one [6], where the learned definitions are combined to obtain a non-contradictory
extended logic program. ELPs are used to specify combinations of
strategies in a declarative way, and to recover, in the the process, the consistency
of the learned theory.
10. Concluding Highlights
The two-valued setting that has been adopted in most work on ILP and Inductive
Concept Learning in general is not sufficient in many cases where we need to represent
real world data. This is for example the case of an agent that has to learn
the effect of the actions it can perform on the domain by performing experiments.
Such an agent needs to learn a definition for allowed actions, forbidden actions
and actions with an unknown outcome, and therefore it needs to learn in a richer
three-valued setting.
In order to achieve that in ILP, the class of extended logic programs under the
well-founded semantics with explicit negation (WFSX ) is adopted by us as the representation
language. This language allows two kinds of negation, default negation
plus a second form of negation called explicit, that is mustered in order to explicitly
represent negative information. Adopting extended logic programs in ILP
prosecutes the general trend in Machine Learning of extending the representation
language in order to overcome the recognized limitations of existing systems.
The programs that are learned will contain a definition for the concept and its
opposite, where the opposite concept is expressed by means of explicit negation.
Standard ILP techniques can be adopted to separately learn the definitions for the
concept and its opposite. Depending on the adopted technique, one can learn the
most general or the least general definition.
The two definitions learned may overlap and the inconsistency is resolved in a different
way for atoms in the training set and for unseen atoms: atoms in the training
set are considered exceptions, while unseen atoms are considered unknown. The
different behaviour is obtained by employing negation by default in the definitions:
default abnormality literals are used in order to consider exceptions to rules, while
non-deterministic rules are used in order to obtain an unknown value for unseen
atoms. We have shown how the adoption of extended logic programs in ILP allows
to tackle both learning in a three-valued setting and specify the combination of
strategies in a declarative way, also coping with contradiction and exceptions in the
process.
The system LIVE (Learning in a three-Valued Environment) has been developed
to implement the above mentioned techniques. In particular, the system learns a
definition for both the concept and its opposite and is able to identify exceptions and
treat them through default negation. The system is parametric in the procedure
used for learning each definition: it can adopt either a top-down algorithm, using
beam-search and a heuristic necessity stopping criterion, or a bottom-up algorithm,
that exploits the GOLEM system.
Notes
1. For definitions and foundations of LP, refer to [16]. For a recent state-of-the art of LP extensions
for non-monotonic reasoning, refer to [2].
2. For the most advanced, incorporating more recent theoretical developments, see the XSB
system at: http://www.cs.sunysb.edu/~sbprolog/xsb-page.html.
3. Notice that in the formula not lover(H; L) variable H is universally quantified, whereas L is
existentially quantified.
4. By non-contradictory program we mean a program which admits at least one WFSX model.
5. The undefined predicate can be implemented through negation NOT under CWA (NOT P
means that P is false whereas not means that P is false or undefined), i.e., undefined(P ) /
--R
Reasoning with Logic Programming
"Classical"
Logic programming and knowledge representation.
REVISE: An extended logic programming system for revising knowledge bases.
Abduction on 3
A survey on paraconsistent semantics for extended logic programs.
Interactive Theory Revision: An Inductive Logic Programming Approach.
Learning to survive.
Towards friendly concept-learners
On negation and three-valued logic in interactive concept learning
Interactive concept learning and constructive induction by analogy.
A classification-theory of semantics of normal logic programs: I
Prolegomena to logic programming and non-monotonic reasoning
Multistrategy learning: An analytical approach.
Cooperation of abduction and induction in logic programming.
The stable model semantics for logic programming.
Logic programs with classical negation.
Explicitly biased generalization.
Learning active classifiers.
Learning abductive and nonmonotonic logic programs.
Learning extended logic programs.
A framework for multistrategy learning.
Learning in a three-valued setting
Agents learning in a three-valued setting
Strategies in combined learning via logic programs.
A tool for efficient induction of recursive programs.
Generalizing updates: from models to programs.
Discovery classification rules using variable-valued logic system VL1
A theory and methodology of inductive learning.
Inverse entailment and Progol.
Machine invention of first-order predicates by inverting resolution
Efficient induction of logic programs.
Reducing misclassification costs.
Well founded semantics for logic programs with explicit negation.
A note on inductive generalization.
Analysis and visualization of classifier performance: Comparison under imprecise class and cost distribution.
Learning logical definitions from relations.
On closed-word data bases
The XSB Programmer's Manual Version 1.7.
The well-founded semantics for general logic programs
--TR
Explicitly biased generalization
Logic programs with classical negation
The well-founded semantics for general logic programs
Sub-unification
Interactive Concept-Learning and Constructive Induction by Analogy
Well founded semantics for logic programs with explicit negation
C4.5: programs for machine learning
SLX - a top-down derivation procedure for programs with explicit negation
Interactive theory revision
Strategies in Combined Learning via Logic Programs
A survey of paraconsistent semantics for logic programs
Reasoning with Logic Programming
Inductive Logic Programming
'Classical' Negation in Nonmonotonic Reasoning and Logic Programming
Learning Logical Definitions from Relations
Generalizing Updates
Abduction over 3-Valued Extended Logic Programs
Prolegomena to Logic Programming for Non-monotonic Reasoning
--CTR
Chongbing Liu , Enrico Pontelli, Nonmonotonic inductive logic programming by instance patterns, Proceedings of the 9th ACM SIGPLAN international symposium on Principles and practice of declarative programming, July 14-16, 2007, Wroclaw, Poland
Chiaki Sakama, Induction from answer sets in nonmonotonic logic programs, ACM Transactions on Computational Logic (TOCL), v.6 n.2, p.203-231, April 2005
Evelina Lamma , Fabrizio Riguzzi , Lus Moniz Pereira, Strategies in Combined Learning via Logic Programs, Machine Learning, v.38 n.1-2, p.63-87, Jan./Feb. 2000
Thomas Eiter , Michael Fink , Giuliana Sabbatini , Hans Tompits, Using methods of declarative logic programming for intelligent information agents, Theory and Practice of Logic Programming, v.2 n.6, p.645-709, November 2002 | inductive logic programming;multi-strategy learning;explicit negation;contradiction handling;non-monotonic learning |
338432 | Relations Between Regularization and Diffusion Filtering. | Regularization may be regarded as diffusion filtering with an discretization where one single step is used. Thus, iterated regularization with small regularization parameters approximates a diffusion process. The goal of this paper is to analyse relations between noniterated and iterated regularization and diffusion filtering in image processing. In the linear regularization framework, we show that with iterated Tikhonov regularization noise can be better handled than with noniterated. In the nonlinear framework, two filtering strategies are considered: the total variation regularization technique and the diffusion filter technique of Perona and Malik. It is shown that the Perona-Malik equation decreases the total variation during its evolution. While noniterated and iterated total variation regularization is well-posed, one cannot expect to find a minimizing sequence which converges to a minimizer of the corresponding energy functional for the PeronaMalik filter. To overcome this shortcoming, a novel regularization technique of the PeronaMalik process is presented that allows to construct a weakly lower semi-continuous energy functional. In analogy to recently derived results for a well-posed class of regularized PeronaMalik filters, we introduce Lyapunov functionals and convergence results for regularization methods. Experiments on real-world images illustrate that iterated linear regularization performs better than noniterated, while no significant differences between noniterated and iterated total variation regularization have been observed. | Introduction
Image restoration is among other topics such as optic flow, stereo, and shape-from-
shading one of the classical inverse problems in image processing and computer vision
[4]. The inverse problem of image restoration consists in recovering information about
the original image from incomplete or degraded data. Diffusion filtering has become a
popular and well-founded tool for restoration in the image processing community [25, 50],
while mathematicians have unified most techniques to treat inverse problems under the
theory of regularization methods [14, 19, 30, 44]. Therefore it is natural to investigate
relations between both approaches, as this may lead to a deeper understanding and a
synthesis of these techniques. This is the goal of the present paper.
We can base our research on several previous results. In the linear setting, Torre
and Poggio [45] emphasized that differentiation is ill-posed in the sense of Hadamard,
and applying suitable regularization strategies approximates linear diffusion filtering or -
equivalently - Gaussian convolution. Much of the linear scale-space literature is based on
the regularization properties of convolutions with Gaussians. In particular, differential
geometric image analysis is performed by replacing derivatives by Gaussian-smoothed
derivatives; see e.g. [16, 29, 42] and the references therein. In a very nice work, Nielsen
et al. [31] derived linear diffusion filtering axiomatically from Tikhonov regularization,
where the stabilizer consists of a sum of squared derivatives up to infinite order.
In the nonlinear diffusion framework, natural relations between biased diffusion and
regularization theory exist via the Euler equation for the regularization functional. This
Euler equation can be regarded as the steady-state of a suitable nonlinear diffusion
process with a bias term [34, 41, 9]. The regularization parameter and the diffusion
time can be identified if one regards regularization as time-discrete diffusion filtering
with a single implicit time step [43, 39]. A popular specific energy functional arises
from unconstrained total variation denoising [1, 8, 6]. Constrained total variation also
leads to a nonlinear diffusion process with a bias term using a time-dependent Lagrange
multiplier [38].
In spite of these numerous relations, several topics have not been addressed so far in
the literature:
ffl A comparison of the restoration properties of both approaches: Since regularization
corresponds to time-discrete diffusion filtering with a single time step, it follows
that iterated regularization with a small regularization parameter gives a better
approximation to diffusion filtering. An investigation whether iterated regularization
is better than noniterated leads therefore to a comparison between regularization
and diffusion filtering.
ffl Energy formulations for stabilized Perona-Malik processes: The Perona-Malik
filter is the oldest nonlinear diffusion filter [36]. Its ill-posedness has triggered
many researchers to introduce regularizations which have shown their use for image
restoration. However, no regularization has been found which can be linked to the
minimization of an appropriate energy functional.
ffl Lyapunov functionals for regularization: The smoothing and information-reducing
properties of diffusion filters can be described by Lyapunov functionals such as
decreasing L p norms, decreasing even central moments, or increasing entropy [50].
They constitute important properties for regarding diffusion filters as scale-spaces.
A corresponding scale-space interpretation of regularization methods where the
regularization parameter serves as scale parameter has been missing so far.
These topics will be discussed in the present paper. It is organized as follows. Section
2 explains the relations between variational formulations of diffusion processes and regularization
strategies. In Section 3 we first discuss the noise propagation for noniterated
and iterated Tikhonov regularization for linear problems. In the nonlinear framework,
well-posedness results for total variation regularization are reviewed and it is explained
why one cannot expect to establish well-posedness for the Perona-Malik filter. We will
argue that, if the Perona-Malik filter admits a smooth solution, however, then it will
be total variation reducing. A novel regularization will be introduced which allows to
construct a corresponding energy functional. Section 4 establishes Lyapunov functionals
for regularization methods which are in accordance with those for diffusion filtering. This
leads to a scale-space interpretation for linear and nonlinear regularization. In Section
5 we shall present some experiments with noisy real-world images, which compare the
restoration properties of noniterated and iterated regularization in the linear setting and
in the nonlinear total variation framework. Moreover, the novel Perona-Malik regularization
is juxtaposed to the regularization by Catt'e et al. [5]. The paper will be
concluded with a summary in Section 6.
Variational formulations of diffusion processes and
the connection to regularization methods
We consider a general diffusion process of the
on\Omega \Theta (0; 1[
Here g is a smooth function satisfying certain properties which will be explained in the
course of the
paper;\Omega ' R d is a bounded domain with piecewise Lipschitzian boundary
with unit normal vector n, and f ffi is a degraded version of the original image f := f
For the numerical solution of (2.1) one can use explicit or implicit or semi-implicit difference
schemes with respect to t.
The implicit scheme reads as
Here h ? 0 denotes the step-size in t-direction of the implicit difference scheme.
In the following we assume that g is measurable on [0; 1[ and there exists a differentiable
function - g on [0; 1) which satisfies -
g. Then the minimizer of the functional (for
given u(x; t))
Z
\Omega
satisfies (2.2) at time t + h. If the functional T is convex, then a minimizer of T is
uniquely characterized by the solution of the equation (2.2) with homogeneous Neumann
boundary conditions.
T (u) is a typical regularization functional consisting of the approximation functional
and the stabilizing functional
The weight h is called regularization
parameter. The case -
called regularization.
In the next section we summarize some results on regularization and diffusion filtering
and compare the theoretical results developed in both theories.
3 A survey on diffusion filtering and regularization
We have seen that each time step for the solution of the diffusion process (2.1) with
an implicit, t-discrete scheme is equivalent to the calculation of the minimizer of the
regularization functional (2.3). The numerical solution of the diffusion process with
an implicit, t-discrete iteration scheme is therefore equivalent to iterated regularization
where on has to minimize iteratively the set of functionals
Z
\Omega
Here u n is a minimizer of the functional T . If the functionals
T n are convex, then the minimizer of (3.1) denoted by u n is the approximation of the
solution of the diffusion process with an implicit, t-discrete method at time t
In the following we refer to iterated regularization if h That corresponds
to the solution of the diffusion process with an implicit, t-discrete method using a fixed
time step size
If the regularization parameters h n are adaptively chosen (this corresponds to the situation
that the time discretization in the diffusion process is changed adaptively), then the
method is called nonstationary regularization. For some recent results on nonstationary
Tikhonov regularization we refer to Hanke and Groetsch [24]; however, their results do
not fit directly into the framework of this paper. They deal with regularization methods
for the stable solution of operator equations
where I is a linear bounded operator from a Hilbert space X into a Hilbert space Y , and
they use nonstationary Tikhonov regularization
for the stable solution of the operator equation (3.2).
3.1 propagation of Tikhonov regularization with linear
unbounded operators
In this subsection we consider the problem of computing values of an unbounded operator
L. We will always denote by densely defined unbounded
linear operator between two Hilbert spaces H 1 and H 2 . A typical example is
The problem of computing values ill-posed in the sense
that small perturbations in f 0 may lead to data f ffi satisfying
but f
2 D(L), or even if f may happen that Lf ffi 6! Lf 0 as
the operator L is unbounded. Morozov has studied a stable method for approximating
the value Lf 0 when only approximate data f ffi is available [30]. This method takes as an
approximation to the vector y ffi
h minimizes the functional
over D(L).
The functional is strictly convex and therefore if D(L) is nonempty and convex there
exists a unique minimizer of the functional T TIK (u). Thus the method is well-defined.
For more background on the stable evaluation of unbounded operators we refer to [20].
Let then the sequence fu n g n-1 of minimizers of the family of
optimization problems
are identical to the semi-discrete approximations of the differential equation (2.1) at time
This shows
Methods for evaluating unbounded operators can be used for diffusion filtering
and vice versa. However the motivations differ: For evaluating unbounded
operators we solve the optimization and evaluate in a further step the unbounded
operator. In diffusion filtering we "only" have to solve the optimization
problem.
In the following we compare the error propagation in Tikhonov regularization with regularization
parameter h and the error propagation in iterated Tikhonov regularization of
order N with regularization parameter h=N . This corresponds to making an implicit,
t-discrete ansatz for a diffusion process with one step h and an implicit, t-discrete ansatz
with N steps of step h=N , respectively.
Tikhonov regularization with regularization parameter h reads as follows
where L is the adjoint operator to L (see e.g. [47] for more details). Tikhonov regularization
of order N with regularization parameter h=N reads as follows
I
Let L L be an unbounded operator with spectral values
such that - n !1 as n !1. Then
I
I
I
I
denotes the propagated error of the initial data f ffi , which remains
in uN - this corresponds to the error propagation in diffusion filtering with an implicit,
t-discrete method.
be the spectral family according to the operator L L. Then it follows that [47]
I
Z 1/
Using /
N!/
we get that
I
In noniterated Tikhonov regularization the error propagation is
For large values of - (i.e., for highly oscillating noise) the term (1 in (3.7) is
significantly larger than the term
in (3.6).
This shows that noise propagation is handled more efficiently by iterated Tikhonov regularization
than by Tikhonov regularization.
Above we analyzed the error of the (iterated) Tikhonov regularized solutions and not the
error in evaluating L at the Tikhonov regularized solutions. We emphasize that the less
noise is contained in a data set the better the operator L can be evaluated. Therefore
we conclude that the operator L can be evaluated more accurately with the method of
iterated Tikhonov regularization than with noniterated Tikhonov regularization. This
will be confirmed by the experiments in Section 5.
3.2 Well-posedness of regularization with nonlinear unbounded
operators
In this subsection we discuss some theoretical results on regularization with nonlinear
unbounded operators.
3.2.1 Well-posedness and convergence for total variation regularization
Total variation regularization goes back to Rudin, Osher and Fatemi [38] and has been
further analysed by many others, e.g. [1, 7, 6, 8, 12, 13, 27, 28, 43, 40, 46]. In the
unconstrained formulation of this method the data f^\delta is approximated by the minimizer
of the functional
T_{TV}(u) := \|u - f^\delta\|^2_{L^2(\Omega)} + 2h \, TV(u)   (3.8)
over the space of all functions with finite total variation norm,
where TV(u) := \int_\Omega |\nabla u| and, for functions that are merely of bounded variation,
\int_\Omega |\nabla u| := \sup \Big\{ \int_\Omega u \, \nabla\cdot v \, dx : v \in C_c^1(\Omega; R^d), \ |v(x)| \le 1 \text{ for all } x \in \Omega \Big\}.
This expression extends the usual definition of the total variation for smooth functions
to functions with jumps [22].
It is easy to see that a smooth minimizer of the functional T
Acar and Vogel [1] proved the following results concerning existence of a minimizer of
(3.8) and concerning stability and convergence of the minimizers:
Theorem 3.1 (Existence of a minimizer) Let f
minimizer u h 2 TV(\Omega\Gamma of (3.8) exists and is unique.
Theorem 3.2 (Stability) Let f
with respect to the L p -norm (1 -
is the minimizer of (3.8) and
is the minimizer of (3.8) where f ffi is replaced by f 0 .
Theorem 3.3 (Convergence) Let f
Then for h := h(ffi) satisfying
with respect to the L p -norm (1 -
It is evident that analogous results to Theorem 3.1, Theorem 3.2 and 3.3 also hold for
the minimizers of the iterated total variation regularization which consists of minimizing
a sequence of functionals
T^n_{TV}(u) := \|u - u_{n-1}\|^2_{L^2(\Omega)} + 2h \, TV(u), \qquad n = 1, 2, \ldots,
where u_0 := f^\delta and u_{n-1} denotes the minimizer of the functional T^{n-1}_{TV}.
This regularization technique corresponds to the implicit, t-discrete approximation of
the diffusion process (2.1) with the diffusivity associated with the total variation penalty.
3.2.2 The Perona-Malik filter
In the Perona-Malik filter [36] we have
g(s) = 1/(1+s) and \bar g(s) = \ln(1+s).
Perona-Malik regularization minimizes the family of functionals
T^n_{PM}(u) := \|u - u_{n-1}\|^2_{L^2(\Omega)} + h \int_\Omega \bar g(|\nabla u|^2) \, dx.   (3.11)
The functionals T^n_{PM} are not convex and therefore we cannot conclude that the minimizer
of (3.11) (if it exists) satisfies the first order optimality condition
(u - u_{n-1})/h = \nabla\cdot\big( g(|\nabla u|^2) \nabla u \big)
with homogeneous Neumann boundary data.
In the following we comment on some aspects of the Perona-Malik regularization tech-
nique. For the definitions of the Sobolev spaces W l;p and the notion of weak lower
semi-continuity we refer to [2].
1. Neumann boundary conditions:
Let\Omega be a domain with smooth boundary @
Using trace theorems (see e.g. [2]) it follows that the Neumann boundary data are
well-defined in L 2 (@
\Omega\Gamma for any function in W 3;2(\Omega\Gamma4 Suppose we could prove that
there exists a minimizer of the functional T n
PM , then this minimizer must satisfy
Z
\Omega
Elementary calculations show that any function u
(3.13). Therefore we cannot deduce from (3.13) that the minimizer is in any
Sobolev space W 1;p
1). Consequently, there exists no theoretical result
that the Neumann boundary conditions are well-defined.
2. Existence of a minimizer of the functional T n
convex, and therefore the functional T n
PM (u) is not weakly lower semi-continuous
on W 1;p
Therefore, there exists a sequence u k 2 W 1;p
(\Omega\Gamma with u k * u in W
1;p(\Omega\Gamma4 but
Consequently, we cannot expect that a minimizing sequence converges (in W
to a minimizer of the functional T n
PM . Thus the solution of the Perona-Malik
regularization technique is ill-posed on W
The diffusion process associated with the Perona-Malik regularization technique is \partial u/\partial t = \nabla\cdot( g(|\nabla u|^2) \nabla u ) with g as above.
The Perona-Malik diffusion filtering technique can be split up in a natural way into a
forward and a backward diffusion process:
Here
Both functions a and b are non-negative. In general the solution of a backward diffusion
equation is severely ill-posed (see e.g. [14]). We argue below that this nonlinear backward
diffusion is well-posed with respect to appropriate norms. In fact we argue that the
backward diffusion equation
satisfies
The intuitive reason for the validity of this is the following: Let v 2 C
2(\Omega \Theta [0; T ]) then
Using (3.17), (3.15), and integration by parts it follows that
R
R\Omega rv
R\Omega r:
\Deltav
2(\Omega \Theta [0; T ]) then the right hand side tends to zero as fi ! 0. These arguments
indicate that
Z
\Omega
Consequently the total variation of v(:; t) does not change in the course of the evolutionary
process (3.17). Indeed, (3.15) may be regarded as a total variation preserving
shock filter in the sense of Osher and Rudin [35].
The diffusion process
is a forward diffusion process which decreases the total variation during the evolution.
In summary we have argued that the Perona-Malik diffusion equation decreases the total
variation during the evolutionary process.
3.2.3 A regularized Perona-Malik filter
Although the ill-posedness of the Perona-Malik filter can be handled by applying regularizing
finite difference discretizations [51], it would be desirable to have a regularization
which does not depend on discretization effects. In this subsection we study a regularized
Perona-Malik filter which minimizes the family of functionals
T^n_{R\text{-}PM}(u) := \|u - u_{n-1}\|^2_{L^2(\Omega)} + h \int_\Omega \bar g(|\nabla L_\gamma u|^2) \, dx,
where L_\gamma is linear and compact from L^2(\Omega) into C^1(\bar\Omega). The applications which we have
in mind include the case that L_\gamma is a convolution operator with a smooth kernel.
In the following we prove that the functional T n
R-PM attains a minimium:
Theorem 3.4 The functional T^n_{R-PM} is weakly lower semi-continuous on L^2(\Omega).
Proof: Let fu s : s 2 Ng be a sequence in L
which satisfies
Then fu s g has a weakly convergent subsequence (which is again denoted by fu s g) with
limit u. Since L_\gamma is compact from L^2(\Omega) into C^1(\bar\Omega), the sequence \{\nabla L_\gamma u_s\}
converges uniformly to \nabla L_\gamma u. In particular, we have
\int_\Omega \bar g(|\nabla L_\gamma u_s|^2) \, dx \to \int_\Omega \bar g(|\nabla L_\gamma u|^2) \, dx.
Using the weak lower semi-continuity of the norm \|\cdot\|_{L^2(\Omega)} it follows that the functional
T^n_{R-PM} is weakly lower semi-continuous. q.e.d.
The minimizer of the regularized Perona-Malik functional satisfies
(u - u_{n-1})/h = L_\gamma^* \, \nabla\cdot\big( g(|\nabla L_\gamma u|^2) \, \nabla L_\gamma u \big).
The corresponding nonlinear diffusion process associated with this regularization technique
is
\partial u/\partial t = L_\gamma^* \, \nabla\cdot\big( g(|\nabla L_\gamma u|^2) \, \nabla L_\gamma u \big).   (3.20)
Regularized Perona-Malik filters have been considered in the literature before [3, 5, 32,
48, 50]. Catté et al. [5] for instance investigated the nonlinear diffusion process
\partial u/\partial t = \nabla\cdot\big( g(|\nabla L_\gamma u|^2) \, \nabla u \big).   (3.21)
This technique (as well as other previous regularizations) does not have a corresponding
formulation as an optimization problem. The differences between (3.20) and (3.21) will
be explained in Section 5.
4 Lyapunov functionals for regularization methods
Lyapunov functionals play an important role in continuous diffusion filtering (see [49,
50]). In order to introduce Lyapunov functionals of regularization methods, we first give
a survey on Lyapunov functionals in diffusion filtering. We consider the diffusion process
(here and in the sequel, \Omega will be a domain with piecewise smooth boundary)
\partial u/\partial t = \nabla\cdot\big( g(|\nabla L_\gamma u|^2) \, \nabla u \big)  on \Omega \times (0, T),
\partial_n u = 0  on \partial\Omega \times (0, T),
u(x, 0) = f(x)  on \Omega.   (4.1)
We assume that the following assumptions hold:
1. f \in L^\infty(\Omega) with a := ess inf_{x\in\Omega} f and b := ess sup_{x\in\Omega} f.
2. L_\gamma is a compact operator from L^2(\Omega) into C^p(\bar\Omega) for any p \in N.
3. T > 0.
4. For all w \in L^\infty(\Omega) with |w| \le K on \Omega, there exists a positive lower bound
\nu(K) for g.
The regularizing operator L fl may be skipped in (4.1), if one assumes that -
convex from R d to R. Moreover, it is also possible to generalize (4.1) to the anisotropic
case where the diffusivity g is replaced by a diffusion tensor [50].
Under the preceding assumptions it can be shown that (4.1) is well-posed (see [5, 50]):
Theorem 4.1 The equation (4.1) has a unique solution u(x, t) which satisfies
u \in C([0, T]; L^2(\Omega)) \cap C^\infty(\bar\Omega \times (0, T]).
The solution fulfills the extremum principle
a \le u(x, t) \le b  on \Omega \times (0, T].
For fixed t the solution depends continuously on f with respect to \|\cdot\|_{L^2(\Omega)}.
This diffusion process leads to the following class of Lyapunov functionals [50]:
Theorem 4.2 Suppose that u is a solution of (4.1) and that assumptions 1 - 4 are
satisfied. Let Mf := (1/|\Omega|) \int_\Omega f(x) \, dx denote the average grey value of f. Then the following properties hold:
(a) (Lyapunov functionals) For all r \in C^2[a, b] with r'' \ge 0 on [a, b], the function
V(t) := \int_\Omega r(u(x, t)) \, dx
is a Lyapunov functional:
1. V(t) \ge \int_\Omega r(Mf) \, dx for all t \ge 0.
2. V \in C[0, \infty) \cap C^1(0, \infty) and V'(t) \le 0 for all t > 0.
Moreover, if r'' > 0 on [a, b], then V is a strict Lyapunov functional:
3. V(t) = \int_\Omega r(Mf) \, dx if and only if u(x, t) = Mf a.e.
on \Omega.
4. If t > 0, then V'(t) = 0 only if u(x, t) = Mf
on \Omega.
5. V(t) = V(0) for some t > 0 if and only if f = Mf a.e.
on \Omega and u = Mf
on \Omega \times (0, T].
(b) (Convergence)
1. lim_{t\to\infty} \|u(x, t) - Mf\|_{L^p(\Omega)} = 0 for p \in [1, \infty).
2. If \Omega \subset R, then the convergence lim_{t\to\infty} u(x, t) = Mf is uniform.
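A concrete instance of Theorem 4.2 (a standard example rather than an additional claim): the convex choice r(s) = s^2 yields the energy
V(t) = \int_\Omega u^2(x, t) \, dx,
which decreases monotonically during diffusion filtering and converges to |\Omega| (Mf)^2, the value attained by the constant image of average grey level; similarly, r(s) = s \ln s (for positive images) shows that the entropy -\int_\Omega u \ln u \, dx increases.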
In the sequel we introduce Lyapunov functionals of regularization methods.
In the beginning of this section we discuss existence and uniqueness of the minimizer of
the regularization functional in H^1(\Omega),
I(u) := \|u - f\|^2_{L^2(\Omega)} + h \int_\Omega \bar g(|\nabla u|^2) \, dx.   (4.6)
Lemma 4.3 Let \Omega \subset R^d, d \ge 1. Moreover, let \bar g satisfy:
\bar g(\cdot) is in C^0(K) for any compact K \subset [0, \infty[,   (4.7)
x \mapsto \bar g(|x|^2) is convex from R^d to R.   (4.8)
Moreover, we assume that there exists a constant c > 0 such that
\bar g(|x|^2) \ge c |x|^2 for all x \in R^d.   (4.9)
Then the minimizer of (4.6) exists and is unique in H^1(\Omega).
Proof: By virtue of (4.9) it follows that
Z
\Omega
Z
\Omega
Suppose now that u n is a sequence such that I(u n ) converges to the minimum of the
functional I(:) in H 1
(\Omega\Gamma7 From (4.10) it follows that u n has a weakly convergent subsequence
in H 1
(\Omega\Gamma3 which we also denote by u n ; the weak limit will be denoted by u . Since
the functional
lower semi continuous in H
(see
[11, 10]), and thus Z
\Omega
Z
\Omega
Thanks to the the Sobolev embedding theorem (see [2]) it follows that the functional
is weakly lower semi continuous on H
Consequently
and thus u is a minimizer of I in H
1(\Omega\Gamma5 Suppose now that u 1 and u 2 are two minimizers
of the functional I. Then, from the optimality condition it follows that
(4.
And thus the minimizer of I is unique. q.e.d
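For orientation we write out the simplest admissible case (the Tikhonov choice of Example 4.11 below), assuming (4.6) has the form \|u - f\|^2_{L^2(\Omega)} + h \int_\Omega \bar g(|\nabla u|^2) \, dx:
\bar g(s) = s: \qquad I(u) = \|u - f\|^2_{L^2(\Omega)} + h \int_\Omega |\nabla u|^2 \, dx,
and the unique minimizer u_h is the weak solution of
u_h - h \Delta u_h = f \ \text{in } \Omega, \qquad \partial_n u_h = 0 \ \text{on } \partial\Omega,
i.e. a single implicit time step of size h for the linear diffusion equation.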
The minimizer of (4.6) will be denoted by u h in the remaining of this paper.
In the following we establish the average grey level invariance of regularization methods.
Theorem 4.4 Let (4.7), (4.8), (4.9) hold. Then for different values of h the minimizers
of (4.6) are grey-level invariant, i.e., for h ? 0
Z
\Omega
Z
\Omega
Proof: Elementary calculations show that the minimizer of (4.6) satisfies for all v 2
Taking the second term vanishes and the assertion follows. q.e.d.
In the following we establish some basic results on regularization techniques. As we will
show the proofs of the following results can be carried out following the ideas of the
corresponding results in the book of Morozov [30]. However Morozov's results can not
be applied directly since they are only applicable in the case that - g(jxj 2 which is
not sufficient for the presentation of this paper. Later these results are used to establish
a family of Lyapunov functionals for regularization methods.
Lemma 4.5 Let (4.7), (4.8), (4.9) hold. Then for any h ? 0
and for
Proof: If - g(j:j 2 ) is convex, then g(jsj 2 )s is monotone (see e.g. [11]), i.e., for all s; t 2 R d
1. First we consider the case h ? 0: from (4.13) it follows by using the notation
that
Thus using the Cauchy-Schwarz inequality and the identity (4.14) it follows that
which shows the continuity of u h .
2. If There exists a sequence f n 2 H
Consequently
for any h ? 0 it follows from the definition of a minimum of the Tikhonov-like
functional it follows that
Z
\Omega
Consequently by taking the limit h ! 0 it follows that for any n 2 N
lim
which shows the assertion.
q.e.d.
In the following we present some monotonicity results for the regularized solutions.
Lemma 4.5 implies that we can set u causing any confusion.
Lemma 4.6 Let (4.7), (4.8), (4.9) hold. Then
monotonically decreasing
in h and ku monotonically increasing in h.
Proof: Using the definition of the regularized solution it follows
Z
\Omega
Z
\Omega
Z
\Omega
Z
\Omega
'Z
\Omega
Z
\Omega
and therefore, for t ? 0,
Z
\Omega
Z
\Omega
This shows the monotonicity of the functional
Using very similar arguments
it can be shown that ku
(\Omega\Gamma is monotonically increasing in h. q.e.d.
In the following we analyze the behaviour of the functionals
Lemma 4.7 Let (4.7), (4.8), (4.9) hold. Then, for h ! 1 the regularized solution
converges (with respect to the L 2 -norm) to the solution of the optimization problem
under the constraint Z
\Omega
Proof: The proof is similar to the proof in the book of Morozov [30] (p.35) and thus
omitted. q.e.d.
In the following lemma we establish the boundedness of the regularized solution. For
the proof of this result we utilize Stampacchia's Lemma (see [23]).
Lemma 4.8 Let B be an open domain, u a function in H 1 (B) and a a real number.
Then u
Z
Z
We are using this result to prove that each regularized solution lies between the minimal
and maximal value of the data f .
Lemma 4.9 Let (4.7), (4.8), (4.9) hold. Moreover, let
\bar g be monotone in [0, \infty[.   (4.15)
If f \in L^\infty(\Omega), then for any h > 0 the regularized solution satisfies
a := ess inf_{x\in\Omega} f(x) \le u_h(x) \le ess sup_{x\in\Omega} f(x) =: b.   (4.16)
Proof: We verify that the maximum of u h is less than b. The corresponding assertion
for the minimum values can be proven analogously. Let u b
g, then from
Lemma 4.8 and the assumption (4.15) it follows that
Z
\Omega
Z
\Omega
Since
it follows from the definition of a regularized solution that u h (x) - b. q.e.d.
Next we establish the announced family of Lyapunov functionals.
Theorem 4.10
be as in (4.16). Morover, let (4.7),
(4.8), (4.9), and (4.15) be satisfied. Suppose that u h is a solution of (4.6). Then the
following properties hold
(a) (Lyapunov functionals for regularization methods) For all r 2 C 2 [a; b] with r 00 - 0,
the function
Z
\Omega
is a Lyapunov functional for a regularization method: Let
Z
\Omega
Then
1.
2.
Moreover, if r 00 ? 0 on [a; b], then V strict Lyapunov functional:
3. OE(u h only if u
on\Omega .
4. if h ? 0, then DV only if u
on\Omega .
5. V
on\Omega and u
\Omega \Theta (0; H]:
(b) (Convergence)
d=1: u h converges uniformly to Mf for h !1
d=2: lim
d=3: lim
Proof:
(a) 1. Since r 2 C 2 [a; b] with r 00 - 0 on [a; b], we know that r is convex on [a; b].
Using the gray level invariance and Jensen's inequality it follows
R\Omega r
R\Omega u h (x) dx
dy
R\Omega u h (x) dx) dy
2. From Lemma 4.5 it follows that V 2 C[0; 1[. Setting
from (4.13) and (4.8) that
The right hand side is negative since r is convex.
We represent in the following way
\Omega
R
\Omega
R
\Omega
From (4.19) and the convexity of r it follows that the last two terms in the
above chain of inequalities are negative. Thus the assertion is proved.
3. Let OE(u h us now show that the estimate (4.18) implies that
const on \Omega\Gamma Suppose that u h 6= c Since u h 2 H
1(\Omega\Gamma2 there exists a
j\Omega
Z
Z
This assertion follows from the Poincare inequality for functions in Sobolev
spaces [15]. From the strict convexity of r it follows that
r
R
\Omega u h dx
If we utilize this result in (4.18) we observe that for h ? 0 OE(u h
implies that u const
on\Omega . Thanks to the average grey value invariance
we finally obtain u
on\Omega .
We turn to the case From (1.) and (2.) it follows that
If
Thus we have that for all ' ? 0 . Using the continuity of u ' with
respect to ' 2 [0; 1[ (cf. Lemma 4.5) the assertion follows.
4. The proof is analogous to the proof of the (iv)-assertion in Theorem 3 in [50].
5. Suppose that V (0), then from (2.) it follows that
const on [0; H] :
it follows from (4.) that u Using
the continuity of u h with respect to h 2 [0; 1[ (cf. Lemma 4.5) the assertion
follows. The converse direction is obvious.
(b) From Lemma 4.7 and assumption 4.9 it follows that
Z
\Omega
This shows that
From the Sobolev embedding theorem it follows in particular that for h !1
d=1: u h converges uniformly to Mf
(note that we assumed
that\Omega is
bounded domain)
(note that we assumed
that\Omega is
bounded domain).
q.e.d.
In Theorem 4.10 we obtained similar results as for Lyapunov functional of diffusion
operators (see [50]). In (2.) of Theorem 4.10 the difference of Lyapunov functionals for
diffusion processes and regularization methods becomes evident. For Lyapunov functionals
in diffusion processes we have V'(t) \le 0 and in regularization processes we have DV(h) \le 0,
where DV(h) is obtained from V'(t) by making a time discrete ansatz at time 0.
We note that this is exactly the way we compared diffusion filtering and regularization
techniques in the whole paper. It is therefore natural that the role of the time derivative
in diffusion filtering is replaced by the time discrete approximation around 0.
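Written out, the correspondence reads as follows (a sketch; we read the "time discrete ansatz at time 0" as the difference quotient of V):
V'(0^+) = \lim_{t \to 0^+} \frac{V(t) - V(0)}{t} \le 0 \quad \text{(diffusion filtering)}, \qquad DV(h) = \frac{V(h) - V(0)}{h} = \frac{1}{h}\Big( \int_\Omega r(u_h)\,dx - \int_\Omega r(f)\,dx \Big) \le 0 \quad \text{(regularization)}.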
Example 4.11 In this example we study different regularization techniques which have
been used for denoising of images:
1. Tikhonov regularization: Here we have \bar g(|u|^2) = |u|^2. In this case the assumptions
(4.7), (4.8), (4.9) and (4.15) are satisfied.
2. Total variation regularization: Here we have \bar g(|u|^2) = |u|. In this case the
assumption (4.9) is not satisfied.
However, for the modified versions, proposed by Ito and Kunisch [27], where the
functional is replaced by
(4.7), (4.8), (4.9), and (4.15) are satisfied.
For the functional [1, 9]
the assumption (4.9) is not satisfied. For the modified version
studied in [33], the assumptions (4.7), (4.8), (4.9), and (4.15) are satisfied.
For the functional
the assumptions (4.7), (4.8), (4.9), and (4.15) are satisfied. This method has been
proposed by Geman and Yang [17] and was studied extensively by Chambolle and
Lions [8] (see also [33]).
3. Convex Nonquadratic Regularizations: The functional used by Schnörr [41]
satisfies (4.7), (4.8), (4.9), and (4.15), whereas the Green functional [18]
violates the assumption (4.9).
5 Experiments
In this section we illustrate some of the previous regularization strategies by applying
them to noisy real-world images.
Regularization was implemented by using central finite differences. In the linear case
this leads to a linear system of equations with a positive definite system matrix. It
was solved iteratively by a Gauß-Seidel algorithm. It is not difficult to establish error
bounds for its solution, since the residue can be calculated and the condition number of
the matrix may be estimated using Gerschgorin's theorem. The Gauß-Seidel iterations
were stopped when the relative error in the Euclidean norm was smaller than 0.0001.
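For concreteness, the following sketch (not the original implementation; the grid spacing is set to one and boundary handling is simplified to replication) performs Gauß-Seidel sweeps for the discretized linear problem u - h*Laplacian(u) = f:

import numpy as np

def gauss_seidel_linear(f, h, tol=1e-4, max_sweeps=500):
    # Solve u - h*Lap(u) = f (5-point Laplacian, unit grid spacing,
    # replicated boundaries as a stand-in for Neumann conditions).
    u = f.astype(float).copy()
    for _ in range(max_sweeps):
        u_old = u.copy()
        up = np.pad(u, 1, mode='edge')
        fp = np.pad(f.astype(float), 1, mode='edge')
        for i in range(1, up.shape[0] - 1):
            for j in range(1, up.shape[1] - 1):
                nb = up[i-1, j] + up[i+1, j] + up[i, j-1] + up[i, j+1]
                up[i, j] = (fp[i, j] + h * nb) / (1.0 + 4.0 * h)
        u = up[1:-1, 1:-1]
        if np.linalg.norm(u - u_old) <= tol * np.linalg.norm(u):
            break
    return u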
Discretizing stabilized total variation regularization (in which a small positive constant is
added under the square root of the total variation integrand) leads to a nonlinear system
of equations. It was numerically solved by
combining convergent fixed point iterations as outer iterations [13] with inner iterations
using the Gauß-Seidel algorithm for solving the linear system of equations. The fixed
point iteration turned out to converge quite rapidly, such that not more than 20 iterations
were necessary.
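A corresponding sketch of the nonlinear case (schematic only: the stabilization constant, the discretization of the diffusivity and the vectorized Jacobi-type inner sweeps are simplifications of the fixed point / Gauß-Seidel scheme described above):

import numpy as np

def tv_fixed_point(f, h, eps=1e-2, outer=20, inner=50):
    # Lagged-diffusivity fixed point for u - h*div(g*grad(u)) = f,
    # with g = 1/sqrt(|grad u|^2 + eps^2) frozen at the previous outer iterate.
    u = f.astype(float).copy()
    for _ in range(outer):
        ux = np.diff(u, axis=0, append=u[-1:, :])
        uy = np.diff(u, axis=1, append=u[:, -1:])
        g = 1.0 / np.sqrt(ux**2 + uy**2 + eps**2)     # frozen diffusivity
        for _ in range(inner):                        # inner linear sweeps
            gp = np.pad(g, 1, mode='edge')
            up = np.pad(u, 1, mode='edge')
            wN, wS = gp[:-2, 1:-1], gp[2:, 1:-1]
            wW, wE = gp[1:-1, :-2], gp[1:-1, 2:]
            nb = (wN * up[:-2, 1:-1] + wS * up[2:, 1:-1] +
                  wW * up[1:-1, :-2] + wE * up[1:-1, 2:])
            u = (f.astype(float) + h * nb) / (1.0 + h * (wN + wS + wW + wE))
    return u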
Figure
5.1 shows three common test images and a noisy variant of each of them:
an outdoor scene with a camera, a magnetic resonance (MR) image of a human head,
and an indoor scene. Gaussian noise with zero mean has been added. Its variance was
chosen to be a quarter, equal and four times the image variance, respectively, leading to
signal-to-noise (SNR) ratios of 4, 1, and 0.25.
The goal of our evaluation was to find out which regularization leads to restorations
which are closest to the original images. We applied linear and total variation regularization
to the three noisy test images, used 1, 4, and 16 regularization steps and varied
the regularization parameter until the optimal restoration was found. The distance to
the original image was computed using the Euclidean norm. The results are shown in
Table
1, as well as in Figs. 5.2 and 5.3. This gives rise to the following conclusions:
ffl In all cases, total variation regularization performed better than Tikhonov regular-
ization. As expected, total variation regularization leads to visually sharper edges.
The TV-restored images consist of piecewise almost constant patches.
ffl In the linear case, iterated Tikhonov regularization produced better restorations
than noniterated. Visually, noniterated regularization resulted in images with more
high-frequent fluctuations. This is in complete agreement with the theoretical
considerations in our paper. Improvements caused by iterating the regularization
were mainly seen between 1 and 4 iterations. Increasing the iteration number to 16
did hardly lead to further improvements, in one case the results were even slightly
worse.
ffl It appears that the theoretical and experimental results in the linear setting do
not carry over to the nonlinear case: total variation regularization
was extremely robust, different iteration numbers gave similar results,
and the optimal total regularization parameter did not depend much on the iteration
number. Thus, in practice one should give the preference to the faster
method. In our case iterated regularization was slightly more efficient, since it
led to matrices with smaller condition numbers and the Gauß-Seidel algorithm
converged faster. Using for instance multigrid methods, which solve the linear
systems with a constant effort for all condition numbers, would make noniterated
total variation regularization favourable.
In a final experiment we juxtapose the regularizations (3.20) and (3.21) of the Perona-Malik
filter. Both processes have been implemented using an explicit finite difference
scheme. The results using the MR image from Figure 5.1(c) are shown in Figure 5.4,
where different values for γ, the standard deviation of the Gaussian, have been used. For
small values of γ, both filters produce rather similar results, while larger values lead to
a completely different behaviour. For (3.20), the regularization smoothes the diffusive
flux, so that it becomes close to 0 everywhere, and the image remains unaltered. The
regularization in (3.21), however, creates a diffusivity which gets closer to 1 for all image
locations, so that the filter creates blurry results resembling linear diffusion filtering.
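The qualitative difference between the two regularizations can be reproduced with a few explicit time steps; the sketch below follows the forms written out in (3.20) and (3.21) above (smoothing the complete diffusive flux versus smoothing only the argument of the diffusivity), uses g(s) = 1/(1+s), approximates the adjoint L_γ^* by the (symmetric) Gaussian itself, and all numerical parameters are illustrative:

import numpy as np
from scipy.ndimage import gaussian_filter

def grad(u):
    return np.gradient(u, axis=0), np.gradient(u, axis=1)

def div(px, py):
    return np.gradient(px, axis=0) + np.gradient(py, axis=1)

def step_320(u, gamma, tau):
    # (3.20)-type: diffusivity and flux built from the smoothed image,
    # and the flux is smoothed again (adjoint of the convolution).
    sx, sy = grad(gaussian_filter(u, gamma))
    g = 1.0 / (1.0 + sx**2 + sy**2)
    fx = gaussian_filter(g * sx, gamma)
    fy = gaussian_filter(g * sy, gamma)
    return u + tau * div(fx, fy)

def step_321(u, gamma, tau):
    # (3.21)-type (Catte et al.): only the argument of g is smoothed.
    sx, sy = grad(gaussian_filter(u, gamma))
    g = 1.0 / (1.0 + sx**2 + sy**2)
    gx, gy = grad(u)
    return u + tau * div(g * gx, g * gy)

# For large gamma, step_320 leaves the image almost unaltered (the smoothed
# flux is close to zero), whereas step_321 approaches linear diffusion (g ~ 1).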
6 Summary
The goal of this paper was to investigate connections between regularization theory
and the framework of diffusion filtering. The regularization methods we considered were
Tikhonov regularization, total variation regularization, and we focused on linear diffusion
filters as well as regularizations of the nonlinear diffusion filter of Perona and Malik. We
have established the following results:
ffl We analyzed the restoration properties of iterated and noniterated regularization
both theoretically and experimentally. While linear regularization can be improved
by iteration, there is no clear evidence that this is also the case in the nonlinear
setting.
ffl We introduced an alternative regularization of the Perona-Malik filter. In contrast
to previous regularization, it allows a formulation as a minimizer of a suitable
energy functional.
ffl We have established Lyapunov functionals and convergence results for regularization
methods using a similar theory as for nonlinear diffusion filtering.
These results can be regarded as contributions towards a deeper understanding as well
as a better justification of both paradigms. It appears interesting to investigate the
following topics in the future:
Table 1: Best restoration results for the different methods and images. The total regularization parameter for N iterations with parameter h is denoted by t = Nh, and the distance describes the average Euclidean distance per pixel between the restored and the original image without noise.
image   regularization          t      distance
camera  linear, 1 iteration     0.82   15.41
camera  linear, 4 iterations    0.54   15.06
camera  linear, 16 iterations   0.48   15.02
MR      linear, 1 iteration     2.05   23.09
MR      linear, 4 iterations    1.16   22.62
MR      linear, 16 iterations   1.02   22.64
office  linear, 1 iteration     5.7    31.76
office  linear, 4 iterations    3.3    30.47
office  linear, 16 iterations   2.9    30.45
camera  TV, 1 iteration         13.2   11.92
camera  TV, 4 iterations        12.8   12.10
camera  TV, 16 iterations       12.4   12.19
MR      TV, 4 iterations        33.5   20.52
MR      TV, 16 iterations       33     20.65
office  TV, 1 iteration         102    28.66
office  TV, 4 iterations        104    27.99
office  TV, 16 iterations       106    28.05
Figure 5.1: Test images. (a) Top Left: Camera scene. (b) Top Right: Gaussian noise added, SNR=4. (c) Middle Left: Magnetic resonance image. (d) Middle Right: Gaussian noise added, SNR=1. (e) Bottom Left: Office scene. (f) Bottom Right: Gaussian noise added, SNR=0.25.
Figure 5.2: Optimal restoration results for Tikhonov regularization.
Figure 5.3: Optimal restoration results for total variation regularization.
Figure 5.4: Comparison of two regularizations of the Perona-Malik filter. (a) Top Left: Filter (3.20), γ = 0.5. (b) Top Right: Filter (3.21), γ = 0.5. (c) Middle Left: Filter (3.20), γ = 2. (d) Middle Right: Filter (3.21), γ = 2. (e) Bottom Left: Filter (3.20), γ = 8. (f) Bottom Right: Filter (3.21), γ = 8.
ffl Regularization scale-spaces. So far, scale-space theory was mainly expressed in
terms of parabolic and hyperbolic partial differential equations. Since scale-space
methods have contributed to various interesting computer vision applications, it
seems promising to investigate similar applications for regularization methods.
Fully implicit methods for nonlinear diffusion filters using a single time step. This is
equivalent to regularization and may be highly useful, if fast numerical techniques
for solving the arising nonlinear systems of equations are applied.
--R
Analysis of bounded variation penalty methods for ill-posed problems
Coll T.
A nonlinear primal-dual method for total-variation based image restoration
Total variation blind deconvolution
Image recovery via total variation minimization and related problems
Two deterministic half-quadratic regularization algorithms for computed imaging
Weak Continuity and Weak Lower Semicontinuity of Non-Linear Functionals
Direct Methods in the Calculus of Variations
Analysis of regularized total variation penalty methods for denoising
Convergence of an iterative method for total variation denoising
Regularization of Inverse Problems
Measure Theory and Fine Properties of Functions
Image Structure
Nonlinear image recovery with half-quadratic regulariza- tion
Bayesian reconstructions from emission tomography data using a modified EM algorithm
The Theory of Tikhonov regularization for Fredholm Equations of the First Kind
Spectral methods for linear inverse problems with unbounded operators
Optimal order of convergence for stable evaluation of differential operators
Minimal Surfaces and Functions of Bounded Variation
Elliptic partial differential equations of second order 2nd
Nonstationary iterated Tikhonov regularization J.
Introduction to Spectral Theory in Hilbert Space
An active set strategy for image restoration based on the augmented Lagrangian formulation
A computational algorithm for minimizing total variation in image enhancement
Methods for Solving Incorrectly Posed Problems
Nonlinear image filtering with edge and corner enhancement
Least squares and bounded variation regularization
Scale space and edge detection using anisotropic diffusion
Functional Analysis
Nonlinear total variation based noise removal algorithms
Stable evaluation of differential operators and linear and nonlinear milti-scale filtering
Denoising with higher order derivatives of bounded variation and an application to parameter estimation
Gaussian Scale-Space Theory
Relation of regularization parameter and scale in total variation based image denoising
Solutions of Ill-Posed Problems
On edge detection
Lineare Operatoren in Hilbertr-aumen
Anisotropic diffusion filters for image processing based quality control
A review of nonlinear diffusion filtering
Anisotropic Diffusion in Image Processing
Partielle Differentialgleichungen
--TR
On edge detection
Direct methods in the calculus of variations
Scale-Space and Edge Detection Using Anisotropic Diffusion
Feature-oriented image enhancement using shock filters
Biased anisotropic diffusion
Spectral methods for linear inverse problems with unbounded operators
Nonlinear Image Filtering with Edge and Corner Enhancement
Nonlinear total variation based noise removal algorithms
A degenerate pseudoparabolic regularization of a nonlinear forward-backward heat equation arising in the theory of heat and mass exchange in stably stratified turbulent shear flow
Variational methods in image segmentation
Convergence of an Iterative Method for Total Variation Denoising
Regularization, Scale-Space, and Edge Detection Filters
Denoising with higher order derivatives of bounded variation and an application to parameter estimation
Nonstationary iterated Tikhonov regularization
Geometry-Driven Diffusion in Computer Vision
Scale-Space Theory in Computer Vision
Gaussian Scale-Space Theory
A Review of Nonlinear Diffusion Filtering
Scale-Space Properties of Regularization Methods
--CTR
Markus Grasmair, The Equivalence of the Taut String Algorithm and BV-Regularization, Journal of Mathematical Imaging and Vision, v.27 n.1, p.59-66, January 2007
Walter Hinterberger , Michael Hintermüller , Karl Kunisch , Markus Von Oehsen , Otmar Scherzer, Tube Methods for BV Regularization, Journal of Mathematical Imaging and Vision, v.19 n.3, p.219-235, November | regularization;image restoration;total variation denoising;diffusion filtering;inverse problems
338435 | The Topological Structure of Scale-Space Images. | Abstract: We investigate the deep structure of a scale-space image. The emphasis is on topology, i.e. we concentrate on critical points (points with vanishing gradient) and top-points (critical points with degenerate Hessian) and monitor their displacements, respectively generic morsifications, in scale-space. Relevant parts of catastrophe theory in the context of the scale-space paradigm are briefly reviewed, and subsequently rewritten into coordinate independent form. This enables one to implement topological descriptors using a conveniently defined coordinate system. | Introduction
1.1 Historical Background
A fairly well understood way to endow an image with a topology is to embed it into a one-parameter
family of images known as a "scale-space image". The parameter encodes "scale" or "resolution"
(coarse/fine scale means low/high resolution, respectively).
Among the simplest is the linear or Gaussian scale-space model. Proposed by Iijima [13] in the
context of pattern recognition it went largely unnoticed for a couple of decades, at least outside the
Japanese scientific community. Another early Japanese contribution is due to Otsu [32]. The Japanese
accounts are quite elegant and can still be regarded up-to-date in their way of motivating Gaussian
scale-space; for a translation, the reader is referred to Weickert, Ishikawa, and Imiya [41]. The earliest
accounts in the English literature are due to Witkin [42] and Koenderink [18]. Koenderink's account
is particularly instructive for the fact that the argumentation is based on a precise notion of causality
(in the resolution domain), which allows one to interpret the process of blurring as a well-defined
generalisation principle akin to similar ones used in cartography, and also for the fact that it pertains to
topological structure.
1.2 Scale and Topology
The quintessence is that scale provides topology. In fact, by virtue of the scale degree of freedom
one obtains a hierarchy of topologies enabling transitions between coarse and fine scale descriptions.
This is often exploited in coarse-to-fine algorithms for detecting and localising relevant features (edges,
corners, segments, etc.
The core problem-the embodiment of a decent topology-had already been addressed by the mathematical
community well before practical considerations in signal and image analysis boosted the development
of scale-space theory. Of particular interest is the theory of tempered distributions formulated
by Laurent Schwartz in the early fifties [34]. Indeed, the mere postulate of positivity imposed on the
admissible test functions proposed by Schwartz, together with a consistency requirement 1 suffices to
1 The consistency requirement, the details of which are stated elsewhere [5, 6], imposes a convolution-algebraic structure on
admissible filter classes in the linear case at hand. Both Schwartz' ``smooth functions of rapid decay'' as well as Koenderink's
Gaussian family are admissible. The autoconvolution algebra generated by the normalised zeroth order Gaussian scale-space
filter is unique in Schwartz space given the constraint of positivity.
single out Gaussian scale-space theory from the theory of Schwartz. Moreover, straightforward application
of distribution theory readily produces the complete Gaussian family of derivative filters as
proposed by Koenderink in the framework of front-end visual processing [24]. For details on Schwartz'
theory and its connection to scale-space theory cf. the monograph by Florack [6]. In view of ample
literature on the subject we will henceforth assume familiarity with the basics of Gaussian scale-space
theory [6, 12, 29, 35].
1.3 Deep Structure
In their original accounts both Koenderink as well as Witkin proposed to investigate the "deep structure"
of an image, i.e. structure at all levels of resolution simultaneously. Today, the handling of deep structure
is still an outstanding problem in applications of scale-space theory. Nevertheless, many heuristic
approaches have been developed for specific purposes that do appear promising. These typically utilise
some form of scale selection and/or linking scheme, cf. Bergholm's edge focusing scheme [2], Linde-
berg's feature detection method [29, 30], the scale optimisation criterion used by Niessen et al. [31]
and Florack et al. [9] for motion extraction, Vincken's hyperstack segmentation algorithm [40], etc.
Encouraged by the results in specific image analysis applications an increasing interest has recently
emerged trying to establish a generic underpinning of deep structure. Results from this could serve as
a common basis for a diversity of multiresolution schemes. Such bottom-up approaches invariably rely
on catastrophe theory.
1.4 Catastrophe Theory
An early systematic account of catastrophe theory is due to Thom [37, 38], although the interested
reader will probably prefer Poston and Stewart's [33] or Arnold's account [1] instead. Koenderink has
pointed out that a scale-space image defines a versal family, to which Thom's classification theorem
can be applied [10, 33, 37, 38]. "Versal" means that almost all members are generic (i.e. "typical"
in a precise sense). However, although this is something one could reasonably expect, it is not self-
evident. On the one hand, the situation is simplified by virtue of the existence of only one control
parameter: isotropic inner scale. On the other hand, there is a complication, viz. the fact that scale-space
is constrained by a p.d.e. 2 : the isotropic diffusion equation. The control parameter at hand is
special in the sense that it is in fact the evolution parameter of this p.d.e.
Catastrophe theory in the context of the scale-space paradigm is now fairly well-established. It
has been studied, among others, by Damon [4]-probably the most comprehensive account on the
subject-as well as by Griffin [11], Johansen [14, 15, 16], Lindeberg [27, 28, 29], and Koenderink
[19, 20, 21, 22, 25]. An algorithmic approach has been described by Tingleff [39]. Closely related to
the present article is the work by Kalitzin [17], who pursues a nonperturbative topological approach.
Canonical versus Covariant Formalism
The purpose of the present article is twofold: (i) to collect relevant results from the literature on catastrophe
theory, and (ii) to express these in terms of user-defined coordinates. More specifically we derive
covariant expressions for the tangents to the critical curves in scale-space, both through Morse as well
as non-Morse critical points (or top-points 3 ), establish a covariant interpolation scheme for the locations
3 The term "top-point" is somewhat misleading; we will use it to denote any point in scale-space where critical points
merge or separate.
of the latter in scale-space, and compute the curvature of the critical curves at the top-points, again in
covariant form.
The requirement of covariance is a novel and important aspect not covered in the literature. It entails
that one abstains-from the outset-from any definite choice of coordinates. The reason for this is that
in practice one is not given the special, so-called "canonical coordinates" in terms of which catastrophe
theory is invariably formulated in the literature. Canonical coordinates are chosen to look nice on paper,
and as such greatly contribute to our understanding, but in the absence of an operational definition they
are of little practical use. A covariant formalism-by definition-allows us to use whatever coordinate
convention whatsoever. All computations can be carried out in a global, user-defined coordinate system,
say a Cartesian coordinate system aligned with the grid of the digital image.
Theory
Theory is presented as follows. First we outline the general plan of catastrophe theory (Section 2.1), and
then consider it in the context of scale-space theory (Section 2.2). An in-depth analysis is presented in
subsequent sections in canonical (Section 2.3), respectively arbitrary coordinate systems (Section 2.4).
The first three sections mainly serve as a review of known facts scattered in the literature, and more or
less suffice if the sole purpose is to gain insight in deep structure. The remainder covers novel aspects
that are useful for exploiting this insight in practice, i.e. for coding deep structure given an input image.
2.1 The Gist of Catastrophe Theory
A critical point of a function is a point at which the gradient vanishes. Typically this occurs at isolated
points where the Hessian has nonzero eigenvalues. The Morse Lemma states that the qualitative properties
of a function at these so-called Morse critical points are essentially determined by the quadratic
part of the Taylor series (the Morse canonical form).
However, in many practical situations one encounters families of functions that depend on control
parameters. An example of a control parameter is scale in a scale-space image. Catastrophe theory is
the study of how the critical points change as the control parameters change.
While varying a control parameter in a continuous fashion, a Morse critical point will move along
a critical curve. At isolated points on such a curve one of the eigenvalues of the Hessian may become
zero, so that the Morse critical point turns into a non-Morse critical point. Having several control
parameters to play with one can get into a situation in which ℓ eigenvalues of the Hessian vanish simultaneously,
leaving the remaining n − ℓ of them nonzero. The Thom Splitting Lemma simplifies things: It states that, in
order to study the degeneracies, one can simply discard the n − ℓ variables corresponding to the
regular (n − ℓ) × (n − ℓ)-submatrix of the Hessian, and thus study only the ℓ "bad" ones [37, 38]. That
is, one can split up the function into a Morse and a non-Morse part, and study the canonical forms of
each in isolation, because the same splitting result holds in a full neighbourhood of a non-Morse func-
tion. Again, the Morse part can be canonically described in terms of the quadratic part of the Taylor
series. The non-Morse part can also be put into canonical form, called the catastrophe germ, which is a
polynomial of order 3 or higher.
The Morse part does not change qualitatively after a small perturbation. Critical points may move
and corresponding function values may change, but nothing will happen to their type: if i eigenvalues
of the Hessian are negative prior to perturbation (a "Morse i-saddle"), then this will still be the case
afterwards. Thus-from a topological point of view-there is no need to scrutinise the perturbations.
The non-Morse part, on the other hand, does change qualitatively upon perturbation. In general,
the non-Morse critical point of the catastrophe germ will split into a number of Morse critical points.
Figure
1: The generic catastrophes in isotropic scale-space. Left: annihilation of a pair of Morse
critical points. Right: creation of a pair of Morse critical points. In both cases the points involved
have opposite Hessian signature. In 1D, positive signature signifies a minimum, while a negative one
indicates a maximum; creation is prohibited by the diffusion equation. In multidimensional spaces
creations do occur generically, but are typically not as frequent as annihilations.
This state of events is called morsification. The Morse saddle types of the isolated Morse critical
points involved in this process are characteristic for the catastrophe. Thom's Theorem provides an
exhaustive list of "elementary catastrophes" canonical formulas for
the catastrophe germs as well as for the perturbations needed to describe their morsification [37, 38].
2.2 Catastrophe Theory and the Scale-Space Paradigm
One should not carelessly transfer Thom's results to scale-space, since there is a nontrivial constraint
to be satisfied: Any scale-space image, together with all admissible perturbations, must satisfy the
isotropic diffusion equation. Damon has shown how to extend the theory in this case in a systematic
way [4].
That Damon's account is somewhat complex is mainly due to his aim for completeness and rigour.
If we restrict our attention to generic situations only, and consider only "typical" input images that
are not subject to special conditions such as symmetries, things are actually fairly simple. The only
generic morsifications in scale-space are creations and annihilations of pairs of Morse hypersaddles of
opposite Hessian signature Fig. 1 (for a proof, see Damon [4]). Everything else can be expressed as
a compound of isolated events of either of these two types (although one may not always be able to
segregate the elementary events due to numerical limitations).
In order to facilitate the description of topological events, Damon's account, following the usual line
of approach in the literature, relies on a slick choice of coordinates. However, these so-called "canonical
coordinates" are inconvenient in practice, unless one provides an operational scheme relating them to
user-defined coordinates. Mathematical accounts fail to be operational in the sense that-in typical
cases-canonical coordinates are at best proven to exist. Their mathematical construction often relies
on manipulations of the physically void trailing terms of a Taylor series expansion, in other words,
on derivatives up to infinite order, and consequently lacks an operational counterpart. Even if one
were in the possession of an algorithm one should realize that canonical coordinates are in fact local
coordinates. Each potential catastrophe in scale-space would thus require an independent construction
of a canonical frame.
The line of approach that exploits suitably chosen coordinates is known as the canonical formalism.
It provides the most parsimonious way to approach topology if neither metrical relations nor numerical
computations are of interest. Thus its role is primarily to understand topology. In the next section we
give a self-contained summary of the canonical formalism for the generic cases of interest.
"Hessian signature" means "sign of the Hessian determinant evaluated at the location of the critical point".
2.3 Canonical Formalism
The two critical points involved in a creation or annihilation event always have opposite Hessian signature
(this will be seen below), so that this signature may serve to define a conserved "topological
charge" intrinsic to these critical points. It is clear (by definition!) that the charge of Morse-critical
points can never change, as this would require a zero-crossing of the Hessian determinant, violating
the Morse criterion that all Hessian eigenvalues should be nonzero. Thus the interesting events are the
interactions of charges within a neighbourhood of a non-Morse critical point.
Definition 1 (Topological charge) A Morse-critical point is assigned a topological charge
corresponding to the sign of the Hessian determinant evaluated at that point. A regular point has zero
topological charge. The topological charge of a non-Morse critical point equals the sum of charges of
all Morse-critical points involved in the morsification.
In anticipation of the canonical coordinate convention, in which the first variable is identified to be the
"bad" one, and in which also a second somewhat special direction shows up, it is useful to introduce
the following notation.
Notation 1 We henceforth adhere to the following coordinate conventions:
Instead of x 1 and x 2 we shall write x and y, respectively.
This notation will allow us to account for signals and images of different dimensions (typically
n = 1, 2 or 3) within a single theoretical framework.
Definition 2 Using Notation 1 we define the catastrophe germs
f_A(x; t) = x^3 + 6xt  and  f_C(x, y; t) = x^3 − 6x(y^2 + t),
together with their perturbations
f_A(x; t) + Q(y; t)  and  f_C(x, y; t) + Q(y; t).
The quadric Q(y; t) is defined as follows:
Q(y; t) = \sum_{k=2}^{n} ε_k (x_k^2 + 2t),
in which each ε_k is either +1 or −1.
Note that germs as well as perturbations satisfy the diffusion equation
\partial u/\partial t = \Delta u.   (1)
set up coordinates in such a way that qualitative behaviour is summarised by one of the two "canonical
given above. Note that, even though it does describe the effect of a general perturbation in a
full scale-space neighbourhood of the catastrophe, the quadric actually does not depend on x. At the
location of the catastrophe exactly one Hessian eigenvalue vanishes. The forms f A (x; t) and f C (x; t)
correspond to an annihilation and a creation event at the origin, respectively (v.i. The latter requires
creations will not be observed in 1D signals.
Both events are referred to as "fold catastrophes". The diffusion equation imposes a constraint
that manifests itself in the asymmetry of these two canonical forms. In fact, whereas the annihilation
event is relatively straightforward, a subtlety can be observed in the creation event, viz. the fact that the
possibility for creations to occur requires space to be at least two-dimensional 5 . The asymmetry of the
two generic events reflects the one-way nature of blurring; topology tends to simplify as scale increases,
albeit not monotonically.
2.3.1 The A-Germ
Morsification of the A-germ of Definition 2 entails an annihilation of two critical points of opposite
charge as resolution is diminished.
(Morsification of the A-Germ) Recall Definition 2. For t < 0 we have two Morse-critical
points carrying opposite charge, for t > 0 there are none. At t = 0 the two critical points collide and
annihilate. The critical curves are parametrised as follows:
x(t) = ±\sqrt{-2t}, \qquad x_k(t) = 0 \ (k = 2, \ldots, n), \qquad t \le 0;
cf. Fig. 2. It follows from the parametrisation that the critical points collide with infinite opposite
velocities before they disappear. Thus one must be cautious and take the parametrisation into account
if one aims to link corresponding critical points near annihilation.
Annihilations of the kind described by Result 1 are truly "one-dimensional" events. At the origin
both branches of critical curves are tangential to the (x; t)-plane, and in fact approach eachother from
opposite spatial directions tangential to canonical case-perpendicular to the Hessian
zero-crossing 6 . In numerical computations one must account for the fact that near annihilation
corresponding critical points are separated by a distance of the order O(
\Deltat) if \Deltat is the "time 7 -to-
collision".
For 1D signals this summarises the analysis of generic events in scale-space. For images there are
other possibilities, which are studied below. In 2D images the present case describes the annihilation of
a minimum or maximum with a saddle. Minima cannot annihilate maxima, nor can saddles annihilate
In 3D images one has two distinct types of hypersaddles, one with a positive and one with
a negative topological charge. Also minima and maxima have opposite charges in this case, and so
there are various possibilities for annihilation all consistent with charge conservation. However, charge
conservation is only a constraint and does not permit one to conclude that all events consistent with it
will actually occur. In fact, by continuity and genericity one easily appreciates that a Morse i-saddle
can only interact with a (i \Gamma 1)-saddle because one and only one Hessian eigenvalue
is likely to change sign when traversing the top-point (i.e. the degenerate critical point) along the critical
path. Genericity implies that sufficiently small perturbations will not affect the annihilation event
qualitatively. It may undergo a small dislocation in scale-space, but it is bound to occur.
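The pairwise disappearance of critical points can be observed directly on sampled data; the sketch below (with an arbitrary random test signal) counts the critical points of a 1D signal after Gaussian blurring at increasing scales:

import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
f = rng.standard_normal(512)          # arbitrary 1D test signal

def count_critical_points(signal, sigma):
    # critical points ~ sign changes of the first-order Gaussian derivative
    d1 = gaussian_filter1d(signal, sigma, order=1)
    return int(np.sum(np.sign(d1[:-1]) != np.sign(d1[1:])))

for sigma in [1, 2, 4, 8, 16, 32]:
    print(sigma, count_critical_points(f, sigma))
# The counts form a non-increasing sequence and typically drop by two at a
# time: a maximum and a minimum (opposite Hessian signature) annihilate, in
# line with Fig. 1 (left); creations do not occur in 1D.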
5 The germ f C
seems to capture another catastrophe happening at a somewhat coarser scale some distance away from
the origin, yet invariably coupled to the creation event. However, it should be stressed that canonical forms like these are not
intended to describe events away from the origin. Indeed, the associated "scatter" event turns out to be highly nongeneric, and
is therefore of little practical interest.
6 "Hessian zero-crossing" is shorthand for "zero-crossing of the Hessian determinant".
7 "Time" in the sense of the evolution parameter of Eq. (1).
Figure
2: In 2D, positive Hessian determinant signifies an extremum, while a negative one indicates a
saddle. The morsification is visualised here for the annihilation event (Result 1), showing five typical,
fixed-scale local pictures at different points on or near the critical curve.
2.3.2 The C-Germ
Morsification of the C-germ of Definition 2 entails a creation of two critical points of opposite charge
as resolution is diminished. (For a while there has been some confusion about this in the literature;
creation events were-falsely-believed to violate the causality principle that is the core of scale-space
theory [18].) The event of interest here is the one occurring in the immediate vicinity of the origin.
(Morsification of the C-Germ) Recall Definition 2. For t < 0 there are no Morse-critical
points in the immediate neighbourhood of the origin. At t = 0 two critical points of opposite charge
emerge, producing two critical curves for t > 0. The critical curves are parametrised as follows:
x(t) = ±\sqrt{2t}, \qquad y(t) = 0, \qquad t \ge 0.
Again charges are conserved, and again the emerging critical points escape their point of creation with
infinite opposite velocities. Genericity implies that creations will persist despite perturbations, and will
suffer at most a small displacement in scale-space.
2.3.3 The Canonical Formalism: Summary
To summarize, creation and annihilation events together complete the list of possible generic catastro-
phes. The canonical formalism enables a fairly simple description of what can happen topologically.
However, canonical coordinates do not coincide with user-defined coordinates, and cease to be useful
if one aims to compute metrical properties of critical curves. This limitation led us to develop the
covariant formalism, which is presented in the next section.
2.4 Covariant Formalism
In practice the separation into "bad" and "nice" coordinate directions is not given. The actual realization
of canonical coordinates varies from point to point, a fact that might lead one to believe that it
requires an expensive procedure to handle catastrophes in scale-space. However, the covariant formalism
declines from the explicit construction of canonical coordinates altogether. It allows us (i) to carry
out computations in any user-defined, global coordinate system, requiring only a few image convolutions
per level of scale, and (ii) to compute metrical properties of topological events (angles, directions,
velocities, accelerations, etc.).
The covariant formalism relies on tensor calculus. The only tensors we shall need are (i) metric tensor
g_{\mu\nu} and its dual g^{\mu\nu} (the components of which equal the Kronecker symbol \delta_{\mu\nu}
in a Cartesian frame), (ii) the Levi-Civita tensor \varepsilon_{\mu_1\ldots\mu_n}
and its dual \varepsilon^{\mu_1\ldots\mu_n} in n-dimensional space,
and (iii) covariant image derivatives (equal to partial derivatives in a Cartesian frame). In a Cartesian
frame the Levi-Civita tensor is defined as the completely antisymmetric tensor with \varepsilon_{1\ldots n} = 1; from
this any other nontrivial component follows from permuting indices and toggling signs. Actually, we
will only encounter products containing an even number of Levi-Civita tensors, which can always be
rewritten in terms of metric tensors only (see e.g. Florack et al. [7] for details). Wherever possible we
will use matrix notation to alleviate theoretical difficulties so that familiarity with the tensor formalism
is not necessary.
Derivatives are computed by linear filtering:
L_{\mu_1\ldots\mu_k} = \int dz \, \phi_{\mu_1\ldots\mu_k}(z) \, f(z).   (2)
Here, \phi_{\mu_1\ldots\mu_k}(z) is the k-th order transposed covariant derivative of the normalised Gaussian
\phi(z) with respect to z, tuned to the location and scale of interest (these parameters have been
left out for notational simplicity), and f(z) represents the raw image. In particular, the components
of the image gradient and Hessian are denoted by L_\mu and L_{\mu\nu}, respectively. Instead of "covariant
derivative" one can read "partial derivative" as long as one sticks to Cartesian frames or rectilinear
coordinates. (This is all we need below.) Distributional differentiation according to Eq. (2) is well-posed
because it is actually integration. Well-posedness admits discretisation and quantisation of Eq. (2), and
guarantees that other sources of small scale noise are not fatal. Of course the filters need to be realistic;
for scale-space filters this means that one keeps their scales confined to a physically meaningful interval,
and that one keeps their differential order below an appropriate upper bound [3]. Equally important is
the observation that Eq. (2) makes differentiation operationally well-defined. One can actually extract
derivatives from an image in the first place, because things are arranged in such a way that, unlike with
"classical" differentiation and corresponding numerical differencing schemes, differentiation precedes
discrete sampling. In practice one will almost always calculate derivatives at all points in the image
domain; in that case Eq. (2) is replaced by a convolution of f and OE - 1 :::- k
(the minus sign is then
implicit).
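In code, Eq. (2) amounts to separable convolutions with Gaussian derivative kernels; a minimal sketch (using scipy's derivative-of-Gaussian filtering, with the sign convention of the "transposed" kernels absorbed) is:

import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_derivative(f, sigma, order):
    # 'order' gives the differentiation order per image axis,
    # e.g. (1, 0) -> L_x, (1, 1) -> L_xy, (0, 2) -> L_yy at scale sigma.
    return gaussian_filter(f.astype(float), sigma, order=order)

# the local gradient and Hessian components of an image f at scale sigma:
# Lx, Ly = scale_space_derivative(f, sigma, (1, 0)), scale_space_derivative(f, sigma, (0, 1))
# Lxx, Lxy, Lyy = (scale_space_derivative(f, sigma, o) for o in [(2, 0), (1, 1), (0, 2)])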
The ensemble of image derivatives up to k-th order provides a model of local image structure in a
full scale-space neighbourhood, known as the local jet of order k [8, 10, 23, 24, 26, 33]. Here it suffices
to consider structure up to fourth order at the voxel 8 of interest (summation convention applies):
8 The term "voxel" refers to a "pixel" in arbitrary dimensions.
The constraints for a non-Morse critical point are
\nabla L = 0 \quad \text{and} \quad \det \nabla\nabla^{T} L = 0,   (4)
which become generic in (n + 1)-dimensional scale-space. For a Morse critical point one simply omits
the determinant constraint, leaving n equations in n unknowns (and 1 scale parameter).
Let us investigate the system of Eqs. (3-4) in the immediate vicinity of a critical point of interest.
Assume that (x; t) = (0; 0) labels a fiducial grid point near the desired zero-crossing, which has been
designated as the base point for the numerical coefficients of Eq. (3). Both gradient as well as Hessian
determinant at the corresponding (or any neighbouring) voxel will be small, though odds are that they
are not exactly zero. Then we know that Eq. (4) will be solved for (x; t) - (0; 0), and we may use
perturbation theory for interpolation to establish a lowest order sub-voxel solution.
The details are as follows. Introduce a formal parameter \varepsilon \ge 0 corresponding to the order of
magnitude of the left hand sides of Eq. (4) at the fiducial origin. Substitute (x; t) \to (\varepsilon x; \varepsilon t) into Eq.
(4) and collect terms of order O(\varepsilon) (the terms of order zero vanish by construction). Absorbing the
formal parameter back into the scaled quantities the result is the following linear system:
L_{\mu\nu} x^{\nu} + L_{\mu t} t = -L_{\mu},
\tilde{L}^{\mu\nu} L_{\mu\nu\rho} x^{\rho} + \tilde{L}^{\mu\nu} L_{\mu\nu t} t = -\|L_{\mu\nu}\|,   (5)
in which the \tilde{L}^{\mu\nu} are the components of the transposed cofactor matrix obtained from the Hessian
(Appendix A), and \|L_{\mu\nu}\| denotes the Hessian determinant 9. The determinant constraint (last identity)
follows from a basic result in perturbation theory for matrices:
\delta \|L_{\mu\nu}\| = \tilde{L}^{\mu\nu} \, \delta L_{\mu\nu}.
In Eq. (5) both the coefficients on the left hand side as well as the data on the right hand side can be
obtained by staightforward linear filtering of the raw image as defined by Eq. (2), so that we indeed
have an operationally defined interpolation scheme for locating critical points within the scale-space
continuum. It is important to note that the system of Eq. (5) holds in any coordinate system (manifest
covariance). We will exploit this property in our algorithmic approach later on.
Our next goal is to invert the system of Eq. (5) while maintaining manifest covariance. This obviates
the need for numerical inversions or the construction of canonical frames. Such methods would have to
be applied to each and every candidate voxel in scale-space, while neither would give us much insight
in local critical curve geometry. The inversion differs qualitatively for Morse and non-Morse critical
points and so we consider the two cases separately.
It is convenient to rewrite Eq. (5) in matrix form with the help of the definitions
z_{\mu} := \tilde{L}^{\nu\rho} L_{\nu\rho\mu}, \qquad w_{\mu} := L_{\mu t}, \qquad c := \tilde{L}^{\nu\rho} L_{\nu\rho t}, \qquad H_{\mu\nu} := L_{\mu\nu}.
9 This abuse of notation-there are actually no free indices in kL-k-is common in classical tensor calculus.
Note that z_{\mu} equals the spatial gradient of the Hessian determinant, w_{\mu} = \partial_t L_{\mu}, and c equals the
scale derivative of the Hessian determinant, so that we may conclude that all relevant information is contained in first order spatial and scale derivatives
of the image's gradient and Hessian determinant.
With this notation the (n + 1) \times (n + 1) coefficient matrix of Eq. (5) becomes
M = \begin{pmatrix} H & w \\ z^{T} & c \end{pmatrix}.
For Morse critical points at fixed resolution the relevant subsystem in the hyperplane t = 0 is H x = -\nabla L,
but in fact we obtain a linear approximation of the critical curve through the Morse critical point of
interest if we allow scale to vary:
H x + w \, t = -\nabla L.   (18)
This can be easily generalised to any desired order. For top-points we must consider the full system
M \begin{pmatrix} x \\ t \end{pmatrix} = -\begin{pmatrix} \nabla L \\ \det H \end{pmatrix}.
2.4.1 Morse Critical Points
From Eq. (18) it follows that at level t = 0 the tangent to the critical path in scale-space is given by
\begin{pmatrix} x \\ t \end{pmatrix} = \begin{pmatrix} x_0 \\ 0 \end{pmatrix} + t \begin{pmatrix} v \\ 1 \end{pmatrix},
in which the sub-voxel location of the Morse critical point is given by
x_0 = -H^{-1} \nabla L,
and its instantaneous scale-space velocity, i.e. the displacement in scale-space per unit of t, by
\begin{pmatrix} v \\ 1 \end{pmatrix} = \begin{pmatrix} -H^{-1} w \\ 1 \end{pmatrix}.
is why we can set the scale component equal to unity. In other words, such critical points can never
vanish "just like that"; they necessarily have to change identity into a non-Morse variety. According to
Eq. (22), spatial velocity v becomes infinite as the point moves towards a degeneracy (odds are that w
remains nonzero). If we do not identify "time" with scale, but instead reparametrise
scale-space velocity-now defined as the displacement per unit of -becomes
Hw
det H
With this refinement of the scale parameter the singularity is approached "horizontally" from a spatial
direction perpendicular to the null-space of the Hessian (note, e.g. by diagonalising the Hessian, that H̃
becomes singular, yet remains finite when eigenvalues of H degenerate). The trajectory of the critical
point continues smoothly through the top-point, where its "temporal sense" is reversed. This picture
of the generic catastrophe captures the fact that there are always pairs of critical points of opposite
Hessian signature that "belong together", either because they share a common fate (annihilation) or
because they have a common cause (creation). The two members of such a pair could therefore be
seen as manifestations of a single "topological particle" if one allows for a non-causal interpretation,
in much the same way as one can interpret positrons as instances of electrons upon time-reversal. The
analogy with particle physics can be pursued further, as Kalitzin points out, by modeling catastrophes
in scale-space as interactions conserving a topological charge [17]. Indeed, charges are operationally
well-defined conserved quantities that add up under point interactions at non-Morse critical points,
irrespective of their degree of degeneracy. This interpretation has the advantage that one can measure
charges from spatial surface integrals around the point of interest (by using Stokes' theorem), thus
obtaining a "summary" of qualitative image structure in the interior irrespective of whether the enclosed
critical points are generic or not. So far, however, Kalitzin's approach has not been refined to the sub-voxel
domain, and does not give us a local parametrisation of the critical curve.
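In 2D the quantities of Eqs. (21)-(23) are directly computable from Gaussian derivative responses at a voxel. The following C fragment is a minimal numerical sketch of that computation, assuming the derivative values have already been obtained by linear filtering; the struct, function, and field names are illustrative only and not part of any existing implementation.

/* First and second order scale-space derivatives at the fiducial voxel,
   together with the scale derivative of the gradient (w).  All values are
   assumed to be obtained by Gaussian derivative filtering. */
typedef struct {
    double Lx, Ly;           /* gradient g                */
    double Lxx, Lxy, Lyy;    /* Hessian H                 */
    double Lxt, Lyt;         /* w = scale derivative of g */
} Jet2D;

/* Sub-voxel offset x0 = -H^{-1} g of a Morse critical point (Eq. (21)) and
   the reparametrised scale-space tangent (-adj(H) w ; det H) (Eq. (23)),
   which stays finite when H becomes nearly singular. */
static void morse_offset_and_tangent(const Jet2D *j, double x0[2], double tangent[3])
{
    /* transposed cofactor (adjugate) of the 2x2 Hessian */
    double a00 =  j->Lyy, a01 = -j->Lxy;
    double a10 = -j->Lxy, a11 =  j->Lxx;
    double detH = j->Lxx * j->Lyy - j->Lxy * j->Lxy;

    /* x0 = -(adj H) g / det H, valid away from degeneracies */
    x0[0] = -(a00 * j->Lx + a01 * j->Ly) / detH;
    x0[1] = -(a10 * j->Lx + a11 * j->Ly) / detH;

    /* tangent = (-(adj H) w ; det H): no division, finite at top-points */
    tangent[0] = -(a00 * j->Lxt + a01 * j->Lyt);
    tangent[1] = -(a10 * j->Lxt + a11 * j->Lyt);
    tangent[2] = detH;
}

The division by det H is only used for the Morse offset; the tangent uses the transposed cofactor and therefore remains finite near top-points, in line with the discussion above.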
The perturbative approach can be extended to higher orders without essential difficulties, yielding
a local parametrisation of the critical path of corresponding order. It remains a notorious problem to
find the optimal order in a numerical sense, because it is clear that although the addition of yet another
order will reduce the formal truncation error due to the smaller Taylor tail discarded, it will at the same
time increment the amount of intrinsic noise due to the computation of higher order derivatives. It is
beyond the scope of this paper to deal with this issue in detail; a point of departure may be Blom's study
of noise propagation under simultaneous differentiation and blurring [3]. We restrict our attention to
lowest nontrivial order. For Morse critical points this is apparently third order, for top-points this will
be seen to require fourth order derivatives.
If one knows the location of the top-point one can find a similar critical curve parametrisation in
terms of the parameter - , starting out from this top-point instead of a Morse critical point. In that
case we first have to solve the top-point localisation problem. It is clearly of interest to know the
parametrisation at the top-point, since this will enable us to identify the two corresponding branches of
the Morse critical curves that are glued together precisely at this point. Our next objective will be to
find the location of the top-point with sub-voxel precision, as well as geometric properties of the critical
curve passing through.
2.4.2 Top-Points
The reason why we must be cautious near top-points is that Eq. (21) breaks down at degeneracies of
the Hessian, and is therefore likely to produce unreliable results as soon as we come too close to such
a point. A differential invariant [7] that could be used to trigger an alarm 10 is t_λ, a positive combination
of the (squared) gradient magnitude and σ^{4n} det² H weighted by λ, for any λ > 0 (exponents have been
chosen as such for reasons of homogeneity); in the limiting case t_λ = 0 we are exactly at a top-point.
"Hot-spots" in scale-space thus correspond to
regions where t_λ(x; t) becomes smaller than some suitably chosen small parameter times its average
value over the scale-space domain (say). In those regions we must study the full system of Eq. (19), including
the degeneracy constraint. The additional scale degree of freedom obviously becomes essential,
because top-points will typically be located in-between two precomputed levels of scale.
Recall Eq. (16). Let us rewrite the corresponding cofactor matrix M̃, the Cartesian coefficients of
which are defined by M̃ M = det M I_{(n+1)×(n+1)} (cf. Appendix A), into a similar block form:
M̃ = [ P  p ; q^T  s ] .
By substitution one may verify that the defining equation M̃ M = det M I_{(n+1)×(n+1)} is satisfied iff
the coefficients are defined as follows:
s = det H ,   p = −H̃ w ,   q = −H̃ z ,   P = c H̃ + ( H̃ w z^T H̃ − (z^T H̃ w) H̃ ) / det H .
10 The zero-crossings method for g and det H is, however, the preferred choice, as it preserves connectivity.
Note that, recalling Eq. (23), the last column (p ; s) = (−H̃ w ; det H) of M̃ is just the scale-space tangent
of the critical curve. Expanding det M along its last row yields, in coordinate-free notation,
det M = c det H − z^T H̃ w = c det H − tr(H̃ w z^T) .
At the location of a critical point this is
proportional to the scale-space scalar product of the critical point's scale-space velocity and the scale-space
normal of the Hessian zero-crossing (recall Eqs. (14-15) and Eq. (23) and the remark above):
det M = (−H̃ w ; det H) · (z ; c) .    (31)
Lemma (Transversality Hessian Zero-Crossing/Critical Curve) At a top-point the critical path intersects
the Hessian zero-crossing transversally.
This readily follows by inspection of the tangent hyperplane to the Hessian zero-crossing,
z^T x + c t = 0 ,
and the critical curve's tangent vector, Eq. (23). The cosine of the angle of intersection follows from
Eq. (31), which is nonzero in the generic case; genericity implies transversality.
With the established results it is now possible to invert the linear system of Eq. (19); just note that
M̃ M = det M I ,
so that
(x ; t) = −M̃ [ g ; det H ] / det M ,
with M̃ given by the block form above; in particular the scale component reads
t = ( z^T H̃ g − det² H ) / det M .
The expression is valid in any coordinate system as required. Note that the sign of det M subdivides
the image domain into regions to which all generic catastrophes are confined. In fact, the following
lemma holds.
Lemma (Segregation of Creations and Annihilations) det M < 0 at annihilations, det M > 0 at
creations.
One way to see this is to note that it holds for the canonical forms f A (x; t) and f C (x; t) of Definition 2.
If we now transform these under an arbitrary coordinate transformation that leaves the diffusion equation
invariant, it is easily verified that the sign of det M is preserved. An alternative proof based on
geometric reasoning is given below.
First consider an annihilation event, and recall Eq. (14), and the geometric interpretation
of Eqs. (27) and (29) as the scale-space velocity given by Eq. (23). As the topological particle
with positive charge (i.e. the Morse-critical point with det H > 0) moves towards the catastrophe (to-
wards increasing scale), the magnitude of det H must necessarily decrease. By the same token, as the
anti-particle (det H < 0) moves away from the catastrophe (towards decreasing scale), the magnitude
of det H must decrease as well. But recall that at the catastrophe det H vanishes, while det M is just the
directional derivative of det H in the direction of motion as indicated. Therefore det M < 0.
Next consider a creation event. The positive particle now escapes the singularity in the positive
scale direction, whereas the negative particle approaches it in the negative scale direction, so that along
the prescribed path det H must necessarily increase. In other words, det M > 0 at the catastrophe.
This completes the proof.
The lemma is a special case of the following, more general result, which gives us the curvature of the
critical path at the catastrophe.
Lemma (Curvature of Critical Path at the Catastrophe) At the location of a generic catastrophe
the critical path satisfies, in terms of the parameter τ of Eq. (23),
z^T x + c t = det M τ + O(τ²) ,   i.e.   t = (1/2) det M τ² + O(τ³) .
The curvature of the critical path at the catastrophe is given by (w^T ∇)² t |_{catastrophe} = det M.
Consider the local 2-jet expansion at the location of a generic catastrophe:
∇L(x ; t) = H x + w t + O(2) ,    det ∇∇^T L(x ; t) = z^T x + c t + O(2) ,
in which L_μ = 0 and ‖L_{μν}‖ = 0 at the base point by definition. From this it follows that along the critical
path through the catastrophe
H x + w t = O(2) .
Contraction with z_μ H̃^{μν}, noting that H̃ H = ‖L_{μν}‖ I vanishes
at the catastrophe, and using Eq. (27), yields
z^T H̃ w t = O(2) ,   i.e.   det M t = O(2) .
Note that the first order directional derivative w^T ∇L vanishes at the catastrophe, so that first
order terms disappear. Also recall that z^T H̃ w = −det M at such a point, so that the first result follows.
Straightforward differentiation produces the curvature expression.
2.4.3 Explicit Results from the Covariant Formalism
Having established covariant expressions we have drawn several geometric conclusions that do not
follow from the canonical formalism. Here we give a few more examples, using explicit Cartesian
coordinates.
Example 1 (Tangent Vector to Critical Curve) At any point on the critical curve, including the top-
point, the scale-space tangent vector is proportional to that given by Eq. (23). In 2D Cartesian coordinates
we have
(ẋ ; ẏ ; ṫ) ∝ ( −L_yy [L_xxx + L_xyy] + L_xy [L_xxy + L_yyy] ;
                 L_xy [L_xxx + L_xyy] − L_xx [L_xxy + L_yyy] ;
                 L_xx L_yy − L_xy² ) .
Example 2 In 2D the tangent plane to the Hessian zero-crossing in scale-space is given by the following
equation in any Cartesian coordinate system:
(L_xxx L_yy + L_xx L_xyy − 2 L_xy L_xxy) x + (L_xxy L_yy + L_xx L_yyy − 2 L_xy L_xyy) y
+ ([L_xxxx + L_xxyy] L_yy + L_xx [L_xxyy + L_yyyy] − 2 L_xy [L_xxxy + L_xyyy]) t = 0 .
Example 3 (Segregation of Creations and Annihilations) In a full scale-space neighbourhood
of an annihilation (creation) the following differential invariant always has a negative (positive)
value:
det M = ([L_xxxx + L_xxyy] L_yy + L_xx [L_xxyy + L_yyyy] − 2 L_xy [L_xxxy + L_xyyy]) (L_xx L_yy − L_xy²)
− (L_xxx L_yy + L_xx L_xyy − 2 L_xy L_xxy) ( L_yy [L_xxx + L_xyy] − L_xy [L_xxy + L_yyy] )
− (L_xxy L_yy + L_xx L_yyy − 2 L_xy L_xyy) ( L_xx [L_xxy + L_yyy] − L_xy [L_xxx + L_xyy] ) .
The expressions are a bit complicated, but nevertheless follow straightforwardly from their condensed
covariant counterparts, which at the same time illustrates the power of the covariant formalism.
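The sign test of Example 3 is also cheap to evaluate numerically. The following C fragment is a small sketch of det M in 2D under the same diffusion-equation convention for scale derivatives used above; the function name and argument list are illustrative only, not part of any existing implementation.

/* Sign of det M = c det H - z^T adj(H) w in 2D, computed from spatial image
   derivatives up to fourth order (scale derivatives obtained from the
   diffusion equation, e.g. Lxt = Lxxx + Lxyy).  det M < 0 indicates the
   neighbourhood of an annihilation, det M > 0 that of a creation. */
static double det_M_2d(double Lxx, double Lxy, double Lyy,
                       double Lxxx, double Lxxy, double Lxyy, double Lyyy,
                       double Lxxxx, double Lxxxy, double Lxxyy,
                       double Lxyyy, double Lyyyy)
{
    double detH = Lxx * Lyy - Lxy * Lxy;

    /* w = scale derivative of the gradient via the diffusion equation */
    double wx = Lxxx + Lxyy;
    double wy = Lxxy + Lyyy;

    /* z = spatial gradient of det H, c = scale derivative of det H */
    double zx = Lxxx * Lyy + Lxx * Lxyy - 2.0 * Lxy * Lxxy;
    double zy = Lxxy * Lyy + Lxx * Lyyy - 2.0 * Lxy * Lxyy;
    double c  = (Lxxxx + Lxxyy) * Lyy + Lxx * (Lxxyy + Lyyyy)
              - 2.0 * Lxy * (Lxxxy + Lxyyy);

    /* adj(H) w, with adj(H) the transposed cofactor of the Hessian */
    double hw_x =  Lyy * wx - Lxy * wy;
    double hw_y = -Lxy * wx + Lxx * wy;

    return c * detH - (zx * hw_x + zy * hw_y);
}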
3 Conclusion and Discussion
We have described the deep structure of a scale-space image in terms of an operational scheme to
characterise, detect and localise critical points in scale-space. The characterisation pertains to local
geometrical properties of the scale-traces of individual critical points (locations, angles, directions,
velocities, accelerations), as well as to topological ones. The latter fall into two categories, local and
bilocal properties. The characteristic local property of a critical point is determined by its Hessian
signature (Morse i-saddle or top-point), which in turn defines its topological charge. The fact that pairs
of critical points of opposite charge can be created or annihilated as resolution decreases determines
bilocal connections; such pairs of critical points can be labelled according to their common fate or
cause, i.e. they can be linked to their corresponding catastrophe (annihilation, respectively creation).
This possibility to establish links is probably the most important topological feature provided by the
Gaussian scale-space paradigm.
Conceptually a scale-space representation is a continuous model imposed on a discrete set of pixel
data. The events of topological interest in this scale-space representation are clearly the top-points, and
the question presents itself whether these discrete events in turn suffice to define a complete and robust
discrete representation of the continuous scale-space image (possibly up to a trivial invariance). In the
1D case it has been proven to be possible to reconstruct the initial image data from its scale-space top-
points, at least in principle [16], but the problem of robustness and the extension to higher dimensions
is still unsolved. The solution to this problem affects multiresolution schemes for applications beyond
image segmentation, such as registration, coding, compression, etc.
Acknowledgement
James Damon of the University of North Carolina is gratefully acknowledged for his clarifications.
A Determinants and Cofactor Matrices
Definition 3 (Transposed Cofactor Matrix) Let A be a square n × n matrix with components a_{μν}.
Then we define the transposed cofactor matrix Ã as follows. In order to obtain the matrix entry ã^{μν} skip
the μ-th column and ν-th row of A, evaluate the determinant of the resulting submatrix, and multiply
by (−1)^{μ+ν} ("checkerboard pattern"). Or, using tensor notation,
ã^{μν} = 1/(n−1)! ε^{μ α_1 ⋯ α_{n−1}} ε^{ν β_1 ⋯ β_{n−1}} a_{β_1 α_1} ⋯ a_{β_{n−1} α_{n−1}} .
By construction we have A Ã = Ã A = det A I. Note that if the components of A are indexed by lower
indices, then by convention one uses upper indices for those of Ã (and vice versa). Furthermore, it is
important for subsequent considerations to observe that the transposed cofactor matrix is always well-
defined, and that its components are homogeneous polynomial combinations of those of the original
matrix of degree n − 1. In the nonsingular case one has Ã = det A A^{−1}, i.e. the transposed cofactor matrix
equals inverse matrix times determinant. See e.g. Strang [36]. Note that for diagonal matrices
determinants and transposed cofactor matrices are straightforwardly computed.
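For small matrices the definition can be implemented directly. The C fragment below is an illustrative sketch for the 3 × 3 case (the function name is ours); the cyclic choice of the remaining row and column indices makes the checkerboard sign implicit.

/* Transposed cofactor (adjugate) matrix of a 3x3 matrix, as in Definition 3:
   entry (mu,nu) comes from the 2x2 submatrix obtained by skipping column mu
   and row nu.  By construction a * adj = det(a) * I. */
static void transposed_cofactor_3x3(const double a[3][3], double adj[3][3])
{
    int mu, nu;
    for (mu = 0; mu < 3; mu++)
        for (nu = 0; nu < 3; nu++) {
            int r0 = (nu + 1) % 3, r1 = (nu + 2) % 3;   /* remaining rows    */
            int c0 = (mu + 1) % 3, c1 = (mu + 2) % 3;   /* remaining columns */
            adj[mu][nu] = a[r0][c0] * a[r1][c1] - a[r0][c1] * a[r1][c0];
        }
}

Multiplying a matrix by the result reproduces det A times the identity, as required by the definition.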
--R
Catastrophe Theory.
Edge focusing.
Local Morse theory for solutions to the heat equation and Gaussian blurring.
Image Structure
The intrinsic structure of optic flow incorporating measurement duality.
Catastrophe Theory for Scientists and Engineers.
Superficial and deep structure in linear diffusion scale space: Isophotes
Basic theory on normalization of a pattern (in case of typical one-dimensional pattern)
On the classification of toppoints in scale space.
Local analysis of image scale space.
Representing signals by their top points in scale-space
The structure of images.
The structure of the visual field.
A hitherto unnoticed singularity of scale-space
Solid Shape.
Dynamic shape.
Representation of local geometry in the visual system.
Receptive field families.
The structure of two-dimensional scalar fields with applications to vision
Operational significance of receptive field assemblies.
On the behaviour in scale-space of local extrema and blobs
Feature detection with automatic scale selection.
Mathematical Studies on Feature Extraction in Pattern Recognition.
Catastrophe Theory and its Applications.
Gaussian Scale-Space Theory
Linear Algebra and its Applications.
Structural Stability and Morphogenesis (translated by D.
Mathematical software for computation of toppoints.
Probabilistic multiscale image segmentation.
On the history of Gaussian scale-space axiomatics
--TR
Dynamic shape
Representation of local geometry in the visual system
Edge Focusing
A Hitherto Unnoticed Singularity of Scale-Space
Solid shape
The Gaussian scale-space paradigm and the multiscale local jet
A multiscale approach to image sequence analysis
Probabilistic Multiscale Image Segmentation
The Intrinsic Structure of Optic Flow Incorporating Measurement Duality
Topological Numbers and Singularities in Scalar Images
Feature Detection with Automatic Scale Selection
Edge Detection and Ridge Detection with Automatic Scale Selection
Scale-Space Theory in Computer Vision
Gaussian Scale-Space Theory
Linear Scale-Space has First been Proposed in Japan
Calculations on Critical Points under Gaussian Blurring
--CTR
Tomoya Sakai , Atsushi Imiya, Gradient Structure of Image in Scale Space, Journal of Mathematical Imaging and Vision, v.28 n.3, p.243-257, July 2007
Ahmed Rebai , Alexis Joly , Nozha Boujemaa, Constant tangential angle elected interest points, Proceedings of the 8th ACM international workshop on Multimedia information retrieval, October 26-27, 2006, Santa Barbara, California, USA
Bart Janssen , Frans Kanters , Remco Duits , Luc Florack , Bart Ter Romeny, A Linear Image Reconstruction Framework Based on Sobolev Type Inner Products, International Journal of Computer Vision, v.70 n.3, p.231-240, December 2006
Arjan Kuijper , Luc M. J. Florack , Max A. Viergever, Scale Space Hierarchy, Journal of Mathematical Imaging and Vision, v.18 n.2, p.169-189, March
Michael Felsberg , Remco Duits , Luc Florack, The Monogenic Scale Space on a Rectangular Domain and its Features, International Journal of Computer Vision, v.64 n.2-3, p.187-201, September 2005
Arjan Kuijper, Using Catastrophe Theory to Derive Trees from Images, Journal of Mathematical Imaging and Vision, v.23 n.3, p.219-238, November 2005
Yotam I. Gingold , Denis Zorin, Controlled-topology filtering, Proceedings of the 2006 ACM symposium on Solid and physical modeling, June 06-08, 2006, Cardiff, Wales, United Kingdom
Arjan Kuijper , Luc M. J. Florack, The Relevance of Non-Generic Events in Scale Space Models, International Journal of Computer Vision, v.57 n.1, p.67-84, April 2004
M. Felsberg , G. Sommer, The Monogenic Scale-Space: A Unifying Approach to Phase-Based Image Processing in Scale-Space, Journal of Mathematical Imaging and Vision, v.21 n.1, p.5-26, July 2004
Yotam I. Gingold , Denis Zorin, Controlled-topology filtering, Computer-Aided Design, v.39 n.8, p.676-684, August, 2007
Alfons H. Salden , Bart M. Ter Haar Romeny , Max A. Viergever, A Dynamic ScaleSpace Paradigm, Journal of Mathematical Imaging and Vision, v.15 n.3, p.127-168, November 2001 | image topology;catastrophe theory;critical points;scale-space;deep structure |
338870 | Timing Analysis for Data and Wrap-Around Fill Caches. | The contributions of this paper are twofold. First, an automatic tool-based approach is described to bound worst-case data cache performance. The approach works on fully optimized code, performs the analysis over the entire control flow of a program, detects and exploits both spatial and temporal locality within data references, and produces results typically within a few seconds. Results obtained by running the system on representative programs are presented and indicate that timing analysis of data cache behavior usually results in significantly tighter worst-case performance predictions. Second, a method to deal with realistic cache filling approaches, namely wrap-around-filling for cache misses, is presented as an extension to pipeline analysis. Results indicate that worst-case timing predictions become significantly tighter when wrap-around-fill analysis is performed. Overall, the contribution of this paper is a comprehensive report on methods and results of worst-case timing analysis for data caches and wrap-around caches. The approach taken is unique and provides a considerable step toward realistic worst-case execution time prediction of contemporary architectures and its use in schedulability analysis for hard real-time systems. | Introduction
Real-time systems rely on the assumption that the worst-case execution time
(WCET) of hard real-time tasks be known to ensure that deadlines of tasks can be
met - otherwise the safety of the controlled system is jeopardized [18, 3]. Static
Figure 1. Framework for Timing Predictions. (The figure is a block diagram: the source files feed the compiler, which emits control-flow information, data declarations and relative address information; the address calculator derives virtual address information from these; the static cache simulator combines this information with the I/D-cache configurations to produce I/D-caching categorizations; and the timing analyzer uses the categorizations together with machine-dependent information to answer user timing requests with timing predictions through the user interface.)
analysis of program segments corresponding to tasks provides an analytical approach
to determine the WCET for contemporary architectures. The complexity
of modern processors requires a tool-based approach since ad hoc testing methods
may not exhibit the worst-case behavior of a program. This paper presents a system
of tools that perform timing prediction by statically analyzing optimized code
without requiring interaction from the user.
The work presented here addresses the bounding of WCET for data caches and
wrap-around-fill mechanisms for handling cache misses. Thus, it presents an approach
to include common features of contemporary architectures for static prediction
of WCET. Overall, this work fills another gap between realistic WCET
prediction of contemporary architectures and its use in schedulability analysis for
hard real-time systems.
The framework of WCET prediction uses a set of tools as depicted in Figure 1.
The vpo optimizing compiler [4] has been modified to emit control-flow information,
data information, and the calling structure of functions in addition to regular object
code generation. A static cache simulator uses the control-flow information and
calling structure in conjunction with the cache configuration to produce instruction
and data categorizations, which describe the caching behavior of each instruction
and data reference, respectively. The timing analyzer uses these categorizations
and the control-flow information to perform path analysis of the program. This
analysis includes the evaluation of architectural characteristics such as pipelining
and wrap-around-filling for cache misses. The description of the caching behavior
supplied by the static cache simulator is used by the timing analyzer to predict
the temporal effect of cache hits and misses overlapped with the temporal behavior
of pipelining. The timing analyzer produces WCET predictions for user selected
segments of the program or the entire program.
2. Related Work
In the past few years, research in the area of predicting the WCET of programs
has intensified. Conventional methods for static analysis have been extended from
unoptimized programs on simple CISC processors [23, 20, 9, 22] to optimized programs
on pipelined RISC processors [30, 17, 11], and from uncached architectures
to instruction caches [2, 15, 13] and data caches [24, 14, 16]. While there has been
some related work in analyzing data caching, there has been no previous work on
wrap-around-fill caches in the context of WCET prediction, to our knowledge.
Rawat [24] used a graph coloring technique to bound data caching performance.
However, only the live ranges of local scalar variables within a single function were
analyzed, which are fairly uncommon references since most local scalar variables
are allocated to registers by optimizing compilers.
Kim et al. [14] have recently published work about bounding data cache performance
for calculated references, which are caused by load and store instructions
referencing addressing that can change dynamically. Their technique uses a version
of the pigeonhole principle. For each loop they determine the maximum number
of references from each dynamic load/store instruction. They also determine the
maximum number of distinct locations in memory referenced by these instructions.
The difference between these two values is the number of data cache hits for the
loop given that there are no conflicting references. This technique efficiently detects
temporal locality within loops when all of the data references within a loop
fit into cache and the size of each data reference is the same size as a cache line.
Their technique at this time does not detect any spatial locality (i.e. when the line
size is greater than the size of each data reference and the elements are accessed
contiguously) and detects no temporal locality across different loop nests. Fur-
thermore, their approach does not currently deal with compiler optimizations that
alter the correspondence of assembly instructions to source code. Such compiler
optimizations can make calculating ranges of relative addresses significantly more
challenging.
Li et al. [16] have described a framework to integrate data caching into their
integer linear programming (ILP) approach to timing prediction. Their implementation
performs data-flow analysis to find conflicting blocks. However, their linear
constraints describing the range of addresses of each data reference currently have
to be calculated by hand. They also require a separate constraint for every element
of a calculated reference causing scalability problems for large arrays. No WCET
results on data caches are reported. However, their ILP approach does facilitate
integrating additional user-provided constraints into the analysis.
3. Data Caches
Obtaining tight WCETs in the presence of data caches is quite challenging. Unlike
instruction caching, addresses of data references can change during the execution of
a program. A reference to an item within an activation record could have different
addresses depending on the sequence of calls associated with the invocation of the
function. Some data references, such as indexing into an array, are dynamically
calculated and can vary each time the data reference occurs. Pointer variables in
languages like C may be assigned addresses of different variables or an address that
is dynamically calculated from the heap.
Initially, it may appear that obtaining a reasonable bound on worst-case data
cache performance is simply not feasible. However, this problem is far from hope-
less, since the addresses for many data references can be statically calculated. Static
or global scalar data references do retain the same addresses throughout the execution
of a program. Run-time stack scalar data references can often be statically
determined as a set of addresses depending upon the sequence of calls associated
with an invocation of a function. The pattern of addresses associated with many
calculated references, e.g. array indexing, can often be resolved statically.
The prediction of the WCET for programs with data caches is achieved by automatically
analyzing the range of addresses of data references, deriving relative
and then virtual addresses from these ranges, and categorizing data references according
to their cache behavior. The data cache behavior is then integrated with
the pipeline analysis to yield worst-case execution time predictions of program segments
3.1. Calculation of Relative Addresses
The vpo compiler [4] attempts to calculate relative addresses for each data reference
associated with load and store instructions after compiler optimizations have been
performed (see Figure 1). Compiler optimizations can move instructions between
basic blocks and outside of loops so that expansion of registers used in address
calculations becomes more difficult. The analysis described here is similar to the
data dependence analysis that is performed by vectorizing and parallelizing compilers
[5, 6, 7, 21, 28, 29]. However, data dependence analysis is typically performed
on a high-level representation. Our analysis had to be performed on a low-level
representation after code generation and all optimizations had been applied.
The calculation of relative addresses involves the following steps.
1. The compiler determines for each loop the set of its induction variables, their
initial values and strides, and the loop-invariant registers. 1
2. Expansion of actual parameter information is performed in order to be able to
resolve any possible address parameters later.
3. Expansion of addresses used in loads and stores is performed. Expansion is
accomplished by examining each preceding instruction represented as a register
transfer list (RTL) and replacing registers used as source values in the address
with the source of the RTL setting that register. Induction variables associated
with a loop are not expanded. Loop invariant values are expanded by proceeding
to the end of the preheader block of that loop. Expansion of the addresses of
scalar references to the run-time stack (e.g. local variables) is trivial. Expansion
of references to static data (e.g. global variables) often requires expanding loop-invariant
registers since these addresses are constructed with instructions that
may be moved out of a loop. Expansion of calculated address references (e.g.
array indexing) requires knowledge of loop induction variables. This approach
to expanding addresses provides the ability to handle non-standard induction
variables. We are not limited to simple induction variables in simple for loops
that are updated only at the head of the loop.
Consider the C source code, RTLs and SPARC assembly instructions in Figure 2
for a simple Initialize function. The code in Initialize goes through elements
1 to 51 in both array A and array B and initializes them to random integers. Note
that although delay slots are actually filled by the compiler, they have not been
filled when compiling the code for most of the figures in this paper, in order to
simplify the examples for the reader. 2
Figure 2. Example C Function, RTLs, and SPARC Assembly for Function Initialize. (The figure lists the C source of Initialize, which contains the nested loops for (i=1; i<MAX; i++) and for (j=1; j<MAX; j++) over the global arrays int A[MAX][MAX] and int B[MAX][MAX] with MAX defined as 50, together with the numbered RTLs and the generated SPARC assembly, e.g. instruction 1: save %sp,(-96),%sp; instruction 2: sethi %hi(_B),%l4; instruction 7: add %l2,%lo(_A),%i3; instruction 8: add %l4,%lo(_B),%i4. Instruction 22 is the store of A[i][j] and instruction 23 the store of B[i][j].)
The first memory address, r[20]+r[21] (instruction 22), is for the store of
A[i][j]. The second memory address, r[20] (instruction 23), is for the store of
B[i][j]. Register r[20] is the induction register for the inner loop (instructions 19-
27) and thus cannot be expanded. It has an initial value, a stride, and a maximum
and minimum number of iterations associated with it, these having been computed
and stored earlier in the compilation process. 3 The initial value for r[20] consists
of the first element accessed (base address of B plus 4) plus the offset that comes
from computing the row location, that of the induction variable for the outer loop,
r[25]. The stride is 4 and the minimum and maximum number of iterations are
the same, 50. Once the initial value, stride, and number of iterations are available,
there is enough information to compute the sequence of addresses that will be
accessed by the store of B[i][j]. Knowing that both references have the same
stride, the compiler used index reduction to avoid having to use another induction
register for the address computation for A[i][j], since it shares the same loop
control variables as that for B.
The memory address r[20] + r[21] for A[i][j] includes the address for B
(r[20]) plus the difference between the two arrays (r[21]). This can be seen
from the following sequence of expansions and simplifications. Remember that register
r[20] cannot be immediately expanded since it is an induction register for the
inner loop, so the expansion continues with register r[21] as follows. Note that
register r[25] will not be expanded either since it is the induction variable for the
outer loop (instructions 13-31).
1. r[20]+r[21] # load at 22
2. r[20]+(r[21]-r[22]) # from 17
3. r[20]+(r[21]-(r[25]+r[28])) # from
4. r[20]+(r[25]+r[27]-(r[25]+r[28])) # from 15
5. r[20]+(r[25]+r[27]-(r[25]+r[20]+LO[_B])) # from 8
6. r[20]+(r[25]+(r[18]+LO[_A])-(r[25]+r[20]+LO[_B])) # from 7
7. r[20]+(r[25]+(HI[_A]+LO[_A])-(r[25]+r[20]+LO[_B])) # from 3
8. r[20]+(r[25]+(HI[_A]+LO[_A])-(r[25]+HI[_B]+LO[_B])) # from 2
The effect of this expansion is simplified in the following steps.
9. r[20]+(r[25]+(_A)-(r[25]+_B)) # eliminate HI and LO
10. r[20]+(r[25]+_A-(r[25]+_B)) # remove unnecessary ()'s
11. r[20]+r[25]+_A-r[25]-_B # remove ()'s and distribute +'s and -'s
12. r[20]+_A-_B # remove negating terms
Thus, we are left with the induction register r[20] plus the difference between the
two arrays. The simplified address expression string is then written to a file containing
data declarations and relative address information. When the address calculator
attempts to resolve this string to an actual virtual address, it will use the initial
value of r[20], which is r[25]+_B+4, and the _B's will cancel out (r[25]+_B+4+_A-_B =
r[25]+4+_A). Note this gives the initial address of the row in the A array. The
range of relative addresses for this example can be depicted algorithmically as shown
in
Figure
3. For more details on statically determining address information from
fully optimized code see [26].
Figure 3. Algorithmic Range of Relative Addresses for the Load in Figure 2. (The figure depicts the range as two nested for loops, over the outer induction register r[25] and the inner induction register r[20], with each iteration producing an address determined by _A and the current values of r[25] and r[20].)
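Conceptually, the address calculator turns a simplified address expression plus the induction information (initial value, stride, and iteration counts) into the set of virtual addresses a calculated reference can touch. The C fragment below is only a sketch of that enumeration for a doubly nested loop such as the one in Figure 2; the structure and field names are hypothetical, not the tool's actual data structures.

#include <stdio.h>

/* A calculated reference summarised by the induction information of its
   enclosing loops.  Strides are assumed to be positive. */
struct calc_ref {
    unsigned long outer_init;   /* address for the first outer iteration   */
    long outer_stride;          /* e.g. 200 bytes per row in Figure 2      */
    int  outer_iters;
    long inner_stride;          /* e.g. 4 bytes per inner iteration        */
    int  inner_iters;
};

/* Enumerate the address range touched by the reference, in access order. */
static void enumerate_addresses(const struct calc_ref *r)
{
    int i, j;
    for (i = 0; i < r->outer_iters; i++) {
        unsigned long addr = r->outer_init + (unsigned long)(i * r->outer_stride);
        for (j = 0; j < r->inner_iters; j++) {
            printf("0x%lx\n", addr);
            addr += r->inner_stride;
        }
    }
}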
Figure 4. Virtual Address Space (SunOS). (From low to high addresses: startup code, the program code segment, and the static data; the run-time program stack starts at the initial stack address at the top of the address space and grows toward low addresses.)
3.2. Calculation of Virtual Addresses
Calculating addresses that are relative to the beginning of a global variable or an
activation record is accomplished within the compiler since much of the data flow
information required for this analysis is also used during compiler optimizations.
However, calculating virtual addresses cannot be done in the compiler since the
analysis of the call graph and data declarations across multiple files is required.
Thus, an address calculator (see Figure 1) uses the relative address information in
conjunction with control-flow information to obtain virtual addresses.
Figure
4 shows the general organization of the virtual address space of a process
executing under SunOS. There is some startup code preceding the instructions
associated with the compiled program. Following the program code segment is the
static data, which is aligned on a page boundary. The run-time stack starts at
high addresses and grows toward low addresses. Part of the memory between the
run-time stack and the static data is the heap, which is not depicted in the figure
since addresses in the heap could not be calculated statically by our environment.
Static data consists of global variables, static variables, and non-scalar constants
(e.g. strings and floating-point constants). In general, the Unix linker (ld) places
the static data in the same order that the declarations appeared within the assembly
files. Also, static data within one file will precede static data in another file specified
later in the list of files to be linked. (There are some exceptions to these rules
depending upon how such data is statically initialized.)
In addition, padding between variables sometimes occurs. For instance, variables
declared as int and double on a SPARC are aligned on word and double-word
boundaries, respectively. In addition, the first static or global variable declared
in each of the source files comprising the program is aligned on a double-word
boundary.
Run-time stack data includes temporaries and local variables not allocated to
registers. Some examples of temporaries include parameters beyond the sixth word
passed to a function and memory used to move values between integer and floating-point
registers since such movement cannot be accomplished directly on a SPARC.
The address of the activation record for a function can vary depending upon the
actual sequence of calls associated with its activation. The virtual address of an
activation record containing a local variable is determined as the sum of the sizes
of the activation records associated with the sequence of calls along with the initial
run-time stack address. The address calculator (along with the static simulator and
timing analyzer) distinguishes between different function instances and evaluates
each instance separately. Once the static data names and activation records of
functions are associated with virtual addresses, the relative address ranges can be
converted into virtual address ranges.
Only virtual addresses have been calculated so far. There is no guarantee that
a virtual address will be the same as the actual physical address, which is used to
access cache memory on most machines. In this paper we assume that the system
page size is an integer multiple of the data cache size, which is often the case. For
instance, the MicroSPARC I has a 4KB page size and a 2KB data cache. Thus,
both a virtual and corresponding physical address would have the same relative
offset within a page and would map to the same line within the data cache.
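Given this assumption, the cache line selected by a reference can be computed directly from its virtual address, since the line index depends only on the offset within a page. A minimal sketch in C, using the 512 byte direct-mapped cache with 32 byte lines assumed in the experiments of Section 3.5 (the constants and function name are illustrative):

/* Map a virtual address to a direct-mapped cache line.  Because the page
   size is a multiple of the cache size, the virtual and physical addresses
   share the same offset within a page and therefore select the same line. */
#define LINE_SIZE   32u
#define CACHE_LINES 16u

static unsigned int cache_line(unsigned long vaddr)
{
    return (unsigned int)((vaddr / LINE_SIZE) % CACHE_LINES);
}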
3.3. Static Simulation to Produce Data Reference Categorizations
The method of static cache simulation is used to statically categorize the caching
behavior of each data reference in a program for a specified cache configuration (see
Figure
1). A program control-flow graph is constructed that includes the control
flow within each function, and a function instance graph which uniquely identifies
each function instance by the sequence of call sites required for its invocation. This
program control-flow graph is analyzed to determine the possible data lines that can
be in the data cache at the entry and exit of each basic block within the program
[19].
The iterative algorithm used for static instruction cache simulation [2, 19] is
not sufficient for static data cache simulation. The problem is that the calculated
references can access a range of possible addresses. At the point that the data access
occurs, the data lines associated with these addresses may or may not be brought
in cache, depending upon how many iterations of the loop have been performed
at that point. To deal with this problem a new state was created to indicate
whether or not a particular data line could potentially be in the data cache due to
WHILE any change DO
   FOR each basic block instance B DO
      input state(B) = no data lines
      calc input state(B) = no data lines
      FOR each immed pred P of B DO
         input state(B) += output state(P)
         calc input state(B) += calc output state(P)
         IF P is in another loop THEN
            input state(B) += calc output state(P) ∩ data lines(remaining in that loop)
      output state(B) = input state(B)
      calc output state(B) = calc input state(B)
      FOR each data reference D in B DO
         IF D is scalar reference THEN
            output state(B) += data line(D)
            output state(B) -= data lines(D conflicts with)
            calc output state(B) += data line(D)
            calc output state(B) -= data lines(D conflicts with)
         ELSE
            output state(B) -= data lines(D could conflict with)
            calc output state(B) += data lines(D could access)
            calc output state(B) -= data lines(D could conflict with)
Figure 5. Algorithm to Calculate Data Cache States
calculated references. When an immediate predecessor block is in a different loop
(the transition from the predecessor block to the current block exits a loop), the
data lines associated with calculated references in that loop that are guaranteed to
still be in cache are unioned into the input cache state of that block. The iterative
algorithm in Figure 5 is used to calculate the input and output cache states for
each basic block in the program control flow.
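One convenient realisation of these cache states is a bit vector with one bit per data line of the program, so that the += and -= operations of Figure 5 become bitwise unions and subtractions. The following C fragment is a minimal sketch under that assumption; the type and function names are illustrative, not the simulator's actual code.

#include <stddef.h>

#define MAX_DATA_LINES 1024
#define WORD_BITS (8 * sizeof(unsigned long))
#define STATE_WORDS ((MAX_DATA_LINES + WORD_BITS - 1) / WORD_BITS)

typedef struct { unsigned long bits[STATE_WORDS]; } cache_state;

/* state += other: union of two cache states */
static void state_union(cache_state *state, const cache_state *other)
{
    size_t i;
    for (i = 0; i < STATE_WORDS; i++)
        state->bits[i] |= other->bits[i];
}

/* state -= other: remove data lines, e.g. the lines a reference conflicts with */
static void state_subtract(cache_state *state, const cache_state *other)
{
    size_t i;
    for (i = 0; i < STATE_WORDS; i++)
        state->bits[i] &= ~other->bits[i];
}

/* state += {line}: a reference brings its own data line into the state */
static void state_add_line(cache_state *state, unsigned int line)
{
    state->bits[line / WORD_BITS] |= 1ul << (line % WORD_BITS);
}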
Once these cache state vectors have been produced, they are used to determine
whether or not each of the memory references within the bounded virtual address
range associated with a data reference will be in cache. The static cache simulator
needs to produce a categorization of each data reference in the program. The four
worst-case categories of caching behavior used in the past for static instruction
cache simulation [2] were as follows. (1) Always Miss (m): The reference is not
guaranteed to be in cache. (2) Always Hit (h): The reference is guaranteed to
always be in cache. (3) First Miss (fm): The reference is not guaranteed to be in
cache the first time it is accessed each time the loop is entered, but is guaranteed
thereafter. (4) First Hit (fh): The reference is guaranteed to be in cache the first
time it is accessed each time the loop is entered, but is not guaranteed thereafter.
These categorizations are still used for scalar data references.
int a[100][100];                        int a[100][100];
main()   /* row order sum */            main()   /* column order sum */
{                                       {
   int i, j, sum;                          int i, j, sum;
   for (i = 0; i < 100; i++)               for (j = 0; j < 100; j++)
      for (j = 0; j < 100; j++)               for (i = 0; i < 100; i++)
         sum += a[i][j];                         sum += a[i][j];
}                                       }
load of a[i][j], row order: c 25 2500   load of a[i][j], column order: m
(a) Detecting Spatial Locality
int a[50], b[50];
main()
{
   int i, j, sum, same;
   for (i = 0; i < 50; i++)
      sum += a[i];                 /* a[i] is ref 1 */
   for (i = 0; i < 50; i++)
      for (j = 0; j < 50; j++)
         if (a[i] == b[j])         /* a[i] is ref 2 and b[j] is ref 3 */
            same++;
}
ref 1: c 13 from [m h m h h h m h h h m h h h ... m h h h]
ref 2: h from [h h ... h h] due to temporal locality across loops.
ref 3: c 13 13 from [m h h m h h h ... m h] on first execution of inner loop,
       and [h h h h ... h] on all successive executions of it.
(b) Detecting Temporal Locality across and within Loops
Figure 6. Examples for Spatial and Temporal Locality
To obtain the most accuracy, a worst-case categorization of a calculated data
reference for each iteration of a loop could be determined. For example, some
categorizations for a data reference in a loop with 20 iterations might be as follows:
m h h h m h h h m h h h m h h h m h h h.
With such detailed information the timing analyzer could then accurately determine
the worst-case path on each iteration of the loop. However, consider a loop
with 100,000 iterations. Such an approach would be very inefficient in space (stor-
ing all of the categorizations) and time (analyzing each loop iteration separately).
The authors decided to use a new categorization called Calculated (c) that would
also indicate the maximum number of data cache misses that could occur at each
loop level in which the data reference is nested. The previous data reference categorization
string would be represented as follows (since there is only one loop level): c 5.
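For a single loop level this compression simply counts the iterations that may miss. A small illustrative C sketch (the 'm'/'h' string encoding and the function are ours, not part of the tool):

#include <stdio.h>
#include <string.h>

/* Count the worst-case misses in a per-iteration string such as
   "mhhhmhhhmhhhmhhhmhhh" and print the compressed categorization used
   for a single loop level, e.g. "c 5". */
static void compress_categorization(const char *per_iteration)
{
    size_t i, len = strlen(per_iteration);
    int misses = 0;
    for (i = 0; i < len; i++)
        if (per_iteration[i] == 'm')
            misses++;
    printf("c %d\n", misses);
}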
The order of access and the cache state vectors are used to detect cache hits within
calculated references due to spatial locality. Consider the two code segments in
Figure
6(a) that sum the elements of a two dimensional array. The two code
segments are equivalent, except that the left code segment accesses the array in
row order and the right code segment uses column order (i.e., the for statements
are reversed). Assume that the scalar variables (i, j, sum, and same) are allocated
to registers. Also, assume the size of the direct-mapped data cache is 256 bytes
with lines containing 16 bytes each. Thus, a single row of the array a
requiring 400 bytes cannot fit into cache. The static cache simulator was able to
detect that the load of the array element in the left code segment had at most one
miss for each of the array elements that are part of the same data line. This was
accomplished by inspecting the order in which the array was accessed and detecting
that no conflicting lines were accessed in these loops. The categorizations for the
load data reference in the two segments are given in the same figure. Note in this
case that the array happens to be aligned on a line boundary. The specification of
a single categorization for a calculated reference is accomplished in two steps for
each loop level after the cache states are calculated. First, the number of references
(iterations) performed in the loop is retrieved. Next, the maximum number of
misses that could occur for this reference in the loop is determined. For instance,
at most 25 misses will occur in the innermost loop for the left code segment. The
static cache simulator determined that all of the loads for the right code segment
would result in cache misses. Its data caching behavior can simply be viewed as an
always miss. Thus, the range of 10,000 different addresses referenced by the load
are collapsed into a single categorization of c 25 2500 (calculated reference with
25 misses at the innermost level and 2500 misses at the outer level) for the left code
segment and an m (always miss) for the right code segment.
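When a calculated reference walks through an array contiguously and no conflicting lines intervene, the per-level miss count follows directly from the element size, the line size, and the number of iterations. A small C sketch of this computation, assuming the reference starts on a line boundary:

/* Worst-case misses for a contiguous calculated reference at one loop
   level: one miss per distinct data line touched.  With 100 iterations
   over 4 byte elements and 16 byte lines this yields the 25 misses of
   the row order code in Figure 6(a). */
static int contiguous_misses(int iterations, int element_size, int line_size)
{
    long bytes = (long)iterations * element_size;
    return (int)((bytes + line_size - 1) / line_size);
}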
Likewise, cache hits from calculated references due to temporal locality both
across and within loops are also detected. Consider the code segment in Figure 6(b).
Assume a cache configuration with 32 lines of 16 bytes (512 byte cache) so that both
arrays a and b requiring 400 bytes total (200 each) fit into cache. Also assume the
scalar variables are allocated to registers. The accesses to the elements of array a
after the first loop were categorized as an h (always hit) by the static simulator
since all of the data lines associated with array a will be in the cache state once
the first loop is exited. This shows the detection of temporal locality across loops.
After the first complete execution of the inner loop, all the elements of b will be
in cache, so then all references to it on the remaining executions of the inner loop
are also categorized as hits. Thus, the categorization of c 13 13 is given. Relative
to the innermost loop, 13 misses are due to bringing b into cache during the first
complete execution of the inner loop. There are also only 13 misses relative to the
outermost loop since b will be completely in cache on each iteration after the first.
Thus, temporal locality is also detected within loops.
The current implementation of the static data cache simulator (and timing an-
alyzer) imposes some restrictions. First, only direct-mapped data caches are
supported. Obtaining categorizations for set-associative data caches can be accomplished
in a manner similar to that described in other work on instruction
caches [27]. Second, recursive calls are not allowed since it would complicate the
generation of unique function instances. Third, indirect calls are not allowed since
an explicit call graph must be generated statically.
3.4. Timing Analysis
The timing analyzer (see Figure 1) utilizes pipeline path analysis to estimate the
WCET of a sequence of instructions representing paths through loops or functions.
Pipeline information about each instruction type is obtained from the machine-dependent
data file. Information about the specific instructions in a path is obtained
from the control-flow information files. As each instruction is added separately to
the pipeline state information, the timing analyzer uses the data caching categorizations
to determine whether the MEM (data memory access) stage should be
treated as a cache hit or a miss.
The worst-case loop analysis algorithm was modified to appropriately handle
calculated data reference categorizations. The timing analyzer will conservatively
assume that each of the misses for the current loop level of a calculated reference
has to occur before any of its hits at that level. In addition, the timing analyzer
is unable to assume that the penalty for these misses will overlap with other long
running instructions since the analyzer may not evaluate these misses in the exact
iterations in which they occur. Thus, each calculated reference miss is always
viewed as a hit within the pipeline path analysis and the maximum number of cycles
associated with a data cache miss penalty is added to the total time of the path.
This strategy permits an efficient loop analysis algorithm with some potentially
small overestimations when a data cache miss penalty could be overlapped with
other stalls.
The worst-case loop analysis algorithm is given in Figure 7. The additions to the
previously published algorithm [11] to handle calculated references are shown in
boldface. Let n be the maximum number of iterations associated with a given loop.
The WHILE loop terminates when the number of processed iterations reaches n - 1 or
no more first misses, first hits, or calculated references are encountered as
misses, hits, and misses, respectively. This WHILE loop will iterate no more than
the minimum of (n - 1) or (p + r) times, where p is the number of paths and r is
the number of calculated reference load instructions in the loop.
The algorithm selects the longest path for each loop iteration [11, 10]. In order to
demonstrate the correctness of the algorithm, one must show that no other path for
a given iteration of the loop will produce a longer time than that calculated by the
algorithm. Since the pipeline effects of each of the paths are unioned, it only remains
to be shown that the caching effects are treated properly. All categorizations are
treated identically on repeated references, except for first misses, first hits, and
calculated references. Assuming that the data references have been categorized
correctly for each loop and the pipeline analysis was correct, it remains to be shown
that first misses, first hits, and calculated references are interpreted appropriately
for each loop iteration. A correctness argument about the interpretation of first
hits and first misses is given in [2].
curr iter = 0.
total cycles = 0.
pipeline info = NULL.
first misses encountered = first hits encountered = NULL.
WHILE curr iter < n - 1 DO
   Find the longest continue path.
   first misses encountered += first misses that were misses in this path.
   first hits encountered += first hits that were hits in this path.
   IF first miss or first hit encountered in this path THEN
      curr iter += 1.
      Subtract 1 from the remaining misses of each calculated reference in this path.
      Concatenate pipeline info with the union of the info for all paths.
      total cycles += additional cycles required by union.
   ELSE IF a calculated reference was encountered in this path as a miss THEN
      min misses = the minimum of the number of remaining misses of each
         calculated reference in this path that is nonzero and of (n - 1 - curr iter).
      curr iter += min misses.
      Subtract min misses from the remaining misses of each calc ref in this path.
      Concatenate pipeline info with the union of info for all paths min misses times.
      total cycles += min misses * (additional cycles required by union).
   ELSE
      break.
Concatenate pipeline info with the union of pipeline info for all paths (n - 1 - curr iter) times.
total cycles += (n - 1 - curr iter) * (additional cycles required by union).
FOR each set of exit paths that have a transition to a unique exit block DO
   Find the longest exit path in the set.
   first misses encountered += first misses that were misses in this path.
   first hits encountered += first hits that were hits in this path.
   Concatenate pipeline info with the union of the info for all exit paths in the set.
   total cycles += additional cycles required by exit union.
   Store this information with the exit block for the loop.
Figure 7. Worst-Case Loop Analysis Algorithm
The WHILE loop will subtract one from each calculated reference miss count for
the current loop in the longest path chosen on each iteration whenever there are first
misses or first hits encountered as misses or hits, respectively. Once no such first
misses and first hits are encountered in the longest path, the same path will remain
the longest path as long as its set of calculated references that were encountered as
misses continue to be encountered as misses since the caching behavior of all of the
references will be treated the same. Thus, the pipeline effects of this longest path
are efficiently replicated for the number of iterations associated with the minimum
number of remaining misses of the calculated references that are nonzero within
the longest path. After the WHILE loop, all of the first misses, first hits, and
calculated references in the longest path will be encountered as hits, misses, and
hits, respectively. The unioned pipeline effects after the WHILE loop will not
change since the caching behavior of the references will be treated the same. Thus,
the pipeline effects of this path are efficiently replicated for all but one of the
remaining iterations. The last iteration of the loop is treated separately since the
longest exit path may be shorter than the longest continue path.
A correctness argument about the interpretation of calculated references needs to
show that the calculated references are treated as misses the appropriate number
of times. The algorithm treats a calculated reference as a miss until its specified
number of calculated misses for the loop is exhausted. The IF-THEN portion of
the WHILE loop subtracts one from each calculated reference miss count since only
a single iteration is analyzed and each calculated reference can only miss once in
a given loop iteration. The ELSE-IF-THEN portion of the WHILE loop subtracts
the minimum of the misses remaining in any calculated reference for that path and
the number of iterations remaining in the loop. The number of iterations analyzed
is again the same as the number of misses subtracted for each calculated reference.
Since the misses for the calculated references are evaluated before the hits, the
interpretation of calculated references will not underestimate the actual number of
calculated misses given that the data references have been categorized correctly.
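The bookkeeping performed by the WHILE loop of Figure 7 can be sketched as follows for the simplified case in which the same path remains the longest and pipeline union effects are reduced to a fixed per-iteration overlap; this is only a schematic illustration of the accounting, not the analyzer itself, and all names are hypothetical.

/* Replicate the longest path while its calculated references still have
   misses left; each miss adds the full miss penalty on top of the path's
   pipeline time, as described in Section 3.4. */
struct path_calc_ref { int remaining_misses; int miss_penalty; };

static long replicate_longest_path(long path_pipeline_cycles, long overlap,
                                   struct path_calc_ref *refs, int nrefs,
                                   int remaining_iters)
{
    long total = 0;
    while (remaining_iters > 0) {
        int i, min_misses = remaining_iters;
        long cycles = path_pipeline_cycles - overlap;
        /* smallest nonzero remaining miss count among the refs in the path */
        for (i = 0; i < nrefs; i++)
            if (refs[i].remaining_misses > 0 && refs[i].remaining_misses < min_misses)
                min_misses = refs[i].remaining_misses;
        /* charge a miss penalty for every reference that still misses */
        for (i = 0; i < nrefs; i++)
            if (refs[i].remaining_misses > 0) {
                cycles += refs[i].miss_penalty;
                refs[i].remaining_misses -= min_misses;
            }
        total += (long)min_misses * cycles;
        remaining_iters -= min_misses;
    }
    return total;
}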
An example is given in Figure 8 to illustrate the algorithm. The if statement
condition was contrived to force the worst-case paths to be taken when executed.
Assume a data cache line size of 8 bytes and enough lines to hold all three arrays in
cache. The figure also shows the iterations when each element of each of the three
arrays will be referenced and whether or not each of these references will be a hit
or a miss. Two different paths can be taken through the loop on each iteration as
shown in the integer pipeline diagram of Figure 8. Note that the pipeline diagrams
reflect that the loads of the array elements were found in cache. The miss penalty
from calculated reference misses is simply added to the total cycles of the path and
is not directly reflected in the pipeline information since these misses may not occur
in the same exact iterations as assumed by the timing analyzer.
Table
1 shows the steps the timing analyzer uses from the algorithm given in
Figure
7 to estimate the WCET for the loop in the example shown in Figure 8.
The longest path detected in the first step is Path A, which contains references
to k[i] and c[i]. The pipeline time required 20 cycles and the misses for the
two calculated references (k[i] and c[i]) required 18 cycles. Note that each miss
penalty was assumed to require 9 cycles. Since there were no first misses, the timing
analyzer determines that the minimum number of remaining misses from the two
calculated references is 13. Thus, the path is replicated an additional 12 times.
The overlap between iterations is determined to be 4 cycles. Note that 4 is not
subtracted from the first iteration since any overlap for it would be calculated when
determining the worst-case execution time of the path through the main function.
The total time for the first 13 iterations will be 446. The longest path detected in
step 2 is also Path A. But this time all references to c[i] are hits. There are 37
remaining misses to k[i]. The total time for iterations 14 through 50 is 925 cycles.
The longest path detected in step 3 is Path B, which has 25 remaining misses to
s[i]. This results in 550 additional cycles for iterations 51 through 75. After step
3 the worst-case loop analysis has exited the WHILE loop in the algorithm. Step
Figure 8. Example to Illustrate Worst-Case Loop Analysis Algorithm. (The figure shows a loop over i with the local arrays int k[100], short s[100], and char c[100]; each iteration executes either sum += k[i]+c[i] in Path A, consisting of blocks 2, 3, & 5, or sum += s[i] in the else branch, Path B, consisting of blocks 2, 4, & 5. The figure also shows how the three arrays map onto data lines, the categorizations of the loads, k[i]: c 50, s[i]: c 25, and c[i]: c 13, and the integer pipeline diagrams for Path A, instructions 12-20 and 23-28, and Path B, instructions 12-15 and 21-28.)
4 calculates 384 cycles for the next 24 iterations (76-99). Step 5 calculates the
last iteration to require 16 cycles. The timing analyzer calculates the last iteration
separately since the longest exit path may be shorter than other paths in the loop.
The total number of cycles calculated by the timing analyzer for this example was
identical to the number obtained by execution simulation.
A timing analysis tree is constructed to predict the worst-case performance. Each
node of the tree represents either a loop or a function in the function instance graph,
where each function instance is uniquely identified by the sequence of calls resulting
in its invocation. The nodes representing the outer level of function instances are
treated as loops that will iterate only once. The worst-case time for a node is
not calculated until the time for all of its immediate child nodes are known. For
Table 1. Timing Analysis Steps for the loop in Figure 8
step   start iter   longest path   cycles   min misses   iters   additional cycles   total cycles
instance, consider the example shown in Figure 8 and Table 1. The timing analyzer
would calculate the worst-case time for the loop and use this information to next
calculate the time for the path in main that contains the loop (block 1, loop, block
6). The construction and processing of the timing analysis tree occurs in a similar
manner as described in [2, 11].
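A minimal representation of such a tree node might look as follows; the structure and field names are hypothetical and merely illustrate the information each node has to carry.

/* A node of the timing analysis tree: either a loop or a function instance
   (treated as a loop iterating once).  A node's worst-case time is computed
   only after all of its immediate child nodes have been processed. */
struct timing_node {
    int   is_function_instance;    /* nonzero for a function instance node  */
    int   max_iterations;          /* 1 for function instances              */
    long  worst_case_cycles;       /* filled in once the children are known */
    int   num_children;
    struct timing_node **children; /* immediate child loops and instances   */
};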
3.5. Results
Measurements were obtained on code generated for the SPARC architecture by
the vpo optimizing compiler [4]. The machine-dependent information contained
the pipeline characteristics of the MicroSPARC I processor [25]. A direct-mapped
data cache containing 16 lines of 32 bytes for a total of 512 bytes was used. The
MicroSPARC I uses write-through/no-allocate data caching [25]. While the static
simulator was able to categorize store data references, these categorizations were
ignored by the timing analyzer since stores always accessed memory and a hit or
miss associated with a store data reference had the same effect on performance.
Instruction fetches were assumed to be all hits in order to isolate the effects of data
caching from instruction caching.
Table
2 shows the test programs used to assess the timing analyzer's effectiveness
of bounding worst-case data cache performance. Note that these programs were
restricted to specific classes of data references that did not include any dynamic
allocation from the heap. Two versions were used for each of the last five test
programs. The a version had the same size arrays that were used in previous
studies [2, 11]. The b version of each program used smaller arrays that would
totally fit into a 512 byte cache. The number of bytes reported in the table is the
total number of bytes of the variables in the program. Note that some of these
bytes will be in the static data area while others will be in the run-time stack. The
amount of data is not changed for the program Des since its encryption algorithm
is based on using large static arrays with preinitialized values.
Table 3 depicts the dynamic results from executing the test programs. The hit
ratios were obtained from the data cache execution simulation. Only Sort had
very high data cache hit ratios due to many repeated references to the same array
elements. The observed cycles were obtained using an execution simulator, modified
from [8], to simulate data cache and pipeline effects and count the number of
cycles. The estimated cycles were obtained from the timing analyzer discussed in Section 3.4.

Table 2. Test Programs for Data Caching

Name      Bytes    Description or Emphasis
Des       1346     Encrypts and Decrypts 64 bits
Matcnta   40060    Count and Sum Values in a 100x100 Int Matrix
Matcntb   460      Count and Sum Values in a 10x10 Int Matrix
Matmula   30044    Multiply 2 50x50 Matrices into a 50x50 Int Matrix
Matmulb   344      Multiply 2 5x5 Matrices into a 5x5 Int Matrix
Matsuma   40044    Sum Values in a 100x100 Int Matrix
Matsumb   444      Sum Values in a 10x10 Int Matrix
Sorta     2044     Bubblesort of 500 Int Array
Sortb     444      Bubblesort of 100 Integer Array

The estimated ratio is the quotient of these two values. The naive ratio
was calculated by assuming all data cache references to be misses and dividing those
cycles by the observed cycles.
The timing analyzer was able to tightly predict the worst-case number of cycles
required for pipelining and data caching for most of the test programs. In fact, for
five of them, the prediction was exact or over by less than one-tenth of one percent.
The inner loop in the function within Sort that sorted the values had a varying
number of iterations that depends upon a counter of an outer loop. The number
of iterations performed was overestimated on average by about a factor of two
for this inner loop. The strategy of treating a calculated reference miss as a hit in
the pipeline and adding the maximum number of cycles associated with the miss
penalty to the total time of the path caused overestimations with the Statsa and
Statsb programs, which were the only floating-point intensive programs in the test
set. Often delays due to long-running floating-point operations could have been
overlapped with data cache miss penalty cycles. Matmula had an overestimation of about 10%, whereas the smaller data version Matmulb was exact.

Table 3. Dynamic Results for Data Caching

Name      Hit Ratio   Observed Cycles   Estimated Cycles   Est/Obs Ratio   Naive Ratio
Des       75.71%      155,340           191,564            1.23            1.45
Matcnta   71.86%      1,143,014         1,143,023          1.00            1.15
Matcntb   70.73%      12,189            12,189             1.00            1.15
Matmula   62.81%      7,245,830         7,952,807          1.10            1.24
Matmulb   89.40%      11,396            11,396             1.00            1.33
Matsuma   71.86%      1,122,944         1,122,953          1.00            1.15
Matsumb   69.98%      11,919            11,919             1.00            1.15
Sorta     97.06%      4,768,228         9,826,909          2.06            2.88
average   80.75%      N/A               N/A                1.24            1.55

The Matmul
program has repeated references to the same elements of three different arrays.
These references would miss the first time they were encountered, but would be in
cache for the smaller Matmulb when they were accessed again since the arrays fit
entirely in cache. When all the arrays fit into cache there is no interference between
them. However, when they do not fit into cache the static simulator conservatively
assumes that any possible interference must result in a cache miss. Therefore, the
categorizations are more conservative and the overestimation is larger. Finally, the
Des program has several references where an element of a statically initialized array
is used as an index into another array. There is no simple method to determine
which value from it will be used as the index. Therefore, we must assume that any
element of the array may be accessed any time the data reference occurs in the
program. This forces all conflicting lines to be deleted from the cache state and
the resulting categorizations to be more conservative. The Des program also has
overestimations due to data dependencies. A longer path deemed feasible by the
timing analyzer could not be taken in a function due to the value of a variable.
Despite the relatively small overestimations detailed above, the results show that
with certain restrictions it is possible to tightly predict much of the data caching
behavior of many programs.
The difference between the naive and estimated ratios shows the benefit of
performing data cache analysis when predicting worst-case execution times. The
worst-case performance benefit from data caching is not as significant as that
obtained from instruction caching [2, 11]. An instruction fetch occurs for each
instruction executed. The performance benefit from a write-through/no-allocate
data cache only occurs when the data reference from a load instruction is determined
by the timing analyzer to be in cache. Load instructions only comprised on
average 14.28% of the total executed instructions for these test programs. However,
the results do show that performing data cache analysis for predicting worst-case
execution time does still result in substantially tighter predictions. In fact, for the
programs in the test set the prediction improvement averages over 30%.
The performance overhead associated with predicting WCETs for data caching
using this method comes primarily from that of the static cache simulation. The
time required for the static simulation increases linearly with the size of the data.
However, even with large arrays as in the given test programs this time is rather
small. The average time for the static simulation to produce data reference categorizations
for the 11 programs given in Table 3 was only 2.89 seconds. The overhead
of the timing analyzer averages to 1.05 seconds.
4. Wrap-Around-Filling for Instruction Cache Misses
Several timing tools exist that address the hit/miss behavior of an instruction cache.
But modern instruction caches often employ various sophisticated approaches to
decrease the miss rate or reduce the miss penalty [12]. One approach to reduce
the miss penalty in an instruction cache is wrap-around fill. A processor employing
this feature will load a cache line one word at a time, starting with the instruction
that caused the cache miss.

Table 4. Order of Fill When Loading Words of a Cache Line (rows: first requested word within the cache line; columns: miss delay for each word)

For each word in the program line that is being loaded
into the cache, the associated instruction cannot be fetched until its word has
been loaded. The motivation for wrap-around fill is to let the CPU proceed with
this instruction and allow the pipelined execution to continue while subsequent
instructions are loaded into cache. Thus, the benefit is that it is not necessary to
wait for the entire cache line to be loaded before proceeding with the execution of
the fetched instruction. However, this feature further complicates timing analysis
since it can introduce dead cycles into the pipeline analysis during those cycles when
no instruction is being loaded into cache [25]. Wrap-around fill is used on several
recent architectures, including the Alpha AXP 21064, the MIPS R10000 and the
IBM 620.
Table 4 shows when words are loaded into cache on the MicroSPARC I processor
[25]. In each instruction cache line there are eight words, hence eight instructions.
The rows of the table are distinguished by which word w within a cache line was
requested when the entire line was not found in cache. The leftmost column shows
that any of the words 0-7 can miss and become the first word in its respective program
line to be loaded into cache. It takes seven cycles for the requested instruction
to reach the instruction cache. During the eighth cycle, the word with which w is
paired (w + 1 if w is even or w - 1 if w is odd) gets loaded into cache. After
each pair of words is loaded into cache, there is a dead cycle during which no word
is written. Table 4 indicates that the MicroSPARC I has dead cycles during the
ninth, twelfth and fifteenth cycles after a miss occurs. It takes seventeen cycles for
an entire program line to be requested from memory and completely loaded into
cache. Note that on the MicroSPARC I there is an additional requirement that an
entire program line must be completely loaded into cache before a different program
line can be accessed (whether or not that other program line is already in cache).
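As an illustration of this fill pattern, the following sketch (ours, not from the paper) computes the cycle, counted from the miss, at which each word of an eight-word line becomes available; the requested word arrives after 7 cycles, its partner on the next cycle, and a dead cycle follows each pair, so the line completes in 17 cycles. The order of the remaining pairs is assumed here to continue in wrap-around fashion.

WORDS_PER_LINE = 8

def fill_times(first_word):
    # cycle (relative to the miss) at which each word of the line is available
    avail = [0] * WORDS_PER_LINE
    first_pair = first_word // 2
    cycle = 7                                  # requested word reaches the cache
    for k in range(WORDS_PER_LINE // 2):       # pairs, wrapping around the line
        pair = (first_pair + k) % (WORDS_PER_LINE // 2)
        if pair == first_pair:
            first, second = first_word, first_word ^ 1
        else:
            first, second = 2 * pair, 2 * pair + 1
        avail[first] = cycle
        avail[second] = cycle + 1
        cycle += 3                             # two words written, then a dead cycle
    return avail

print(fill_times(3))   # word 3 at cycle 7, word 2 at 8; dead cycles at 9, 12, 15; line done at 17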
For wrap-around-fill analysis, information stored with each path and loop includes
the program line number and the cycles during which the words of the loop's first
and last program lines are loaded into cache. These cycles are called the available
times, and the timing analyzer calculates beginning and ending available times for
each path in a particular loop. For loop analysis, this set of beginning and ending
information is propagated along with the worst-case path's pipeline requirements
and data hazard information. Keeping track of when the words of a program line
are available in cache is analogous to determining when a particular pipeline stage is
last occupied and to detecting when the value of a register is available via hardware
forwarding. These available times are used to carry out wrap-around-fill analysis of
paths and loops. This analysis detects the delays associated with dead cycles and
cases where these delays can be overlapped with pipeline stalls to produce a more
accurate WCET prediction.
4.1. Wrap-Around-Fill Delays Within a Path
During the analysis of a single path of instructions, it is necessary to know when the
individual words of a cache line will be loaded with the appropriate instructions.
When the timing analyzer processes an instruction that is categorized as a miss, it
can automatically determine when each of the instructions in this program line will
be loaded into cache, according to the order of fill given in the machine-dependent
information. The timing analyzer stores the program line number and the relative
word number in that line for every instruction in the program. During the analysis
of a path, the timing analyzer can update information about which program line
is arriving into cache and when the words of that line are available to be fetched
without any delay. At the point the timing analyzer is finished examining a path, it
will store the information associated with the first and last program lines referenced
in this path, including the cycles during which words in these lines become available
in cache, plus the amount of delay caused by latencies from the filling of cache lines.
Such information will be useful when the path is evaluated in a larger context,
namely when the first iteration of a loop or a path in a function is entered or called
from another part of the program.
Figure 9 shows the algorithm that is used to determine the number of cycles
associated with an instruction fetch while analyzing a path. The cycles when the
words become available in the last line fetched are calculated on each miss. In order
to demonstrate the correctness of the algorithm, one must show that the required
number of cycles are calculated for the wrap-around-fill delay on each instruction
fetch. There are three possible cases. The first case is when the instruction being
fetched is in the last line fetched, which means the instruction fetch must be a hit.
Line 4 in the algorithm uses the arrival time of the associated word containing the
instruction to determine if extra cycles are needed for the IF stage. The second
and third cases are when the reference was not in the last line fetched and the
instruction fetch could be a hit or a miss, respectively. In either case, cycles for
the IF stage of the instruction have to include the delay to complete the loading of
the last line, which is calculated at line 6. Line 8 calculates the additional cycles
for the IF stage required for a miss to load the requested word in the line. Lines
9-10 establish the arrival times of the words in the line when there is a miss. All
three cases are handled. Thus, the algorithm is correct given that arrival times of
the current line preceding the first instruction in the path are accurate. Techniques
to determine the arrival times at the point a path is entered are described in the
following sections.
// matrix containing information from Table 4
const int waf delay[WORDS PER LINE][WORDS PER LINE]
// indicates when each word of the last line fetched becomes available
int available[WORDS PER LINE]
const int max delay // delay required to load the last word of a line
1: curr word num = inst word num % WORDS PER LINE.
2: first cycle = first vacancy of IF stage.
3: IF instruction in last line fetched THEN
4:    cycles in IF = max(0, available[curr word num] - first cycle).
5: ELSE
6:    previous line delay = max(0, last word avail - first cycle); cycles in IF += previous line delay.
7:    IF reference was a miss THEN
8:       cycles in IF += waf delay[curr word num][curr word num].
9:       FOR each word w in the current line DO
10:         available[w] = first cycle + previous line delay + waf delay[curr word num][w].
11:      last word avail = first cycle + previous line delay + max delay.
12:   last line fetched = line of current instruction.
13: cycles in IF += 1.
Figure 9. Algorithm to Calculate WAF Delay within a Path
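A rough Python rendering of Figure 9 is given below as an illustration only (it is our sketch, not the authors' code); waf_delay plays the role of Table 4, and the dictionary state stands in for the analyzer's per-path bookkeeping (available, last word avail, last line fetched).

def if_cycles(inst_line, inst_word_num, is_miss, first_cycle, state, waf_delay, max_delay):
    words_per_line = len(waf_delay)
    curr = inst_word_num % words_per_line
    cycles_in_if = 0
    if inst_line == state['last_line_fetched']:
        # hit in the line currently being filled: wait for its word to arrive
        cycles_in_if = max(0, state['available'][curr] - first_cycle)
    else:
        # the previous line must finish loading before another line is accessed
        prev_delay = max(0, state['last_word_avail'] - first_cycle)
        cycles_in_if += prev_delay
        if is_miss:
            cycles_in_if += waf_delay[curr][curr]
            for w in range(words_per_line):    # arrival times of the new line's words
                state['available'][w] = first_cycle + prev_delay + waf_delay[curr][w]
            state['last_word_avail'] = first_cycle + prev_delay + max_delay
        state['last_line_fetched'] = inst_line
    return cycles_in_if + 1                    # one base cycle in the IF stage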
4.2. Delays Upon Entering A Loop or Function
During path analysis, when the timing analyzer encounters a loop or a function call
in that path (child), it determines if the first instruction in the child lies in a different
program line than the instruction executed immediately before entering the loop
or function. If it does, then the first instruction in the child must be delayed from
being fetched if the program line containing the last instruction executed before
the child is still loading into cache. If the two instructions lie in the same program
line, then it is only necessary to ensure that the instructions belonging to the first
program line in the child will be available when fetched. Often, these available
times (and corresponding dead cycle delays) have already been calculated by the
child. Likewise, the available times could have been calculated in some other child
encountered earlier in the current path, i.e. in the situation where the path calls
two functions that share a program line.
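The check made on entering a child can be sketched as follows (our illustration with hypothetical names, not the paper's code).

def entry_delay(prev_line, prev_line_done_cycle, child_line, child_word_avail, fetch_cycle):
    # prev_line_done_cycle: when the line holding the last instruction before the
    # child finishes loading; child_word_avail: when the needed word of the child's
    # first program line becomes available (0 if that line is already in cache)
    if child_line != prev_line:
        # different program line: wait for the previous line to finish loading
        return max(0, prev_line_done_cycle - fetch_cycle)
    # same program line: only ensure the child's first instructions are available
    return max(0, child_word_avail - fetch_cycle)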
Figure 10 shows a small program containing a loop that has ten iterations and
comprises instructions 5-9. The first instruction in the loop is instruction 5, and
in memory this instruction is located in the same program line (0) as instructions
0-4. At the beginning of the program execution, instruction 0 misses in cache and
causes program line 0, containing instructions 0-7, to load into cache. On the first
iteration of the loop, the timing analyzer detects that instruction 5 only needs to
spend 1 cycle in the IF stage; there is no dead cycle associated with instruction 5
even though program line 0 is in the process of still being fetched into cache during
the first iteration.
Figure 10. Example Program and Pipeline Diagram for First 2 Loop Iterations (C source code, SPARC instructions 0-11 with their program line and word numbers, instruction categorizations, and the pipeline diagram)
4.3. Delays Between Loop Iterations
In the loop analysis algorithm, it is important to detect any delay that may result
from a program line being loaded into cache late in the previous iteration that
causes the subsequent iteration to be delayed. For example, consider again the
program in Figure 10. The instruction cache activity can be inferred by how long
various instructions occupy the IF stage. Before timing analysis begins, the static
cache simulator [19] had determined that instruction 8 is a first miss. The pipeline
behavior of the first two iterations of the loop is given in the pipeline diagram in
Figure 10. On the first iteration, instruction 6 is delayed
in the IF stage during cycle 16 because of the dead cycle that occurs when its
program line is being loaded into cache. Note that if instruction 6 had not been
delayed, it would have later been the victim of a structural hazard when instruction
5 occupies the MEM stage for cycles 18-19. 4 Thus, the dead cycle delay overlaps
with the potential pipeline stall. Later during the first iteration, instruction 8 is a
miss, so it must spend a total of 8 cycles in the IF stage. Program line 1 containing
instructions 8-11 (and 12-15 if they existed) takes from cycle 19 to cycle
36 to be completely loaded from the time instruction 8 is referenced. That is,
the program line finishes loading during cycle 36, so instruction 5 on the second
iteration cannot be fetched until cycle 37. This is a situation where a delay due to
jumping to a new program line takes place between loop iterations.
4.4. Results
The results of evaluating the same test programs as in Section 3.5 are shown in
Table 5. The fifth column of the table gives the ratio of the estimated cycles to
the observed cycles when the timing analyzer was executed with wrap-around-fill
analysis enabled. The sixth column shows a similar ratio of estimated to observed
cycles, but this time with wrap-around-fill analysis disabled. In this mode, the
timing analyzer assumes a constant penalty of 17 cycles for each instruction cache
miss, which is the maximum miss penalty an instruction fetch would incur. The
fetch delay actually attains this maximum only when there are consecutive misses,
as in the case when there is a call to a function and the instruction located in the
delay slot of the call and the first instruction in the function are both misses. In
this case, this second miss will incur a 17 cycle penalty because the entire program
line containing the delay slot instruction must be completely loaded first. All data
cache accesses are assumed to be hits in both the timing analyzer and the simulator
for these experiments.
Table 5. Results for the Test Programs

Name      Hit Ratio   Observed Cycles   Estimated Cycles   Ratio with w-a-f Analysis   Ratio without w-a-f Analysis
Des       86.16%      154,791           171,929            1.11                        1.38
Matcnta   81.86%      2,158,038         2,161,172          1.00                        1.12
Matmula   98.93%      4,544,944         4,547,299          1.00                        1.01
Matsuma   93.99%      1,131,964         1,132,178          1.00                        1.13
Sorta     76.06%      14,371,854        30,714,661         2.14                        2.52
average   87.58%      N/A               N/A                1.21                        1.38
The WCET estimates for these programs when wrap-around-fill analysis is enabled are significantly
tighter than when wrap-around fill is not considered. Des and Sorta have
overestimations for the same reasons as described in Section 3.5. The small overestimations
in the remaining programs primarily result from the timing analyzer's
conservative approach to first miss-to-first miss categorization transitions. These
slight overestimations also occurred when the timing analysis assumed a constant
miss penalty [11]. Because this situation occurs infrequently, this approach resulted
in only small overestimations. The overhead of executing the timing analysis was
quite small even with wrap-around-fill analysis. The average time required to produce
the WCET of the programs in Table 5 was only 1.27 seconds.
5. Future Work
There are several areas of timing analysis that can be further investigated. More
hardware features, such as write buffers and branch target buffers, could be modeled
in the timing analysis. Best case timing bounds for various types of caches and
other hardware features may also be investigated. An eventual goal of this research
is to integrate the timing analysis of both instruction and data caches to obtain
timing predictions for a complete machine. Actual machine measurements using a
logic analyzer could then be used to gauge the accuracy of our simulator and the
effectiveness of the entire timing analysis environment.
6. Conclusion
There are two general contributions of this paper. First, an approach for bounding
the worst-case data caching performance is presented. It uses data flow analysis
within a compiler to determine a bounded range of relative addresses for each
data reference. An address calculator converts these relative ranges to virtual
address ranges by examining the order of data declarations and the call graph
of the program. Categorizations of the data references are produced by a static
simulator. A timing analyzer uses the categorizations when performing pipeline
path analysis to predict the worst-case performance for each loop and function in
the program. The results so far indicate that the approach is valid and can result
in significantly tighter worst-case performance predictions.
Second, a technique for WCET prediction for wrap-around-fill caches is presented.
When processing a path of instructions, the timing analyzer computes when each
instruction in the entire program line will be loaded into cache based on instruction
categorizations that indicate which instruction fetches could result in cache
misses. The timing analyzer uses this information to determine how much delay,
if any, a fetched instruction will suffer due to wrap-around fill. When analyzing
larger program constructs such as loops or function instances, the wrap-around-fill
information associated with each path is used to detect delays beyond the scope
of a single path. The results indicate that WCET bounds are significantly tighter
than when the timing analyzer conservatively assumes a constant miss penalty.
This paper contributes a comprehensive report on methods and results of
worst-case timing analysis for data caches and wrap-around caches. The approach
taken is unique and provides a considerable step toward realistic worst-case execution
time prediction of contemporary architectures and its use in schedulability
analysis for hard real-time systems.
Acknowledgments
The authors thank the anonymous referees for their comments that helped improve
the quality of this paper and Robert Arnold for providing the timing analysis platform
for this research. The research on which this article is based was supported
in part by the Office of Naval Research under contract number N00014-94-1-006
and the National Science Foundation under cooperative agreement number HRD-
9707076.
Notes
1. A basic loop induction variable only has assignments of the form v := v ± c, where v is a
variable or register and c is an integer constant. Non-basic induction variables are also only
incremented or decremented by a constant value on each loop iteration, but get their values
either directly or indirectly from basic induction variables. A variety of forms of assignment
for non-basic induction variables are allowed. Loop invariant values do not change during the
execution of a loop. A discussion of how induction variables and loop invariant values are
identified can be found elsewhere [1].
2. Annulled branches on the SPARC do not actually access memory (or update registers) for
instructions in the delay slot when the branch is not taken. This simple feature causes a host
of complications when a load or a store is in the annulled delay slot. However, our approach
does correctly handle any such data reference.
3. This earlier computation and expansion of the initial value string of an induction register
proceeds in basically the same manner as has already been discussed, except that loop invariant
registers are expanded as well.
4. On the MicroSPARC I, a st instruction is required to spend two cycles in the MEM stage.
--R
Bounding worst-case instruction cache performance
Fixed priority pre-emptive scheduling: An historical perspective
A portable global optimizer and linker.
Programming in Vienna Fortran.
High Performance Fortran without templates.
Extending HPF for advanced data parallel applications.
A design environment for addressing architecture and compiler interactions.
A retargetable technique for predicting execution time.
Bounding pipeline and instruction cache performance.
Integrating the timing analysis of pipelining and instruction caching.
Computer Architecture: A Quantitative Approach.
Worst case timing analysis of RISC processors: R3000/R3010 case study.
Efficient worst case timing analysis of data caching.
Efficient microarchitecture modeling and path analysis for real-time software
Cache modeling for real-time software: Beyond direct mapped instruction caches
An accurate worst case timing analysis for RISC processors.
Scheduling algorithms for multiprogramming in a hard-real-time environment
Static Cache Simulation and its Applications.
Predicting program execution times by analyzing static and dynamic program paths.
Parallel Programming and Compilers.
Zeitanalyse von Echtzeitprogrammen.
Calculating the maximum execution time of real-time programs
Static analysis of cache analysis for real-time programming
Integrated SPARC Processor
Bounding Worst-Case Data Cache Performance
Timing analysis for data caches and set-associative caches
Optimizing Supercompilers for Supercomputers.
High Performance Compilers for Parallel Computing.
Pipelined processors and worst case execution times.
--TR
--CTR
Wegener , Frank Mueller, A Comparison of Static Analysis and Evolutionary Testing for the Verification of Timing Constraints, Real-Time Systems, v.21 n.3, p.241-268, November 2001
Wankang Zhao , William Kreahling , David Whalley , Christopher Healy , Frank Mueller, Improving WCET by applying worst-case path optimizations, Real-Time Systems, v.34 n.2, p.129-152, October 2006
Kiran Seth , Aravindh Anantaraman , Frank Mueller , Eric Rotenberg, FAST: Frequency-aware static timing analysis, ACM Transactions on Embedded Computing Systems (TECS), v.5 n.1, p.200-224, February 2006
Kaustubh Patil , Kiran Seth , Frank Mueller, Compositional static instruction cache simulation, ACM SIGPLAN Notices, v.39 n.7, July 2004
Yudong Tan , Vincent Mooney, Timing analysis for preemptive multitasking real-time systems with caches, ACM Transactions on Embedded Computing Systems (TECS), v.6 n.1, February 2007
Wankang Zhao , David Whalley , Christopher Healy , Frank Mueller, Improving WCET by applying a WC code-positioning optimization, ACM Transactions on Architecture and Code Optimization (TACO), v.2 n.4, p.335-365, December 2005
Aravindh Anantaraman , Kiran Seth , Kaustubh Patil , Eric Rotenberg , Frank Mueller, Virtual simple architecture (VISA): exceeding the complexity limit in safe real-time systems, ACM SIGARCH Computer Architecture News, v.31 n.2, May | wrap-around fill cache;worst-case execution time;data cache;timing analysis |
339206 | A Comparison of Graphical Techniques for Asymmetric Decision Problems. | We compare four graphical techniques for representation and solution of asymmetric decision problems-decision trees, influence diagrams, valuation networks, and sequential decision diagrams. We solve a modified version of Covaliu and Oliver's Reactor problem using each of the four techniques. For each technique, we highlight the strengths, weaknesses, and some open issues that perhaps can be resolved with further research. | Introduction
This paper compares four graphical techniques for representing and solving asymmetric decision
problems-traditional decision trees (DTs), Smith, Holtzman and Matheson's (SHM) [1993] influence
diagrams (IDs), Shenoy's [1993b, 1996] valuation networks (VNs), and Covaliu and
Oliver's [1995] sequential decision diagrams (SDDs).
We focus our attention on techniques designed for asymmetric decision problems. In a decision
tree, a path from the root to a leaf node is called a scenario. We say a decision problem is
asymmetric if the number of scenarios in a decision tree representation is less than the cardinality
of the Cartesian product of the state spaces of all chance and decision variables.
Each technique has a distinct way of encoding asymmetry. DTs encode asymmetry through
the use of scenarios. IDs encode asymmetry using graphical structures called "distribution trees."
VNs encode asymmetry using functions called "indicator valuations." Finally, SDDs encode
asymmetry by showing all scenarios in a compact fashion using graphs called "sequential deci-
sion diagrams."
The main contribution of this paper is to highlight the strengths, weaknesses, and some open
issues that perhaps can be resolved with further study of the four techniques. By strengths and
weaknesses, we mean intrinsic features we find desirable and undesirable, respectively.
An outline of the remainder of the paper is as follows. In Section 2, we give a complete
statement of a modified version of the Reactor problem [Covaliu and Oliver 1995], describe a
DT representation and solution of it, and discuss strengths, weaknesses, and open issues associated
with DTs. In Section 3, we represent and solve the same problem using Smith, Holtzman
and Matheson's IDs, and discuss strengths, weaknesses, and open issues associated with IDs. In
Section 4, we do the same using Shenoy's VNs. In Section 5, we focus on Covaliu and Oliver's
SDDs. Finally, in Section 6, we summarize our conclusions.
2 Decision Trees
In this section, we describe a DT representation and solution of a small asymmetric decision
problem called the Reactor problem, and we discuss strengths, weaknesses, and some open issues
associated with DTs. The Reactor problem is a modified version of the problem described
by Covaliu and Oliver [1995]. In our version, Bayesian revision of probabilities is required during
the solution process, and the joint utility function decomposes into only three factors.
2.1 A Statement of the Reactor Problem
An electric utility firm must decide whether to build (D 2 ) a reactor of advanced design (a), a reactor
of conventional design (c), or neither (n). If successful, an advanced reactor is more profit-
able, but is riskier. Based on past experience, a conventional reactor (C) has probability 0.980 of
no failure (cs), and a probability 0.020 of a failure (cf). On the other hand, an advanced reactor
(A) has probability 0.660 of no failure (as), probability 0.244 of a limited accident (al), and
probability 0.096 of a major accident (am). The profits for the case the firm builds a conventional
reactor are $8B if there is no failure, and -$4B if there is a failure. The profits for the case the
firm builds an advanced reactor are $12B if there is no failure, -$6B if there is a limited accident,
and -$10B if there is a major accident. The firm's utility function is a linear function of the
profits.
Before making this decision, the firm can conduct an expensive test of the components of the
advanced reactor. The test results (T) can be classified as bad (b), good (g), excellent (e) or no
result (nr). The cost of this test is $1B. If the test is done, its results are correlated with the success
or failure of the advanced reactor. The likelihoods for the test results are as follows: P(b | as) = 0,
P(g | as) = .182, P(e | as) = .818; P(b | al) = .288, P(g | al) = .565, P(e | al) = .147; P(b | am) = .313,
P(g | am) = .437, P(e | am) = .250. If the test results are bad, the Nuclear Regulatory
Commission will not permit an advanced reactor. The firm needs to decide (D 1 ) whether to conduct
the test (t), or not (nt). If the decision is nt, the test outcome is nr.
2.2 DT Representation and Solution
Figure 2.1 shows a decision tree representation and solution of this problem. The order in which
the nodes are traversed from left to right is the chronological order in which decisions are made
and/or outcomes of chance events are revealed to the decision-maker, and every available branch
at every node is explicitly shown. Thus, the decision tree gives a chronological and fully detailed
view of the structure of the decision problem.
Notice that even before the decision tree can be completely specified, the conditional probabilities
required by the decision tree representation have to be computed from those specified in
the problem using the standard procedure called preprocessing. In this procedure, given the priors
and the likelihoods, first we compute the joints, then the preposteriors, and finally the poste-
riors. Details of the results of the procedure for the Reactor problem can be found in Bielza and
Shenoy [1998].
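As an illustration only (not part of the paper), the preprocessing can be carried out in a few lines of Python from the priors and likelihoods stated in Section 2.1; the preposteriors and posteriors produced this way are the probabilities that appear on the branches of Figure 2.1.

prior = {'as': 0.660, 'al': 0.244, 'am': 0.096}
lik = {  # P(T = t | A = a), from Section 2.1
    'as': {'b': 0.000, 'g': 0.182, 'e': 0.818},
    'al': {'b': 0.288, 'g': 0.565, 'e': 0.147},
    'am': {'b': 0.313, 'g': 0.437, 'e': 0.250},
}
joint = {(a, t): prior[a] * lik[a][t] for a in prior for t in ('b', 'g', 'e')}
prepost = {t: sum(joint[(a, t)] for a in prior) for t in ('b', 'g', 'e')}   # P(T)
post = {(a, t): joint[(a, t)] / prepost[t] for (a, t) in joint}             # P(A | T)
print({t: round(p, 3) for t, p in prepost.items()})   # approx. b: .100, g: .300, e: .600
print({a: round(post[(a, 'g')], 3) for a in prior})   # approx. as: .400, al: .460, am: .140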
Figure 2.1 also shows the solution of the Reactor problem using rollback. The optimal strategy
is to do the test; build a conventional reactor if the test results are bad or good, and build an
advanced reactor if the test results are excellent. The expected profit associated with this strategy
is $8.130B.
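A compact rollback of Figure 2.1 (again an illustration, not the paper's code) reproduces this result, up to rounding, from the profits in Section 2.1, the $1B test cost, and the probabilities computed above.

EV_conv = 0.980 * 8 + 0.020 * (-4)                                    # 7.76
EV_adv = lambda p: 12 * p['as'] - 6 * p['al'] - 10 * p['am']
prior = {'as': 0.660, 'al': 0.244, 'am': 0.096}
post_g = {'as': 0.400, 'al': 0.460, 'am': 0.140}                      # P(A | T = g)
post_e = {'as': 0.900, 'al': 0.060, 'am': 0.040}                      # P(A | T = e)
prepost = {'b': 0.100, 'g': 0.300, 'e': 0.600}                        # P(T)
best_b = max(EV_conv, 0)                      # advanced design not permitted after b
best_g = max(EV_conv, EV_adv(post_g), 0)
best_e = max(EV_conv, EV_adv(post_e), 0)
ev_test = sum(prepost[t] * v for t, v in
              {'b': best_b, 'g': best_g, 'e': best_e}.items()) - 1    # test costs $1B
ev_no_test = max(EV_conv, EV_adv(prior), 0)
print(round(ev_test, 2), round(ev_no_test, 2))   # about 8.13 vs 7.76: do the test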
Figure 2.1. A decision tree representation and solution of the Reactor problem (profits in $B).
2.3 Strengths of DTs
DTs are easy to understand and easy to solve. DTs encode asymmetries through use of scenarios
without introducing dummy states for variables. If a variable is not relevant in a scenario, a DT
simply does not include it. As we will see shortly, IDs and VNs introduce dummy states for
chance and decision variables in the process of encoding asymmetry. This decreases their computational
efficiency. Like DTs, SDDs do not introduce dummy states for variables.
2.4 Weaknesses of DTs
DTs capture asymmetries globally in the form of scenarios. This contributes to the exponential
growth of the decision tree representation and limits the use of DTs to small problems. In com-
parison, IDs, VNs and SDDs capture asymmetries locally.
Although we have shown the decision tree representation using coalescence 1 [Olmsted 1983],
it should be noted that automating coalescence in decision trees is not easy since it involves constructing
the complete uncoalesced tree and then recognizing repeated subtrees. This is a major
drawback of DTs (as compared to IDs, VNs, and SDDs), and it limits the use of DT representation
to small decision problems.
2.5 Some Open Issues
To complete a DT representation of a problem, the probability model may need preprocessing,
and this makes the automation of DTs difficult. One method for avoiding preprocessing is to use
von Neumann-Morgenstern [1944] information sets to encode information constraints-see Shenoy
[1995, 1998] for details. However, adding information sets makes the resulting representation
more complex.
Since conditional independence is not explicitly encoded in probability trees, doing the preprocessing
by computing the joint probability distribution for all chance variables is computationally
intractable in problems with many chance variables. This issue can be resolved by using
a Bayesian network representation of the probability model, and Olmsted's [1983] and
Shachter's [1986] arc-reversal method can then be used to compute the probability model demanded
by the DT representation. However, this raises the issue of determining a sequence of
arc reversals so as to achieve the desired probability model with minimum computation.
3 Asymmetric Influence Diagrams
In this section, we will represent and solve the Reactor problem using Smith, Holtzman and
Matheson's [1993] (henceforth, SHM) asymmetric influence diagram technique. Although SHM
describe their technique for a single undecomposed utility function, we use Tatman and
Shachter's [1990] extension of the ID technique that allows for a decomposition of the joint utility
function. The symmetric ID technique was initially developed by Howard and Matheson
1 When a DT has repeating subtrees, they are shown just once and are pointed to by all scenarios in which they oc-
cur. This is referred to as coalescence.
[1981], Olmsted [1983], and Shachter [1986]. Modifications of the symmetric ID solution technique
have been proposed by Smith [1989], Shachter and Peot [1992], Ndilikilikesha [1994],
Jensen et al. [1994], Cowell [1994], Zhang et al. [1994], Goutis [1995], and others. Besides
SHM, asymmetric extensions of the influence diagram technique have been proposed by, e.g.,
Call and Miller [1990], Fung and Shachter [1990], and Qi et al. [1994].
3.1 ID Representation
An influence diagram representation of a problem is specified at three levels-graphical, func-
tional, and numerical. At the graphical level, we have a directed acyclic graph, called an influence
diagram, that displays decision variables, chance variables, factorization of the joint probability
distribution into conditionals, factorization of the joint utility function, and information
constraints. Figure 3.1 shows an influence diagram for the Reactor problem at the graphical
level.
Figure 3.1. An ID for the Reactor problem at the graphical level.
Note that the arcs pointing to chance nodes reflect the way in which their joint probability
distribution is currently factored, which is not necessarily the chronological order in which their
outcomes will be revealed to the decision maker. Arcs between pairs of chance nodes may be
reversed by changing the way in which the joint distribution is factored, as in applications of
Bayes' theorem. Also, note that the ID avoids the combinatorial explosion of the decision tree
(the so-called "bushy mess" phenomenon) by suppressing details of the number of branches
available at each decision or chance node. The latter information is encoded deeper down at the
functional level instead.
At the functional level, we specify the structure of the conditional distribution (or simply,
conditional) for each node (except super value nodes) in the ID, and at the numerical level, we
specify the numerical details of the probability distributions and the utilities. The key idea of the
SHM technique is a new tree representation for describing the conditionals. These are called
distribution trees with paths showing the conditioning scenarios that lead to atomic distributions
that describe either probability distributions, set of alternatives, or (expected) utilities, assigned
in each conditioning scenario. A conditional for a chance node represents a factor of the joint
probability distribution. A conditional for a decision node can be thought of as describing the
alternatives available to the decision-maker in each conditioning scenario. A conditional for a
value node represents a factor of the joint utility function. For the Reactor problem, some of the
conditionals are shown in Figure 3.2 (the complete set is found in Bielza and Shenoy [1998]).
The distribution tree for D 2 has two atomic distributions. The firm will choose among three
alternatives (conventional or advanced reactor or neither) only if it decides to not do the test (D 1
= nt) or if it conducts the test and its result is good or excellent. The conditional for D 2 is coa-
lesced, i.e., the atomic distribution with three alternatives is shared by three distinct scenarios,
and is clipped, i.e., many branches in conditioning scenarios are omitted because the corresponding
conditioning scenarios are impossible. For example, if the firm chooses to not do the
test, then it is impossible to observe any test results.
The distribution tree for T shows that if the firm decides to not perform the test (D 1 = nt),
then T = nr with probability 1, regardless of the advanced reactor state. Thus, the conditional for
T can be collapsed across A given D 1 = nt. Collapsed scenarios are shown by indicating the set
of possible states on a single edge emanating from the node. They allow the representation of
conditional independence between variables that holds only given particular outcomes of some
other variables. Deterministic atomic distributions for chance and decision variables are shown
by double-bordered nodes.
Figure 3.2. Distribution trees for decision node D 2 , chance node T, and utility node u 1 .
Although not all are shown in Figure 3.2, the conditionals for the three utility nodes provide
other examples of coalesced, clipped, and collapsed distributions. They are deterministic nodes
because we assign a single utility for each conditioning scenario. Since utility functions are always
deterministic, and we use diamond-shaped nodes to indicate utility functions, we do not
draw these nodes with a double border.
Another feature of distribution trees not illustrated in the Reactor problem is unspecified distributions
where certain atomic distributions of a chance node are left unspecified since they are
not required during the solution phase. If only the probabilities are unspecified, then we have a
partially unspecified distribution. All of these features-coalesced, clipped, collapsed, and unspecified
distributions-provide a more compact and expressive representation than the usual
table in the symmetric ID literature.
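As a small illustration (not taken from SHM), the conditional for D 2 can be stored as a mapping from the possible (D 1 , T) conditioning scenarios to atomic alternative sets; clipped scenarios are simply absent, and coalescence corresponds to several scenarios sharing one atomic object.

full_set = ['n', 'c', 'a']            # shared atomic distribution (coalescence)
restricted = ['n', 'c']               # advanced design not permitted after a bad test
conditional_D2 = {
    ('nt', 'nr'): full_set,
    ('t', 'g'): full_set,
    ('t', 'e'): full_set,
    ('t', 'b'): restricted,
}                                     # e.g. ('nt', 'g') is clipped: it cannot occur
print(conditional_D2[('t', 'g')])     # ['n', 'c', 'a']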
3.2 ID Solution
The algorithm for solving an asymmetric ID is conceptually the same as that for conventional ID.
However, SHM describe methods for exploiting different features of a distribution tree (such as
clipped scenarios, coalescence, collapsed scenarios, etc.) to simplify the computations.
We solve an ID by reducing variables in a sequence that respects the information constraints.
If the true state of a chance variable C is not known at the time the decision maker must choose
an alternative from the atomic distribution of decision variable D, then C must be reduced before
D, and vice versa. In the Reactor problem, there are two possible reduction sequences, CAD 2 TD 1
and ACD 2 TD 1 . Both of these reduction sequences require the same computational effort. In the
following, we use the first reduction sequence CAD 2 TD 1 . We use this sequence also when we
solve this problem with the VN and the SDD techniques.
We start by reducing node C. Essentially, we absorb the conditional for C into utility function u 1
using the expectation operation (following Theorem 5 in Tatman and Shachter [1990]).
The expectation operation is carried out by considering each conditioning scenario separately.
Figure 3.3 shows the ID after reducing C. Next, we reduce A. To do so, we first reverse arc (A,
T), and then absorb the posterior for A into utility function u 2 using the expectation operation.
Figure 3.3 shows the ID after reduction of A.
Next, we need to reduce D 2 . Since D 2 has two value node successors, before we reduce D 2 ,
we introduce a new super-value node w, and then we merge u 1 and u 2 into w (as per Theorem 5
in Tatman and Shachter [1990]). We reduce D 2 by maximizing w over the states of D 2 permitted
by the distribution tree for D 2 . Notice that this distribution tree (shown in Figure 3.2) has asymmetry
in the atomic alternative sets, but this is not exploited either during the reduction of A or
during the processing prior to reduction of D 2 . We will comment further on this aspect of SHM
technique in Subsection 3.5. Figure 3.3 shows the ID after reduction of D 2 .
Next, we reduce T. Notice that w is the only value node that has T in its domain. We absorb
the conditional for T into the utility function w using the expectation operation. Figure 3.3 shows
the resulting ID. Finally, we need to reduce D 1 . Since D 1 is in the domain of u 3 and w, first we
combine u 3 and w obtaining u, and then we reduce D 1 by maximizing u over the possible states
of D 1 . This completes the solution of the Reactor ID representation. Complete details of the ID
solution of the Reactor problem are found in Bielza and Shenoy [1998].
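For concreteness, the first reduction step can be sketched in a few lines (ours, not SHM's implementation): the conditional for C is absorbed into u 1 by taking expectations scenario by scenario, so the new utility depends on D 2 alone; the utility when neither reactor is built is taken to be 0.

p_C = {'cs': 0.980, 'cf': 0.020}
u1 = {('c', 'cs'): 8, ('c', 'cf'): -4,     # conventional reactor profits
      ('a', 'cs'): 0, ('a', 'cf'): 0,      # u1 does not depend on C when D2 = a
      ('n', 'cs'): 0, ('n', 'cf'): 0}      # assumed 0 when neither reactor is built
u1_after_C = {d2: sum(p_C[c] * u1[(d2, c)] for c in p_C) for d2 in ('c', 'a', 'n')}
print(u1_after_C)                          # {'c': 7.76, 'a': 0.0, 'n': 0.0}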
Figure 3.3. ID solution at the graphical level: the initial ID and the IDs after reducing C, A, D 2 , T, and D 1 .
An optimal strategy can be pieced together from the optimal decision function for D 1 (ob-
tained during reduction of D 1 ) and the optimal decision function for D 2 (obtained during reduction
of D 2 ). Of course, we get the same optimal strategy and the same maximum expected utility
as in the DT case.
3.3 Strengths of IDs
The main strength of IDs is their compactness. The size of an ID graphical representation grows
linearly with the number of variables. Also, they are intuitive to understand, and they encode
conditional independence relations in the probability model.
The asymmetric extension of IDs captures asymmetry through the notion of distribution
trees. These are easy to understand and specify. The sharing of scenarios, clipping of scenarios,
collapsed scenarios, and unspecified-distribution features of distribution trees contribute to the
expressiveness of the representation and to the efficiency of the solution technique.
In distribution trees one can mix the different kinds of atomic distributions-not only can one
mix deterministic and stochastic atomic distributions for chance nodes, one can mix stochastic
atomic distributions and atomic alternative sets for decision nodes. This feature may be useful if
the decision-maker's ability to decide is determined by some previous decision or uncertainty
[SHM 1993, p. 287].
The ID technique can detect the presence of unnecessary information in a problem by identifying
irrelevant or barren nodes [Shachter 1988]. This leads to a simplification of the original
model and to a corresponding decrease in the computational burden of solving it.
3.4 Weaknesses of IDs
The ID technique is most suitable for problems in which we have a conditional probability model
(also called a Bayesian network model) of the uncertainties. This is typical of problems in which
the modeling of probabilities is done by a human expert. However, for problems in which a
probability model is induced from data, the corresponding probability model is not always a
Bayesian network model. In this case, the use of ID technique is problematic, i.e., it may require
extensive and unnecessary preprocessing to complete an ID representation [Shenoy 1994a].
3.5 Some Open Issues
The asymmetric ID graphical representation does not distinguish between pure informational
arcs and conditioning arcs for decision variables. Thus, we cannot predict the domain of the conditional
for decision variables from the graphical representation alone. One of the most attractive
aspects of graphical models (Bayes nets, symmetric IDs, VNs, etc.) is that one can determine
domains of functions directly from the graphical level description. This is the essence of encoding
conditional independence by graphs. This aspect of asymmetric IDs can be easily resolved by
having two kinds of arcs that lead to decision variables. One kind can be interpreted as conditional
as well as informational, and the other can be interpreted as purely informational.
Our major concerns with the SHM representation center around having information about
asymmetries in the problem stored in many distribution trees in the diagram. For example, in the
Reactor problem, consider the distribution trees for T and D 2 (shown in Figure 3.2). Notice that,
e.g., in the distribution tree for T, if D 1 = nt, then T = nr with probability 1. The clipping of
T in the distribution tree for D 2 describes the same information. This repetition raises questions
about the consistency and efficiency of the representation and the solution technique. First, the
redundant specification of information may be inefficient when assessing these distributions; the
user may have to repeatedly clip or collapse these same scenarios for many distributions. Second,
if the user fails to clip or collapse scenarios in some distribution trees, (s)he may do unnecessary
calculations for these scenarios when solving the influence diagram. Third, even if the user represents
all clipped/collapsed scenarios in all distribution trees, there is still the possibility of
some unnecessary computation since the solution algorithm does not have access to all asymmetric
information at all times. For example, in the Reactor problem, consider the situation immediately
after reduction of node A shown in Figure 3.3. u 2 has just inherited two new predecessors T
and D 1 , and its distribution tree has some conditioning scenarios that are not possible such as D 1
= t, T = b, and D 2 = a. The absence of this scenario is encoded in the distribution tree for D 2 (see
Figure 3.2), but we do not use this information until it is time to reduce D 2 . Finally, the redundant
specification creates a need for consistency: we need to somehow ensure that scenarios that
are in fact possible are not inadvertently clipped in one of the distribution trees. Similarly, if we
use the "unspecified distribution" feature, we need to be sure that we really do not need that distribution
to answer particular questions. We can always check the consistency of an influence
diagram by attempting to solve it (perhaps not carrying out the numeric calculations) and seeing
if any required information has been clipped or left unspecified. It would be nice, however, if
there were some simpler tests (perhaps along the lines of those used in VNs, Shenoy 1993b) that
could determine whether the ID is sufficiently defined to answer specific questions.
The efficiency concerns can be at least partially addressed by "propagating" clipped scenarios
during the assessment phase, as suggested by SHM (p. 288, top of right column). For exam-
ple, if the user has specified the alternatives for D 1 and the distribution for T prior to specifying
the distribution for D 2 , then the system can figure out which combinations of D 1 and T are impossible
(e.g., the combination D 1 = nt and T = b) and automatically clip the corresponding
branches in the distribution tree for D 2 . By propagating clipping in this way, we save
the user the trouble of repeatedly doing this, thereby making the user more efficient and less
likely to specify scenarios that should be clipped and less likely to clip scenarios that are possi-
ble. To propagate clipping in this way, the user must define distribution trees in a sequence consistent
with the partial order defined by the influence diagram. Note, however, that this propagation
does not make use of the numeric probabilities in the representation; it depends only on the
specification of possible and impossible events. This propagation process is somewhat similar to
the calculation of "effective state spaces" in VNs, as described in Shenoy [1993b].
4 Asymmetric Valuation Networks
In this section, we will represent and solve the Reactor problem using Shenoy's [1993b, 1996]
asymmetric valuation network technique. The symmetric VN technique is described in [Shenoy
1993a] for the case of a single undecomposed utility function, and in [Shenoy 1992] for the case
of an additive decomposition of the joint utility function.
4.1 VN Representation
A valuation network representation is specified at three levels-graphical, dependence, and nu-
merical. The graphical and dependence levels refer to qualitative (or symbolic) knowledge,
whereas the numerical level refers to quantitative knowledge.
At the graphical level, we have a graph called a valuation network. Figure 4.1 shows a
valuation network for the Reactor problem. A valuation network consists of two types of
nodes-variable and valuation. Variables are further classified as either decision or chance, and
valuations are further classified as indicator, probability, or utility. Thus, in all there are five different
types of nodes-decision, chance, indicator, probability, and utility.
Decision nodes correspond to decision variables and are depicted by rectangles. Chance
nodes correspond to chance variables and are depicted by circles. This part of VNs is similar to
IDs.
Figure 4.1. A valuation network for the Reactor problem.
Indicator valuations represent qualitative constraints on the joint state spaces of decision and
chance variables and are depicted by double-triangular nodes. The set of variables directly connected
to an indicator valuation by undirected edges constitutes the domain of the indicator
valuation. In the Reactor problem, there are two indicator valuations labeled d 2 and t 2 . d 2 's domain
is {D 1 , T, D 2 } and it represents the constraints that the test results are available only in the
case we decide to do the test, and that the alternatives at D 2 depend on the choices at D 1 and the
test results T. t 2 's domain is {T, A} and it represents the constraint that if A = as, then T = b is
not possible.
Utility valuations represent additive factors of the joint utility function and are depicted by
diamond-shaped nodes. The set of variables directly connected to a utility valuation constitutes
the domain of the utility valuation. In the Reactor problem, there are three additive utility valuations
labeled u 1 , u 2 , and u 3 , with domains {D 2 , C}, {D 2 , A}, and {D 1 }, respectively.
Probability valuations represent multiplicative factors of the family of joint probability distributions
for the chance variables in the problem, and are depicted by triangular nodes. Thus, in
a VN, information about the current factorization of the joint probability distribution of chance
variables is carried by the additional probability valuation nodes, rather than by the directions of
arcs pointing to chance nodes. The set of all variables directly connected to a probability valuation
constitutes the domain of the probability valuation. In the Reactor problem, there are three
probability valuations labeled t 1 , a, and c, with domains {A, T}, {A}, and {C}, respectively.
The specification of the valuation network at the graphical level includes directed arcs between
pairs of distinct variables. These directed arcs represent information constraints. Suppose
R is a chance variable and D is a decision variable. An arc (R, D) means that the true state of R is
known to the decision maker (DM) at the time the DM has to choose an alternative from D's
state space (as in an ID). Conversely, an arc (D, R) means that the true state of R is not known to
the DM at the time the DM has to choose an alternative from D's state space.
Next, we specify a valuation network representation at the dependence level. At this level,
we specify the state spaces of all variables and we specify the details of the indicator valuations.
Associated with each variable X is a state space Ω X . As in the cases of IDs and SDDs, we
assume that all variables have finite state spaces. Suppose s is a subset of variables. An indicator
valuation for s is a function i: Ω s → {0, 1}. An efficient way of representing an indicator valuation
is simply to describe the elements of the state space that have value 1, i.e., we represent i by
W i = {x ∈ Ω s | i(x) = 1}. To minimize jargon, we also call W i an
indicator valuation for s. In the Reactor problem, the details of the two indicator valuations are as
follows: W d 2 = {(nt, nr, n), (nt, nr, c), (nt, nr, a), (t, b, n), (t, b, c), (t, g, n), (t, g, c), (t, g, a),
(t, e, n), (t, e, c), (t, e, a)}; W t 2 = {(as, nr), (as, g), (as, e), (al, nr), (al, b), (al, g), (al, e),
(am, nr), (am, b), (am, g), (am, e)}. Notice that the indicator valuation W d 2 is identical to the
scenarios in the distribution tree for D 2 depicted in Figure 3.2. The indicator valuation W t 2
rules out the scenario (as, b), i.e., the combination A = as and T = b.
Before we can specify the valuation network at the numerical level, it is necessary to introduce
the notion of effective state spaces for subsets of variables. Suppose that each variable is in
the domain of some indicator valuation. (If not, we can create "vacuous" indicator valuations that
are identically one for every state of such variables.) We define combination of indicator valuations
as pointwise Boolean multiplication, and marginalization of an indicator valuation as Boolean
addition over the state space of reduced variables. Then, the effective state space for a subset
s of variables, denoted by W s , is defined as follows: First we combine all indicator valuations that
include some variable from s in their domains, and next we marginalize the combination so that
only the variables in s remain in the marginal. Shenoy [1994b] has shown that these definitions
of combination and marginalization satisfy the three axioms that permit local computation [She-
noy and Shafer 1990]. Thus, the computation of the effective state spaces can be done efficiently
using local computation. For example, to compute the effective state space for {A, T}, by definition
W {A, T} = (d 2 ⊗ t 2 ) ↓{A, T} (where ⊗ denotes combination of valuations d 2 and t 2 , and ↓{A, T}
denotes marginalization of the joint valuation d 2 ⊗ t 2 down to the domain {A, T}).
However, using local computation, it can be computed more efficiently as follows: W {A, T} =
((d 2 ) ↓{T} ⊗ t 2 ) ↓{A, T} . Details of local computation are found in Shenoy [1993b].
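As an illustration (ours, not from the paper), this local computation amounts to a projection of the indicator supports followed by a pointwise product.

W_d2 = {('nt', 'nr', 'n'), ('nt', 'nr', 'c'), ('nt', 'nr', 'a'),
        ('t', 'b', 'n'), ('t', 'b', 'c'),
        ('t', 'g', 'n'), ('t', 'g', 'c'), ('t', 'g', 'a'),
        ('t', 'e', 'n'), ('t', 'e', 'c'), ('t', 'e', 'a')}        # elements of (D 1, T, D 2)
W_t2 = {('as', 'nr'), ('as', 'g'), ('as', 'e'),
        ('al', 'nr'), ('al', 'b'), ('al', 'g'), ('al', 'e'),
        ('am', 'nr'), ('am', 'b'), ('am', 'g'), ('am', 'e')}      # elements of (A, T)

d2_down_T = {t for (_, t, _) in W_d2}                 # marginal of d 2 on {T}: Boolean addition
W_AT = {(a, t) for (a, t) in W_t2 if t in d2_down_T}  # combination, restricted to {A, T}
print(W_AT == W_t2)                                   # True: only (as, b) is excluded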
Finally, we specify a valuation network at the numerical level. At this level, we specify the
details of the utility and probability valuations. A utility valuation u for s is a function u: W s → R,
where R is the set of real numbers. The values of u are utilities. In the Reactor problem, there are
three utility valuations. One of these is shown in Table 4.1, and we refer the reader to Bielza and
Shenoy [1998] for the complete description.
Table 4.1. Utility valuation u 1 and probability valuation t 1 in the Reactor problem.

  u 1 (domain {D 2 , C}):   (a, cs) = 0;  (a, cf) = 0;  (c, cs) = 8;  (c, cf) = -4.

  t 1 (domain {A, T}):      (as, e) = .818;  (al, nr) = 1;  (al, b) = .288;  (al, g) = .565;
                            (al, e) = .147;  (am, nr) = 1;  (am, b) = .313;  (am, g) = .437;
                            (am, e) = .250.
A probability valuation p for s is a function p: W s → [0, 1]. The values of p are probabilities.
In the Reactor problem, there are three probability valuations. One of these is shown in Table
4.1. What do these probability valuations mean? c is the marginal for C, a is the marginal for A,
and t 1 ⊗ d 2 ⊗ t 2 is the conditional for T given A and D 1 . Thus the conditional for T factors
into three valuations such that t 1 has the numeric information, and d 2 and t 2 include the structural
information.
Notice that the utility and probability valuations are described only for effective state spaces
that are computed using local computation from the specifications of the indicator valuations.
There is no redundancy in the representation. However, in u 1 , unlike the ID representation, the
irrelevance of C in scenarios where D 2 = n or D 2 = a is not represented in the VN representation
because we are unable to do so. Also, in u 2 , the irrelevance of A in scenarios where D 2 = n or
D 2 = c is not represented. We will comment further on these issues in Subsection 4.5.
4.2 VN Solution
In this section, first we sketch the fusion algorithm for solving valuation network representations
of decision problems, and then we solve the Reactor problem.
The fusion algorithm is essentially the same as in the symmetric case [Shenoy 1992]. The
main difference is in how indicator valuations are handled. Since indicator valuations are identically
one on effective state spaces, there are no numeric computations involved in combining
indicator valuations. Indicator valuations do contribute domain information and cannot be totally
ignored. In the fusion algorithm, we reduce a variable by doing a fusion operation on the set of
all valuations (utility, probability, and indicator) with respect to the variable. All numeric computations
are done on effective state spaces only. This means that the effective state spaces may
need to be computed prior to doing the fusion operation if the effective state space has not been
already computed during the representation phase.
Fusion with respect to a decision variable D is defined as follows. The utility, probability,
and indicator valuations whose domains do not include D remain unchanged. All utility valuations
that include D in their domain are combined together, and the resulting utility valuation u is
marginalized such that D is eliminated from its domain. A new indicator valuation z D corresponding
to the decision function for D is created. All probability and indicator valuations that
include D in their domain are combined together and the resulting probability valuation r is
combined with z D and the result is marginalized so that D is eliminated from its domain.
Fusion with respect to a chance variable C is defined as follows. The utility, probability, and
indicator valuations whose domains do not include C remain unchanged. A new probability
valuation, say r, is created by combining all probability and indicator valuations whose domain
include C and marginalizing C out of the combination. Finally, we combine all probability and
indicator valuations whose domains include C, divide the resulting probability valuation by the
new probability valuation r that was created, combine the resulting probability valuation with
the utility valuations whose domains include C, and finally marginalize the resulting utility
valuation such that C is eliminated from its domain. In some special cases-such as if r is identically
one, or if C is the only chance variable left-we can avoid creating a new probability
valuation and the corresponding division.
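As a rough illustration of the bookkeeping involved, here is a hedged Python sketch of fusion with respect to a chance variable in the special case where the new probability valuation is identically one, so no division is required (the valuation representation, function name and the numerical values are assumptions in the spirit of the Reactor example, not the authors' code).

```python
# Valuations are dicts mapping configurations (tuples of states in a fixed
# variable order) to numbers; var_index locates the chance variable in the
# utility valuation's configuration tuples.
def fuse_chance(prob, util, var_index):
    """Fuse out a chance variable when its marginal probability is one."""
    new_util = {}
    for config, u in util.items():
        key = config[:var_index] + config[var_index + 1:]   # drop the chance var
        new_util[key] = new_util.get(key, 0.0) + prob[(config[var_index],)] * u
    return new_util

# Assumed numbers: utility u1 on (D2, C) and marginal probability c on C.
u1 = {("c", "cs"): 8.0, ("c", "cf"): -4.0, ("a", "cs"): 0.0, ("a", "cf"): 0.0}
c = {("cs",): 0.98, ("cf",): 0.02}
print(fuse_chance(c, u1, var_index=1))   # approximately {('c',): 7.76, ('a',): 0.0}
```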
The solution of the Reactor problem using the fusion algorithm is as follows.
Fusion with respect to C. First we fuse the valuations whose domains include C with respect
to C. Since c marginalized over C is identically one, we can avoid creating a new probability
valuation and the corresponding division. Let u 4 denote (u 1 ⊗ c)↓{D 2}. The details of the
numerical computation involved are shown in Table 4.2. The result of fusion with respect to C
is shown graphically in Figure 4.2.
Fusion with respect to A. Next, we fuse the valuations whose domains include A with respect
to A, which produces a new utility valuation u 5 and a new probability valuation t'. The result
of fusion with respect to A is shown graphically in Figure 4.2. Notice
that all computations are done on effective state spaces, and so we need to compute the effective
state space of {T, D 2 , A} prior to doing the fusion since it has not been already computed during
the representation stage (see Bielza and Shenoy [1998] for details).
Table 4.2. Details of fusion with respect to C.

  D 2   C     u 1     c      product    sum over C
  c     cs     8     0.98     7.840       7.760
  c     cf    -4     0.02    -0.080
  a     cs     0     0.98     0           0
  a     cf     0     0.02     0
Figure 4.2. Fusion in VNs at the graphical level. (Panels: initial VN; 1. after fusion wrt C;
2. after fusion wrt A; 3. after fusion wrt D 2 ; 4. after fusion wrt T; 5. after fusion wrt D 1 .)
Fusion with respect to D 2 . Next we fuse {d 2 , u 3 , u 4 , u 5 , t'} with respect to D 2 . Since D 2 is a
decision variable, the fusion creates a new indicator valuation z D 2 , the indicator function
representation of the decision function for D 2 . Let u 6 denote the marginal of (u 4 ⊗ u 5 ) with
D 2 eliminated, and d 2 ' denote the marginal of (d 2 ⊗ z D 2 ) with D 2 eliminated. The result of
fusion with respect to D 2 is shown graphically in Figure 4.2.
Fusion with respect to T. Next we fuse {d 2 ', u 3 , u 6 , t'} with respect to T. Since T is the
only chance variable left, we can avoid creating a new probability valuation and the corresponding
division; the probability and indicator valuations containing T are combined with the utility
valuations containing T, and T is marginalized out. Let u 7 denote the resulting utility valuation.
The result of fusion with respect to T is shown graphically in Figure 4.2.
Fusion with respect to D 1 . Next, we fuse {u 3 , u 7 } with respect to D 1 . Since D 1 is a decision
variable, the fusion combines u 3 and u 7 , marginalizes D 1 out of the combination, and creates
the indicator valuation z D 1 representing the decision function for D 1 . The result of fusion with
respect to D 1 is shown graphically in Figure 4.2.
This completes the fusion algorithm. An optimal strategy can be pieced together from the
decision functions for D 1 and D 2 . The optimal strategy and maximum expected utility are the
same as in the DT and ID case. Complete details of the VN solution of the Reactor problem can
be found in Bielza and Shenoy [1998].
4.3 Strengths of VNs
Like IDs, VNs are compact and they encode conditional independence relations in the probability
model [Shenoy 1994c]. Unlike IDs, the VN technique can represent directly every probabilistic
model, without any preprocessing. All that is required is a factorization of the joint probability
distribution for the chance variables.
The information constraints representation is more flexible in VNs than in IDs. In IDs, all
decision nodes have to be completely ordered. This condition is called "no-forgetting" [Howard
and Matheson 1981]. In VNs, there is a weaker requirement called "perfect recall" [Shenoy
1992]. The perfect recall requirement can be stated as follows. Given any decision variable D
and any chance variable C, it should be clear whether the true state of C is known or unknown
when a choice has to be made at D. The flexibility of information constraints will offer a greater
number of allowable reduction sequences than the other techniques. Of course, the perfect recall
condition can be easily adapted to the ID domain.
The VN representation technique captures asymmetry through the use of indicator valuations
and effective state spaces. Indicator valuations encode structural asymmetry modularly with no
duplication, and the effective state space for a subset of variables contains all structural
asymmetry information that is relevant for that subset. This contributes to the parsimony of the
representation.
In VNs, the joint probability distribution can be decomposed into functions with smaller domains
than in IDs. This is so because IDs insist on working with conditionals. For example, the
conditional for T has the domain {D 1 , A, T} in the ID (as seen in Figure 3.2), and the valuation
t 1 has the domain {A, T} in the VN (as seen in Table 4.1). The distribution tree for T in the ID
could be computed from the VN as d 2 ⊗ t 1 ⊗ t 2 .
One implication of this decomposition is
that during the solution phase, the computation is more local, i.e., it involves fewer variables,
than in the case of IDs. For example, in the ID technique, reduction of A involves variables D 1 ,
T, D 2 , and A (as deduced from Figure 3.3), whereas in the VN technique, reduction of A only
involves variables T, D 2 , and A (as deduced from Figure 4.2).
VNs do not perform unnecessary divisions done in DTs, IDs, and in SDDs. In DTs, these un-necessary
divisions are done during the preprocessing stage. In IDs and SDDs, the unnecessary
divisions are done during arc reversal. For symmetric problems, Ndilikilikesha [1994] and Jensen
et al. [1994] have suggested modifications to the ID solution technique to avoid these unnecessary
divisions. These modifications need to be generalized to the asymmetric case. In general,
with arbitrary potentials and an additive decomposition of the utility function, divisions are often
necessary if we want to take advantage of local computation. In this case, VNs, IDs and SDDs
are similar. This is the situation in the Reactor problem, i.e., all divisions done in this problem
are necessary.
Finally, the VN technique includes conditions that tell us when a representation is well defined
for computing an optimal strategy [Shenoy 1993b]. These conditions are useful in automating
the technique.
4.4 Weaknesses of VNs
The modeling of conditionals is not as intuitive in VNs as in IDs. For example, in the Reactor
problem, the probability valuation t 1 is not a true conditional; it is only a factor of the
conditional, i.e., t 1 ⊗ d 2 ⊗ t 2 is the conditional for T given D 1 and A. This factoring of
conditionals into valuations with smaller domains makes it difficult to attach semantics to the
probability valuations, and this may make the representation difficult or non-intuitive.
In VNs, the specification of a decision problem is done sequentially as follows. First, the user
specifies the VN diagram. Next, the user specifies the state spaces of all decision and chance
nodes, and all indicator valuations. Finally, the user specifies the numerical details of each probability
and utility valuations for configurations in the effective state spaces that are computed
using local computation from the indicator valuations. Some users may find this sequencing too
constraining.
VNs show explicitly the probability distributions as nodes, which implies a greater number of
nodes and edges in the diagram and probably more confusion when representing problems with
many variables.
4.5 Some Open Issues
A major issue of VNs is their inability to model some asymmetry. For example, in the Reactor
problem, we are unable to model the irrelevance of node C for the scenarios where D 2 = n or D 2 = a in the
utility valuation u 1 . This issue perhaps can be resolved by adapting the collapsed scenario feature
of IDs to VNs.
In comparison with IDs, VNs are unable to use sharing of scenarios and collapsed scenario
features of IDs. Consequently, a VN representation may demand more space than a corresponding
ID representation that can take advantage of these features. Also, the inability to use sharing
and collapsed scenarios features has a computational penalty. For example, in the Reactor prob-
lem, reduction of C requires 9 arithmetic operations in VNs as compared to 3 in the case of IDs,
and reduction of A requires 80 operations in VNs as compared to 39 in the case of IDs. This issue
can be perhaps be resolved by adapting the sharing and collapsed scenario features of IDs to
VNs. However, VNs can and do represent clipping of scenarios through the use of effective state
spaces. The elements of an effective state space include the unclipped conditioning scenarios.
Also, VNs can represent partially unspecified distributions by simply not specifying the values
for particular elements of the effective state space. However to avoid the problem of determining
when a representation is completely specified for computation of an optimal strategy, it may be
better to not use this feature of IDs.
5 Sequential Decision Diagrams
In this section, we will represent and solve the Reactor problem using Covaliu and Oliver's
[1995] sequential decision diagram technique. The SDD technique is described either for a
problem in which the utility function is undecomposed, or for a problem in which the utility
function decomposes into additive (or multiplicative) factors such that each factor has only one
variable in its domain. Since our version of the Reactor problem is not in either of these two
categories, first we combine the three utility factors and then we use the undecomposed version
of the SDD technique to represent and solve the Reactor problem.
5.1 SDD Representation
In this technique, a decision problem is modeled at two levels, graphical and numerical. At the
graphical level, we model a decision problem using two directed graphs-an ID to describe the
probability model, and a new diagram, called a sequential decision diagram, which captures the
asymmetric and the information constraints of the problem. Figure 5.1 shows an ID and a SDD
for the Reactor problem.
Figure 5.1. The initial ID and the SDD for the Reactor problem.
At the numerical level, we specify the conditionals for each chance node in the ID, and data
built from both diagrams are organized in a formulation table, similar to the one used by
Kirkwood [1993], in such a way that the recursive algorithm used in the solution process can
easily access the data contained in it.
A SDD is a directed acyclic graph, with the same set of nodes as in the ID. However, its
paths show all possible scenarios in a compact way, as if it were a schematic decision tree, i.e., a
decision tree in which all branches from a decision or chance node leading to the same generic
successor node are collapsed together. A SDD is said to be proper if (i) there is only one source
node (a node with no arrows pointing to it), (ii) there is only one sink node (a node with no arrows
emanating from it) and it is the value node, and (iii) there is a directed path that contains all
decision nodes.
In the SDD for the Reactor problem, the arc (D 1 , T) with the label t tells us that if we perform
the test (D 1 = t), then we will observe its result (b, g, or e). Arc (D 1 , D 2 ) with label nt tells us
that we will not observe T when D 1 = nt. Arcs (D 2 , u), (D 2 , C) and (D 2 , A) show that A is relevant
only if D 2 = a and C is relevant only if D 2 = c. The label over the arc (D 2 , A) indicates
dependence on realized states at predecessor nodes, i.e., the alternative D 2 = a is available only if
the test result T is not b. The six directed paths from D 1 to u in the SDD are a compact representation of the
twenty-one possible scenarios in the decision tree representation (Figure 2.1).
Figure 5.2. The transformed ID.
Notice that the partial order implied by the arrows in an ID may be different from the partial
order implied by the arrows in a corresponding SDD. Let < D and < I denote the partial orders in
SDD and ID respectively. If C is a chance node, D is a decision node, and C < I D implies
C < D D, then we say the ID and SDD are compatible [Covaliu and Oliver 1995]. In Figure 5.1,
e.g., we have A < I D 2 (since there is a directed path from A to D 2 in the ID), and D 2 < D A (since
there is an arrow from D 2 to A in the SDD). Therefore the two diagrams are incompatible. The
next step in completing the SDD representation is to transform the ID so that it is compatible
with the SDD. In the Reactor problem, we must reverse the arc (A, T) in the ID to make the ID
compatible with the SDD. The transformed ID is shown in Figure 5.2.
Next, we organize data in the formulation table, which contains the complete information the
solution algorithm will require. Table 5.1 is the formulation table for the Reactor problem. Not
all details of the utility function u are shown here-see Bielza and Shenoy [1998] for details.
Table 5.1. A formulation table for the Reactor problem. The columns are: node name, node type,
standard histories (minimal histories in bold), state space, probability distribution, and
next-node function. The legible entries include the following. D 1 (decision) has state space
{nt, t} and next nodes D 2 (if nt) and T (if t). D 2 (decision) has state space {n, c, a} and next
nodes u, C and A, respectively. A (chance) has minimal histories over D 1 , T and D 2 ; for
(nt, -, a) the distribution over {as, al, am} is .660, .244, .096; for (t, g, a) it is .400, .460, .140;
for (t, e, a) it is .900, .060, .040; the next node is u in each case. C (chance) has distribution
.98, .02 over {cs, cf} and next node u. The value node u has entries such as
u(nt, -, c, -, cs) = 8 and u(t, e, a, am) = -11.
The formulation table has a row for each node in the SDD. If X < D Y, then the row for X
precedes the row for Y. Each row includes node name, node type, standard histories and minimal
histories, state space, conditional distribution (for chance nodes only), and next-node function. It
should be noted that the formulation table is part of the representation of the decision problem.
The term history refers to how one gets to a node through the directed paths in the SDD. It
can be represented as a 2-row matrix, the first row listing a node sequence of all nodes that precede
it in the partial order, and the second row listing the corresponding realized states. The nextnode
function (in the last column) denotes the node that is realized after a node for each of its
states and for each minimal history. There are different kinds of histories. Minimal histories are
sufficient for defining node state spaces, probability distributions (for chance nodes), and next
node functions. For a decision node, the minimal histories will include those variables that affect
its state space, and its next-node function. For example, for D 2 , variable T is the only one under
these conditions. So, at node D 2 we have the minimal histories T = -, T = b, T = g and T = e,
where - denotes the absence of T in a path to D 2 , i.e., when D 1 = nt. For a chance node, the
minimal histories will include the nodes that suffice for defining its next-node function, and its
conditional probability distribution. For example, for C, the set of minimal histories is the empty
set. For a value node, the minimal histories will include the nodes that suffice to define the values
of the corresponding utility function and they are the direct predecessors of u in the ID.
As we will see, minimal histories are not always sufficient to solve a decision problem. We
need a new kind of history called relevant history. The node sets of relevant histories contain the
node sets of minimal histories and are contained in the node sets of full histories. Also relevant
histories can be computed from minimal and full histories. Therefore, we do not show relevant
histories in the formulation, just full and minimal histories.
5.2 SDD Solution
Let w N (H N ) denote the maximum expected utility at node N of the SDD given history H N ,
assuming optimal decisions are made at N (if N is a decision node) and from there onwards.
The solution technique is based on the same backward recursive relations used in decision
trees, but here we use a new kind of history called relevant history. We cannot use only minimal
histories because, when calculating w N (H N ), we may reference the next nodes n N and their
histories H n N , and w N (H N ) is not well defined if there exists at least one next node n N whose
history involves nodes outside the node set of H N . We obtain the node sets in the relevant histories
by enlarging the node sets in minimal histories by those SDD predecessors that appear in the
node sets of relevant histories of any of the direct successors nodes. Covaliu and Oliver [1995]
give a recursive definition of this term. The solution algorithm then follows a backward recursive
method. We will describe in detail a part of the solution-the reduction of C.
Reduction of Node C. Let w u denote the utility function associated with node u in the formulation
table. The relevant history for node C includes nodes D 1 and D 2 (since C's minimal history node
set is empty, C's successor is node u, u's minimal history node set is {D 1 , D 2 , C, A}, and the set
of predecessors of C is {D 1 , T, D 2 }). For the history (D 1 = nt, D 2 = c),
w C (nt, c) = .98 w u (nt, -, c, -, cs) + .02 w u (nt, -, c, -, cf) = .98(8) + .02(-4) = 7.76.
Similarly, w C is computed for the other relevant histories of C.
For further details, see Bielza and Shenoy [1998].
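A one-line check of the arithmetic above (a sketch; the history encoding is an assumption):

```python
# Expected utility at C for the history (D1 = nt, D2 = c), using the
# conditional distribution of C and the utilities from the formulation table.
p = {"cs": 0.98, "cf": 0.02}
w_u = {"cs": 8.0, "cf": -4.0}        # w_u(nt, -, c, -, cs) = 8, w_u(nt, -, c, -, cf) = -4
w_C = sum(p[x] * w_u[x] for x in p)  # = 0.98*8 + 0.02*(-4)
print(w_C)                           # 7.76
```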
5.3 Strengths of SDDs
The main strength of SDDs is their ability to represent asymmetry compactly. A SDD can be
thought of as a compact (schematic) version of a DT. Thus, we get the intuitiveness of DTs
without their combinatorial explosion.
Like DTs, SDDs model asymmetry without adding dummy states to variables. This is in
contrast to IDs and VNs that enlarge the state spaces of some variables in order to represent
asymmetry. For example, in the Reactor problem, the state space of T is {b, g, e} in the DT and
SDD representation, whereas it is {nr, b, g, e} in the ID and VN representation.
In the solution phase, SDDs avoid working on the space of all scenarios (or histories) by using
minimal and relevant histories. Thus, they can exploit coalescence automatically, which DTs
cannot.
The SDD technique can detect the presence of unnecessary information in a problem by
identifying irrelevant or barren nodes [Covaliu 1996]. This leads to a simplification of the original
model and to a corresponding decrease in the computational burden of solving it.
5.4 Weaknesses of SDDs
The main weakness of the SDD technique is its inability to represent probability models consis-
tently. It uses one distribution in the ID representation that is compatible with the SDD and a different
one in the formulation table. For example, in the Reactor problem, the state space of T in
the ID representation in Figure 5.2 includes nr whereas the state space of T in the formulation
table does not include nr.
Before one can complete a SDD formulation, including specifying a formulation table, it may
be necessary to preprocess the probabilities. This means that the ID has to be modified to make it
compatible with a corresponding SDD. In the reactor problem, there was only one incompatibility
requiring one arc reversal. In other problems, there may be many incompatibilities requiring
many arc reversals. In such problems, it is not clear which arcs one should reverse and in what
sequence so as to achieve compatibility at minimum computational cost. In large problems, the
lack of a formal method to translate a probability model from an ID to a formulation table may
make the SDD technique unsuitable for large problems requiring Bayesian revision of probabilities
In a formulation table, the nodes are linearly ordered by rows. This linear ordering is used
during the solution process-the variable in a row is reduced only after all the variables in succeeding
rows are reduced. Ideally, the ordering of nodes for reduction should belong to the solution
phase and not to the formulation phase. If an arbitrary linear order is chosen (compatible
with the partial order in SDD), there may be a computational penalty (see [Shenoy 1994a] for an
example of this phenomenon). This weakness is also shared by DTs.
5.5 Some Open Issues
As in DTs, SDDs require that a unique node be defined as a next node for each state of a variable
in a formulation table even though the problem may allow several nodes to qualify as next nodes.
The choice of which node should be a next node is a computational issue and it properly belongs
to the solution phase, not to the formulation phase. This issue perhaps can be resolved by allowing
the next node to be a subset instead of a single node. This strategy is advocated by Guo and
Shenoy [1996] in the context of Kirkwood's algebraic method.
The SDD technique tells us how to compute minimal history node sets and relevant history
node sets. However, once we have the node sets, we still have to generate the corresponding
histories. Generating these from a list of standardized histories is one possibility. However, for
large problems, the number of standardized histories is an exponential function of the number of
variables. Thus, we need procedures for generating minimal histories and relevant histories from
the corresponding node sets without actually listing all standardized histories. This is a task that
remains to be done.
A complete SDD formulation of a problem consists of an ID and a SDD at the graphical
level, and conditionals for the chance variables in the ID and a formulation table at the numerical
level. The formulation table is built partly from information from a compatible ID (e.g., the conditional
probability distributions), and partly from the SDD (e.g., the histories). Thus, there is
duplication of information. A complete ID representation is sufficient for solving the problem. A
corresponding SDD duplicates some of this information. And a formulation table that includes
standardized histories has all information required for solving the problem. Thus, a formulation
table duplicates information contained in an ID and a SDD. This issue can be resolved by developing
a solution technique that solves a decision problem directly from a SDD and a corresponding
ID representation without having the user specify a formulation table.
As currently described, the SDD technique tells us only how to represent a problem with a
single undecomposed utility function or a problem in which the utility function decomposes into
factors whose domains include only one variable. The case of an arbitrary decomposition of the
utility function is not covered. Also, it is not clear when a SDD representation is well defined.
These are tasks that remain to be done.
6 Conclusion
The main goal of this work is to compare four distinct techniques proposed for representing and
solving asymmetric decision problems-traditional decision trees, SHM influence diagrams,
Shenoy's valuation networks, and Covaliu and Oliver's sequential decision diagrams. For each
technique, we have identified the main strengths, intrinsic weaknesses, and some open issues that
perhaps can be resolved with further research.
One conclusion is that no single technique stands out as always superior in all respects to the
others. Each technique has some unmatched strengths. Another conclusion is that considerable
work remains to be done to resolve the open issues of each technique. One possibility here is to
borrow the strengths of a technique to resolve the issues of another. Also, there is need for automating
each technique by building computer implementations, and there is very little literature
on this topic 2 .
We conclude with some speculative comments about the class of problems for which each
technique is appropriate. Decision trees are appropriate for small decision problems. Influence
diagrams are appropriate for problems in which we have a Bayesian network model for the un-
certainties. Valuation networks are appropriate for problems in which we have a non-Bayesian
network model for the uncertainties such as undirected graphs, chain graphs, etc. Finally, se-
2 There are several implementations of the decision tree technique, e.g.,
[www.palisade.com], and Supertree [www.sdg.com], several implementations of the symmetric influence diagram
technique, e.g., Hugin [www.hugin.dk], Netica [www.norsys.com], and Analytica [www.lumina.com], and one implementation
of the asymmetric influence diagram technique based on Call and Miller's [1990] technique, namely
DPL [www.adainc.com]. Currently, there are no implementations of either SHM's IDs or Shenoy's asymmetric VNs
or Covaliu and Oliver's SDDs. For details of other software for decision analysis, see Buede [1996].
Finally, sequential decision diagrams are appropriate for problems for which we have a Bayesian network
model for the uncertainties such that no Bayesian revision of probabilities are required and for
which the utility function decomposes into factors whose domains are singleton variable subsets.
Acknowledgments
We are grateful to Jim Smith, Zvi Covaliu, David Ros Insua, Robert Nau, and an anonymous
reviewer for extensive comments on earlier drafts.
--R
"A Comparison of Graphical Techniques for Asymmetric Decision problems: Supplement to Management Science Paper,"
"Aiding insight III,"
"A comparison of approaches and implementations for automating decision analysis,"
"Representation and solution of decision problems using sequential decision diagrams,"
"Sequential diagrams and influence diagrams: A complementary relationship for modeling and solving decision problems,"
"Decision networks: A new formulation for multistage decision prob- lems,"
"Contingent influence diagrams,"
"A graphical method for solving a decision analysis problem,"
"A note on Kirkwood's algebraic method for decision prob- Bielza and Shenoy
"From influence diagrams to junction trees,"
"An algebraic approach to formulating and solving large models for sequential decisions under uncertainty,"
"On representing and solving decision problems,"
"Influence diagrams,"
"Potential influence diagrams,"
"Solving asymmetric decision problems with influence diagrams,"
"Evaluating influence diagrams,"
"Probabilistic inference and influence diagrams,"
"Decision making using probabilistic inference meth- ods,"
"Valuation-based systems for Bayesian decision analysis,"
"A new method for representing and solving Bayesian decision prob- lems,"
"Valuation network representation and solution of asymmetric decision problems,"
To appear in
"A comparison of graphical techniques for decision analysis,"
"Consistency in valuation-based systems,"
"Representing conditional independence relations by valuation networks,"
"A new pruning method for solving decision trees and game trees,"
Besnard and S.
"Representing and solving asymmetric decision problems using valuation networks,"
"Game Trees for Decision Analysis,"
"Axioms for probability and belief-function propagation,"
"Influence diagrams for Bayesian decision analysis,"
"Structuring conditional relationships in influence diagrams,"
"Dynamic programming and influence diagrams,"
"A computational theory of decision networks,"
--TR
--CTR
Liping Liu , Prakash P. Shenoy, Representing asymmetric decision problems using coarse valuations, Decision Support Systems, v.37 n.1, p.119-135, April 2004
M. Gmez , C. Bielza, Node deletion sequences in influence diagrams using genetic algorithms, Statistics and Computing, v.14 n.3, p.181-198, August 2004 | Asymmetric Decision Problems;decision trees;influence diagrams;Sequential Decision Diagrams;valuation networks |
339211 | Partitioning Customers Into Service Groups. | We explore the issues of when and how to partition arriving customers into service groups that will be served separately, in a first-come first-served manner, by multiserver service systems having a provision for waiting, and how to assign an appropriate number of servers to each group. We assume that customers can be classified upon arrival, so that different service groups can have different service-time distributions. We provide methodology for quantifying the tradeoff between economies of scale associated with larger systems and the benefit of having customers with shorter service times separated from other customers with longer service times, as is done in service systems with express lines. To properly quantify this tradeoff, it is important to characterize service-time distributions beyond their means. In particular, it is important to also determine the variance of the service-time distribution of each service group. Assuming Poisson arrival processes, we then can model the congestion experienced by each server group as an M/G/s queue with unlimited waiting room. We use previously developed approximations for M/G/s performance measures to quickly evaluate alternative partitions. | Introduction
In this paper we consider how to design service systems. We assume that it is possible
to initially classify customers according to some attributes. We then consider partitioning
these customer classes into disjoint subsets that will be served separately, each in a first-come
first-served manner. We assume that all customer classes arrive in independent Poisson
processes. Thus the arrival process for any subset in the partition, being the superposition
of independent Poisson processes, is also a Poisson process. Each arriving customer receives
service from one of the servers in his service group, after waiting if necessary in a waiting
room with unlimited capacity. We assume that the service times are mutually independent
with a class-dependent service-time distribution. Hence, we model the performance of each
subset as an M/G/s service system with s servers, an unlimited waiting space and the first-come
first-served service discipline. The problem is to form a desirable partition and assign an
appropriate number of servers to each subset in the partition.
If all the service-time distributions are identical, then it is more efficient to have aggregate
systems, everything else being equal; e.g., see Smith and Whitt (1981) and Whitt (1992).
(See Mandelbaum and Reiman (1997) for further work on resource sharing.) Thus, with
common service-time distributions we should select a single aggregate system. However, here
we are interested in the case of different service-time distributions. With different service-time
distributions, the service-time distributions are altered in the partitioning process. With
different service-time distributions, there is a tradeoff between the economies of scale gained
from larger systems and the cost of having customers with shorter service times have their
quality of service degraded by customers with longer service times. Thus, there is a natural
motivation for separation, as in the express checkout lines in a supermarket.
When the different classes have different service-time distributions, the service-time distribution
for each subset in the partition is a mixture of the component service-time distributions.
This makes the mean just the average of the component means. If the component means are
quite different, though, then the subset service-time distribution will tend to be highly variable,
e.g., as reflected by its squared coefficient of variation (SCV, variance divided by the square
of its mean). This high variability will in turn tend to degrade the performance of the M/G/s
queue for the subset.
The customers may initially be classified in many ways. One way that we specifically want
to consider is by service time. It may happen that the customers' service requirements are
known (or at least can be accurately estimated) upon arrival. Then we can consider classifying
the customers according to their service times. We can then partition the positive halfline
into finitely many disjoint subintervals and let customers with service times in a common
subinterval all belong to the same class. Clearly, partitioning customers according to service
times tends to reduce variability; i.e., the variability within each class usually will be less than
the overall variability. (For ways to formalize this, see Whitt (1985a).)
Since there are (infinitely) many possible ways to partition service times into subintervals,
there are (infinitely) many possible designs. Thus, we want a quick method for evaluating
candidate designs. For that purpose, we propose previous M/G/s approximations, as in Whitt
(1992, 1993).
When the customer classes are specified at the outset, it is natural to formulate our design
problem as an optimization problem. The goal can be to minimize the total number of servers
used, while requiring that each class meet a specified performance requirement, e.g., the steady-state
probability that a class-i customer has to wait more than d i should be less than or equal
to p i for all i. These requirements might well not be identical for all classes. It is natural to
measure the waiting time (before beginning service) relative to the service time or expected
service time; i.e., customers with longer service times should be able to tolerate longer waiting
times. The alternatives that must be considered in the optimization problem are the possible
partitions that can be used and the numbers of servers that are used in the subsets. When
there are not many classes, this optimization problem can be easily solved with the aid of the
approximations by evaluating all (reasonable) alternatives.
Larger problems can be solved approximately by exploiting two basic principles. First, the
advantage of partitioning usually stems from separating short service times from long ones.
Thus, we should tend to put classes with similar service-time distributions in the same subset.
In many cases (as when we partition according to service times), it will be possible to rank
order the classes according to the usual size of service times. Then it may be reasonable
to restrict attention to partitions in which no two classes appear together unless all classes
ranked in between these two also belong. Then partitions can be easily characterized by their
boundary point in the ordering.
The second principle is that we should not expect to have a very large number of subsets
in the partition, because a large number tends to violate the efficiency of large scale. Thus
it is natural to only look for and then compare the best (or good) partitions of size 2, 3, 4,
and 5, say. For example, it is natural to consider giving special protection to one class with
the shortest service times; e.g., express lanes in supermarkets. It is also natural to consider
protecting the majority of the customers from the customers with the largest service times;
e.g., large file transfers over the internet. If only these two objectives are desired, then only
three classes are needed. It is not difficult to examine candidate pairs of boundary points
within a specified ordering.
In this paper we assume that the customer service-time distributions are unaffected by the
partitioning, but in general that need not be the case. Combining classes might actually make it
more difficult to provide service, e.g., because servers may need different skills to serve different
classes. This variation might be analyzed within our scheme by introducing parameters
for each pair of classes (i; We could then have each service time of a customer
of class i multiplied by j ij if classes i and j belong to the same subset in the partition. This
would cause the mean to be multiplied by j ij but leave the SCV unchanged. This modification
would require recalculation of the two service-time moments for the subsets in the partition,
but we still could use the M/G/s analysis described here. More general variants take us out
of the M/G/s framework, and thus remain to be considered.
Here is how the rest of this paper is organized. In Section 2 we review simple approximations
for M/G/s performance measures. In Section 3 we indicate how to calculate the parameters of
the subset service-time distributions when we partition according to service times. In Section 4
we indicate how to calculate service-time parameters when we aggregate classes. In Section 5
we indicate how we can select a reasonable initial number of servers for an M/G/s system, after
which we can tune for improvement. In Section 6 we illustrate the advantages of separating
disparate classes by considering a numerical example with three classes having very different
service-time distributions. In Section 7 we illustrate the potential advantage of partitioning
according to service requirement by considering a numerical example with a Pareto service-time
distribution. We split the Pareto distribution into five subintervals. We show there that
the partition may well be preferred to one aggregate system. In Section 8 we briefly discuss
other model variants; e.g., we point out that the situation is very different when there is no
provision for waiting. Finally, in Section 9 we state our conclusions.
2. Review of M/G/s Approximations
In this section we review basic approximations for key performance measures of M/G/s
systems. We want to be able to quickly determine the approximate performance of an M/G/s
system, so that we can quickly evaluate possible partition schemes. We shall be concerned with
the steady-state probability of having to wait, P (W ? 0), and the steady-state conditional
expected wait given that the customer must wait, E(W jW ? 0). The product of these two
is of course the mean steady-state wait itself, EW . We shall also want the steady-state tail
probability desired t. Relevant choices of t typically depend on the mean
service time, here denoted ES.
The basic model parameters are the number of servers s, the arrival rate λ and the service-time
cdf G with k th moments m k , k ≥ 1. The traffic intensity is ρ ≡ λ m 1 /s; we assume that ρ < 1,
so that a proper steady state exists. This condition puts an obvious lower bound on
the number of servers in each service group. The service-time SCV is c_s^2 ≡ (m 2 /m 1^2) - 1.
We propose using approximations for the M/G/s performance measures. We could instead
use exact M/M/s (Erlang C) formulas involving exponential service-time distributions with the
correct mean, but we advise not doing so because it is important to capture the service-time
distribution beyond its mean via its SCV. (This is shown in our examples later.) On the other
hand, we could use simulation or more involved numerical algorithms, as in de Smit (1983),
Seelen (1986) and Bertsimas (1988), to more accurately calculate the exact performance mea-
sures, but we contend that it is usually not necessary to do so, because the approximation
accuracy tend to be adequate and the approximations are much more easy to use and under-
stand. The adequacy of approximation accuracy depends in part on the intended application
to determine the required number of servers. A small change in the number of servers (e.g., by
one) typically produces a significant change in the waiting-time performance measures. This
means that the approximation error has only a small impact on the decision. Moreover, the
approximation accuracy is often much better than our knowledge of the underlying model parameters
(arrival rates and service-time distributions). If, however, greater accuracy is deemed
necessary, then one of the alternative exact numerical algorithms can be used in place of the
approximations here.
We now specify the proposed approximations. First recall that the exact conditional wait
for M/M/s is
E(W(M/M/s) | W(M/M/s) > 0) = m 1 / (s(1 - ρ)) ,    (2.1)
which is easy to see because an M/M/s system behaves like an M/M/1 system with service
rate s/m 1 when all s servers are busy. As in Whitt (1993) and elsewhere, we approximate the
conditional M/G/s wait by
E(W(M/G/s) | W(M/G/s) > 0) ≈ ((1 + c_s^2)/2) m 1 / (s(1 - ρ)) .    (2.2)
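For instance, approximations (2.1) and (2.2) can be coded directly; the following Python sketch is ours, not the paper's.

```python
def cond_wait_mgs(m1, scv, s, rho):
    """Approximate E(W | W > 0) for M/G/s via (2.2); (2.1) is the scv = 1 case."""
    return (1.0 + scv) / 2.0 * m1 / (s * (1.0 - rho))

# Example: mean service time 1, SCV 1 (M/M/s), 13 servers, rho = 10/13.
print(cond_wait_mgs(1.0, 1.0, 13, 10.0 / 13.0))  # about 0.33
```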
Following Whitt (1993) and references cited there, we approximate the probability of delay
in an M/G/s system by the probability of delay in an M/M/s system with the same traffic
intensity ρ, i.e.,
P(W(M/G/s) > 0) ≈ P(W(M/M/s) > 0) ,    (2.3)
where P(W(M/M/s) > 0) is the Erlang delay (Erlang C) probability,
P(W(M/M/s) > 0) = ((sρ)^s / (s!(1 - ρ))) π 0 ,    (2.4)
with
π 0 = [ Σ_{k=0}^{s-1} (sρ)^k / k! + (sρ)^s / (s!(1 - ρ)) ]^{-1} .    (2.5)
Algorithms are easily constructed to compute the exact M/M/s delay probability in (2.4).
However, we also propose the more elementary heavy-traffic approximation from Halfin and
Whitt (1981),
P(W(M/M/s) > 0) ≈ [ 1 + β Φ(β)/φ(β) ]^{-1} ,    (2.6)
where Φ is the standard (mean 0, variance 1) normal cdf, φ is its density and
β ≡ (1 - ρ) √s .    (2.7)
As shown in Table 13 of Whitt (1993), approximation (2.6) is quite accurate except for the
cases in which both s is large and ρ is small (and the delay probability itself is small). An
alternative simple approximation for the M/M/s delay probability is the Sakasegawa (1977)
approximation
P(W(M/M/s) > 0) ≈ ρ^{√(2(s+1)) - 1} .    (2.8)
(Combine (2.9) and (2.12) of Whitt (1993).) We do not attempt to evaluate these approximations
here, because that has already been done. The approximation errors from using (2.6)
and (2.8) instead of the exact formula are displayed for a range of cases in Tables 1 and 13 of
Whitt (1993).
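A hedged Python sketch of the three delay-probability formulas (exact Erlang C in (2.4)-(2.5), Halfin-Whitt (2.6)-(2.7) and the Sakasegawa-based form (2.8), as reconstructed above):

```python
import math

def erlang_c(s, rho):
    """Exact M/M/s delay probability, via the standard Erlang B recursion."""
    a = s * rho                       # offered load
    b = 1.0
    for k in range(1, s + 1):         # Erlang B recursion
        b = a * b / (k + a * b)
    return b / (1.0 - rho + rho * b)  # convert Erlang B to Erlang C

def halfin_whitt(s, rho):
    beta = (1.0 - rho) * math.sqrt(s)
    Phi = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))
    phi = math.exp(-beta * beta / 2.0) / math.sqrt(2.0 * math.pi)
    return 1.0 / (1.0 + beta * Phi / phi)

def sakasegawa(s, rho):
    return rho ** (math.sqrt(2.0 * (s + 1)) - 1.0)

for f in (erlang_c, halfin_whitt, sakasegawa):
    print(f.__name__, round(f(110, 100.0 / 110.0), 3))
```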
When we combine (2.2) and (2.3), we obtain the classic Lee and Longton (1959) approximation
formula for the mean, i.e.,
EW ≈ P(W(M/M/s) > 0) ((1 + c_s^2)/2) m 1 / (s(1 - ρ)) ,    (2.9)
which we complete either with an exact calculation or approximation (2.6) or (2.8). We
may know roughly what the probability of delay will be; e.g., we might have P(W > 0) ≈
0.25. Then we can obtain an explicit back-of-the-envelope approximation by substituting
that approximation into (2.9). A simple heavy-traffic approximation is obtained by letting
P(W > 0) ≈ 1 or, equivalently, EW ≈ E(W | W > 0), which amounts to
using (2.2). Then for fixed ρ, there is a clear tradeoff between s (scale) on the one hand and
(1 + c_s^2) m 1 (a combination of mean and variability of the service-time distribution) on the other
hand.
We can approximate the tail probability roughly by assuming that the conditional delay is
exponential, i.e.,
P(W > t) ≈ P(W > 0) exp( -t / E(W | W > 0) ) ,  t ≥ 0 .    (2.10)
Approximation (2.10) is exact for M/M/s, but not more generally. Approximation (2.10) is
also consistent with known heavy-traffic limits. The accuracy of (2.10) is often adequate.
Refinements can be based on the asymptotic behavior as t → ∞; e.g., see Abate, Choudhury
and Whitt (1995).
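Putting (2.2), (2.3), (2.9) and (2.10) together gives a small performance calculator (our sketch; here the Halfin-Whitt form is used for the delay probability):

```python
import math

def mgs_performance(lam, m1, scv, s, t=None):
    """Approximate M/G/s measures: P(W>0), E(W|W>0), EW and optionally P(W>t)."""
    rho = lam * m1 / s
    beta = (1.0 - rho) * math.sqrt(s)
    Phi = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))
    phi = math.exp(-beta * beta / 2.0) / math.sqrt(2.0 * math.pi)
    p_wait = 1.0 / (1.0 + beta * Phi / phi)                   # (2.3) with (2.6)
    cond_wait = (1.0 + scv) / 2.0 * m1 / (s * (1.0 - rho))    # (2.2)
    ew = p_wait * cond_wait                                   # (2.9)
    tail = p_wait * math.exp(-t / cond_wait) if t is not None else None  # (2.10)
    return p_wait, cond_wait, ew, tail

# Example: the aggregate system of Section 6 with 36 servers.
print(mgs_performance(lam=11.1, m1=2.703, scv=26.4, s=36, t=2.703))
```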
3. Splitting by Service Times
Suppose that we are given a single M/G input with arrival rate λ and service-time cdf G.
We can create m classes by classifying customers according to their service times, which we
assume can be learned upon arrival. We use boundary points 0 = x 0 < x 1 < ... < x m = ∞.
We say that an arrival belongs to class i if its service time falls in the interval (x_{i-1}, x_i],
1 ≤ i ≤ m. For class 1 the interval is [0, x 1]; for class m the interval is (x_{m-1}, ∞).
Since the service times are assumed to be independent and identically distributed (i.i.d.),
this classification scheme partitions the original Poisson arrival process into m independent
Poisson arrival processes. Thus one M/G input has been decomposed into m independent
M/G inputs (without yet specifying the numbers of servers).
The arrival rate of class i is thus
λ_i = λ (G(x_i) - G(x_{i-1}))
(regarding G(x 0) = G(0) as 0), and the associated service-time cdf is
G_i(t) = (G(t) - G(x_{i-1})) / (G(x_i) - G(x_{i-1})) ,   x_{i-1} ≤ t ≤ x_i .
The k th moment of G_i is
m_{k,i} = ∫_{x_{i-1}}^{x_i} t^k dG(t) / (G(x_i) - G(x_{i-1})) .
It is significant that the moments of the split cdf's can be computed in practice, as we now
Example 3.1. (exponential distributions). Suppose that the cdf G is exponential with rate μ
(mean 1/μ), so that the density is g(t) = μ e^{-μt}, t ≥ 0. If G_i is G restricted to the interval
(x_{i-1}, x_i], then the class-i probability is p_i = e^{-μ x_{i-1}} - e^{-μ x_i}.
The first two moments of G_i are then
m_{1,i} = [ (x_{i-1} + 1/μ) e^{-μ x_{i-1}} - (x_i + 1/μ) e^{-μ x_i} ] / p_i
and
m_{2,i} = [ (x_{i-1}^2 + 2 x_{i-1}/μ + 2/μ^2) e^{-μ x_{i-1}} - (x_i^2 + 2 x_i/μ + 2/μ^2) e^{-μ x_i} ] / p_i .
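The truncated-exponential moments above can be checked numerically; here is a short sketch (ours) comparing the closed forms with crude numerical integration.

```python
import math

def exp_split_moments(mu, a, b):
    """Closed-form conditional first and second moments of an exponential
    (rate mu) service time restricted to the interval (a, b]."""
    p = math.exp(-mu * a) - math.exp(-mu * b)
    m1 = ((a + 1/mu) * math.exp(-mu * a) - (b + 1/mu) * math.exp(-mu * b)) / p
    m2 = ((a*a + 2*a/mu + 2/mu**2) * math.exp(-mu * a)
          - (b*b + 2*b/mu + 2/mu**2) * math.exp(-mu * b)) / p
    return p, m1, m2

def numeric_check(mu, a, b, n=20000):
    # midpoint Riemann sums of t^k * mu * exp(-mu*t) over (a, b]
    h = (b - a) / n
    ts = [a + (i + 0.5) * h for i in range(n)]
    p = sum(mu * math.exp(-mu * t) for t in ts) * h
    m1 = sum(t * mu * math.exp(-mu * t) for t in ts) * h / p
    m2 = sum(t * t * mu * math.exp(-mu * t) for t in ts) * h / p
    return p, m1, m2

print(exp_split_moments(1.0, 0.5, 2.0))
print(numeric_check(1.0, 0.5, 2.0))
```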
Example 3.2. (Pareto distributions). Suppose that the cdf G is Pareto with decay rate α, so that
the complementary cdf G^c(t) = 1 - G(t) decays as a power t^{-α} as t → ∞.
A Pareto distribution is a good candidate model for relatively more variable (long-tailed)
service-time distributions. The mean is infinite if α ≤ 1. If α > 1, then the mean is finite;
if α ≤ 2, then the variance is infinite. If α > 2, then the SCV is finite.
If G_i is G restricted to the interval (x_{i-1}, x_i], we can apply formulas (3.12) and (3.13) to
calculate the first two moments of G_i in closed form; each truncated moment is a difference of
incomplete power integrals involving α.
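Because the specific Pareto parameterization is not reproduced here, the following sketch simply assumes a Pareto distribution of the Lomax form, with complementary cdf (1 + t/sigma)^(-alpha), and computes the class probability and truncated moments numerically; the parameter values are illustrative assumptions only.

```python
import math

ALPHA, SIGMA = 2.1, 1.1   # assumed shape/scale giving mean sigma/(alpha - 1) = 1

def ccdf(t):
    return (1.0 + t / SIGMA) ** (-ALPHA)

def density(t):
    return (ALPHA / SIGMA) * (1.0 + t / SIGMA) ** (-ALPHA - 1.0)

def split_moments(a, b, n=100000):
    """Probability and first two conditional moments of the service time
    restricted to (a, b], by midpoint numerical integration."""
    p = ccdf(a) - ccdf(b)
    h = (b - a) / n
    ts = [a + (i + 0.5) * h for i in range(n)]
    m1 = sum(t * density(t) for t in ts) * h / p
    m2 = sum(t * t * density(t) for t in ts) * h / p
    return p, m1, m2

print(split_moments(0.0, 0.43))   # roughly the first subgroup of Section 7
```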
As indicated after (2.9), we can do a heavy-traffic analysis to quickly see the benefits of
service-time splitting. Suppose that we allocate servers proportional to the offered load, so
that ρ_i ≈ ρ for each class i. Then, by (2.2), the conditional mean wait of the aggregate system
is approximately proportional to (1 + c_s^2) m 1 / s, while that of class i is approximately
proportional to (1 + c_{s,i}^2) m_{1,i} / s_i. Hence, the relative benefit of splitting is governed
by the corresponding ratios of (1 + SCV) times the mean over the number of servers. This simple
analysis shows the important role played by service-time variability, as approximately described
by the SCVs c_s^2 and c_{s,i}^2.
Remark 3.1. When we split service times, we expect to have c_{s,i}^2 ≤ c_s^2, but that need not be
the case. First, if G is uniform on [0, x], then G 1 is uniform on [0, x 1], so that
c_{s,1}^2 = c_s^2 = 1/3. Second, suppose that G assigns probabilities ε/2, ε/2 and 1 - ε to the
points 0, x 1 and some point greater than x 1 ; then, for small ε, class 1 has c_{s,1}^2 = 1 while
c_s^2 is near 0, so that we can have c_{s,i}^2 > c_s^2.
4. Aggregation
Suppose that we are given m independent M/G inputs with arrival rates λ_i and service-time
cdf's G_i, 1 ≤ i ≤ m. Then the m classes can be combined
(aggregated) into a single M/G input with arrival rate the sum of the component arrival rates,
i.e.,
λ = λ_1 + ... + λ_m ,
and service-time cdf a mixture of the component cdf's, i.e.,
G(t) = Σ_{i=1}^{m} (λ_i/λ) G_i(t) ,
having moments
m_k = Σ_{i=1}^{m} (λ_i/λ) m_{k,i} .
It should be evident that if a single M/G input is split by service times as described in
Section 3 and then recombined, we get the original M/G input characterized by λ and G back
again.
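A small sketch (ours) of the aggregation formulas, returning the aggregate arrival rate, mean, second moment and SCV from class-level data:

```python
def aggregate(classes):
    """classes: list of (lam_i, m1_i, m2_i). Returns (lam, m1, m2, scv)."""
    lam = sum(l for l, _, _ in classes)
    m1 = sum(l * m1_i for l, m1_i, _ in classes) / lam
    m2 = sum(l * m2_i for l, _, m2_i in classes) / lam
    return lam, m1, m2, m2 / m1**2 - 1.0

# Exponential classes have m2 = 2 * m1**2.
print(aggregate([(10, 1, 2), (1, 10, 200), (0.1, 100, 20000)]))  # SCV about 26.4
```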
5. Initial Numbers of Servers
In this section we indicate how to initially select the number of servers in any candidate
M/G/s system. Our idea is to use an infinite-server approximation, as in Section 2.3 of Whitt
(1992) or in Jennings, Mandelbaum, Massey and Whitt (1996). In the associated M/G/∞
system with the same M/G input, the steady-state number of busy servers has a Poisson
distribution with mean (and thus also variance) equal to the offered load (the product of the
arrival rate and the mean service time), say ω. The Poisson distribution can then be approximated
by a normal distribution. We thus let the number of servers be the least integer greater than or
equal to ω + c√ω, which is c standard deviations above the mean. A reasonable value of the
constant is often c = 1, and we will use it. Then the number of servers is s = ⌈ω + √ω⌉.
A rough estimate (lower bound) for the probability of delay is then P(N(ω, ω) > s) ≈ Φ^c(c),
where N(a, b) denotes a normal random variable with mean a and variance b, and Φ^c is the
complementary cdf of N(0, 1), i.e., Φ^c(x) = 1 - Φ(x). This choice tends to keep the
waiting time low with the servers well utilized. Of course, the number of standard deviations
above the mean and/or the resulting number of servers can be further adjusted as needed.
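A minimal sketch of the square-root staffing rule described above, assuming the constant c = 1 as in the examples that follow:

```python
import math

def initial_servers(offered_load, c=1.0):
    """Least integer >= offered_load + c*sqrt(offered_load)."""
    return math.ceil(offered_load + c * math.sqrt(offered_load))

def delay_estimate(c=1.0):
    """Rough normal-approximation estimate of P(delay), i.e., P(N(0,1) > c)."""
    return 0.5 * (1.0 - math.erf(c / math.sqrt(2.0)))

print(initial_servers(100.0), round(delay_estimate(), 3))  # 110, about 0.159
```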
6. A Class-Aggregation Numerical Example
In this section we give a numerical example illustrating how to study the possible aggregation
of classes into service groups. We let the classes have quite different service-time
distributions to demonstrate that aggregation is not always good. In particular, we consider
three classes of M/M input, each with common offered load 10. Classes 1, 2 and 3 have
arrival-rate and mean-service-time pairs (λ_1, ES_1) = (10, 1), (λ_2, ES_2) = (1, 10) and
(λ_3, ES_3) = (0.1, 100), respectively. Each class separately arrives according to a Poisson
process and has exponential service times. Thus each class separately yields an M/M/s queue
when we specify the number of servers.
We consider all possible aggregations of the classes, namely, the subsets {1, 2}, {1, 3},
{2, 3} and {1, 2, 3}, as well as the classes separately. The arrival rates and offered loads of the
subgroups are just the sums of the component arrival rates and offered loads. However, the
aggregated subgroups differ qualitatively from the single classes because the service-time distributions
are no longer exponential. Instead, the service-time distributions of the aggregated
subgroups are mixtures of exponentials (hyperexponential distributions) with SCVs greater
than 1. The penalty for aggregation is initially quantified by the service-time SCV. The
service-time SCVs for classes {1, 2}, {1, 3}, {2, 3} and {1, 2, 3} are 5.05, 50.0, 5.05 and 26.4,
respectively. Consistent with intuition, from these SCVs, we see that the two-class service
group {1, 3} should not be as attractive as the other two-class service groups {1, 2} and {2, 3}.
We use the scheme in Section 5 to specify the number of servers. In particular, in each case
we let s be approximately ω + √ω, where ω is the offered load. Thus, for each class separately
we let s = 13; for the two-class subgroups we let s = 25; and for the entire three-class set we
let s = 36. Since more servers are used with smaller groups, we also consider the three-class
set with 39 servers, which is the sum of the separate numbers assigned to the separate classes.
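The hyperexponential SCVs quoted above can be reproduced with a few lines (a sketch under the class parameters given in this section):

```python
def hyperexp_scv(classes):
    """classes: list of (arrival_rate, mean) for exponential components."""
    lam = sum(l for l, _ in classes)
    m1 = sum(l * m for l, m in classes) / lam
    m2 = sum(l * 2 * m * m for l, m in classes) / lam   # exponential: m2 = 2*mean^2
    return m2 / m1**2 - 1.0

for subset in ([(10, 1), (1, 10)], [(10, 1), (0.1, 100)],
               [(1, 10), (0.1, 100)], [(10, 1), (1, 10), (0.1, 100)]):
    print(round(hyperexp_scv(subset), 2))   # 5.05, 50.0, 5.05, 26.38
```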
We display the performance measures calculated according to Section 2 in Table 1.

Table 1: Performance measures for the three classes separately and all possible aggregated
subsets in the example of Section 6.

  server group    {1}     {2}     {3}    {1,2}   {1,3}   {2,3}  {1,2,3}  {1,2,3}
  s                13      13      13      25      25      25      36       39
  ES              1.0    10.0   100.0   1.818   1.980   18.18   2.703    2.703
  ρ            0.7692  0.7692  0.7692   0.800   0.800   0.800   0.833    0.769
  c_s^2           1.0     1.0     1.0    5.05    50.0    5.05    26.4     26.4
  EW            0.108    1.08    10.8    0.28    2.53    2.75    1.54     0.51

From Table 1, we see that the mean wait is about 10% of the mean service time for each class
separately. Also the probability that the wait exceeds one mean service time, P(W > ES), is
small for each class separately. In contrast, these performance measures degrade substantially
for the class with the shorter service times after aggregation. Consistent with intuition, the
performance for service group {1, 3} is particularly bad. The full aggregate service group
containing classes {1, 2, 3} performs better with 39 servers than 36, but in both cases the
performance for class 1 is significantly worse than the performance for class 1 separately.
The main point is that the approximations in Section 2 provide a convenient way to study
possible aggregations. Given a specification of performance requirements, e.g., delay constraints
of the form P(W_i > ES_i) ≤ 0.20 for each class i, it is possible to find the minimum number of
servers satisfying all the constraints (exploiting the best aggregation scheme). For example,
with such constraints, having the three classes separate is optimal. For the classes
separately, P(W > ES) changes by roughly an order of magnitude as the number of servers moves
between 12 and 14. Hence, as indicated in Section 2, a unit change in the number of servers
makes a big change in the performance measures. The total number of servers required for the
aggregate system to have P(W > ES_1) ≤ 0.20 is 46, seven more than with the three classes separate.
This example also illustrates the importance of considering the service-time distribution
beyond its mean. If we assume that the aggregate system were an M/M/s system, then
the service-time SCV would be 1 instead of 26.4. Approximations (2.2), (2.3), (2.8) and (2.9)
indicate that using the M/M/s model instead would underestimate the correct mean approximately
by a factor of 13.7. Using the M/M/s model for the aggregate system, we would deduce that
we only needed 37 servers in order to meet the performance requirement (a nominal value of 0.015). We
would also wrongly conclude that the aggregate system is better than the separate classes.
7. A Pareto-Splitting Numerical Example
In this section we illustrate the service-time partitioning by considering a numerical example
in which we split Pareto service times. We start with an M/G input consisting of a
Poisson arrival process having arrival rate λ = 100 and a Pareto service-time distribution as
in Example 3.2, with the decay rate α chosen (slightly greater than 2) so that it has mean 1 and
a finite but large SCV c_s^2.
The offered load is 100, so that the total number of servers must be at least 101 in order
to have a stable system. Using the initial sizing formula in Section 5, we would initially let
s = 110. This yields a probability of delay of P(W > 0) ≈ 0.28, a conditional mean delay of
E(W | W > 0) = 1.21 and a mean delay of EW = 0.343.
However, the median of the chosen Pareto distribution is 0.43, so that 50% of the service
times are less than 0.43. Indeed, the conditional mean service time restricted to the interval
[0, 0.43] is 0.179. The conditional mean wait of 1.21 is about 6.8 times this mean; the
actual mean wait 0.343 is about 2 times the mean service time. These mean waits might be
judged too large for the customers with such short service requirements. Thus, assuming that
we know customer service requirements upon arrival, we might attempt to make waiting times
more proportional to service times by partitioning the customers according to their service-time
requirements.
Here we consider partitioning the customers into five subsets using the boundary points
0.43, 2.2, 10 and 1000. The first two boundary points were chosen to be the 50th and 90th
percentiles of the service-time distribution, while the last two boundary points were chosen
to be one and three orders of magnitude larger than the overall mean 1, respectively. In
particular, from formula (3.11), we find that the probabilities that a service time falls into the
intervals (0, 0.43), (0.43, 2.2), (2.2, 10), (10, 1000) and (1000, ∞) are 0.50, 0.40, 0.092, 0.0078
and 0.61 × 10^{-6}, respectively.
For each subinterval, we calculate the conditional mean and second moment given that
the service time falls in the subinterval using formulas (3.12) and (3.13), thus obtaining the
subinterval mean and second moment. The subinterval SCV is then obtained in the usual way.
We display these results in Table 2. Note that these subgroup service-time SCVs are indeed
much smaller than the original overall Pareto SCV c_s^2.
Given the calculated characteristics for each subinterval, we can treat each subinterval as a
separate independent M/G/s queue. The arrival rate is 100 times the subinterval probability.
The offered load, say ω, is the arrival rate times the mean service time. Using the initial-sizing
formula in Section 5, we let the number of servers in each case be approximately ω + √ω.
We regard this value as an initial trial value that can be refined as needed. Finally,
the traffic intensity ρ is just the offered load divided by the number of servers, i.e., ρ = ω/s.
We display all these results in Table 2.
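Under the illustrative Lomax parameterization assumed earlier, the offered loads and server counts in Table 2 can be approximated as follows (our sketch; small discrepancies with the table are expected because the paper's exact Pareto parameters are not reproduced here).

```python
import math

ALPHA, SIGMA, LAMBDA = 2.1, 1.1, 100.0    # assumed Pareto shape/scale, arrival rate

def ccdf(t):
    return 0.0 if t == float("inf") else (1.0 + t / SIGMA) ** (-ALPHA)

def cond_mean(a, b, n=100000):
    b_eff = b if b != float("inf") else 1e6   # truncate the numerical integral
    h = (b_eff - a) / n
    ts = [a + (i + 0.5) * h for i in range(n)]
    m1 = sum(t * (ALPHA / SIGMA) * (1 + t / SIGMA) ** (-ALPHA - 1) for t in ts) * h
    return m1 / (ccdf(a) - ccdf(b))

bounds = [0.0, 0.43, 2.2, 10.0, 1000.0, float("inf")]
for a, b in zip(bounds, bounds[1:]):
    p = ccdf(a) - ccdf(b)
    omega = LAMBDA * p * cond_mean(a, b)
    servers = math.ceil(omega + math.sqrt(omega))
    print((a, b), round(p, 6), round(omega, 2), servers)
```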
Next we describe the performance of each separate M/G/s queue using the formulas in
Section 2. For simplicity, here we use (2.8) for the probability of delay. Since we have chosen s
in each case to be about ω + √ω, it should be no surprise that the delay probability is nearly
the same for all groups except the last. In the last subinterval, the offered load is only 0.117,
so only one server is assigned, and the normal approximation is clearly inappropriate.
From Table 2, we see that the mean wait EW_i for each class i is substantially less than
the mean service time of that subclass. We also calculate the probability that the waiting
time exceeds the mean service time of that class, using approximation (2.10). For all classes,
this probability is consistently small.
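The comparison of waiting times with service requirements can be scripted in the same spirit. The sketch below uses the common heavy-traffic device of treating the conditional wait (W | W > 0) as exponential, so that P(W > x) is approximately P(W > 0) exp(-x / E(W | W > 0)); this is only meant to mimic the role of approximations (2.9)-(2.10), whose exact form is given earlier in the paper and is not reproduced here.

import math

def wait_tail(prob_delay, cond_mean_wait, x):
    """P(W > x) under an exponential approximation for the conditional wait."""
    return prob_delay * math.exp(-x / cond_mean_wait)

# Aggregate system of Section 7: EW = 0.343 and E(W | W > 0) = 1.21,
# so P(W > 0) = EW / E(W | W > 0).
prob_delay = 0.343 / 1.21
print(wait_tail(prob_delay, 1.21, 0.1787))  # chance the wait exceeds the class-1 mean service time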
service-time interval    (0, 0.43)   (0.43, 2.2)   (2.2, 10)   (10, 1000)   (1000, ∞)
probability              0.5000      0.4004        0.0918      0.0078       0.00000061
subgroup mean             0.1787      0.9811        3.935       19.94        1910
arrival rate              49.99       40.04         9.18        0.78         0.000061
offered load              8.94        39.28         36.12       15.54        0.117
servers                   12          46            42          20           1
traffic intensity ρ_i     0.745       0.853         0.864       0.778        0.117

Table 2: Service-time characteristics and M/G/s performance measures when the original
Pareto service times are split into five subgroups.
Now we consider what happens if we aggregate some of the subgroups. First, we consider
combining the last two subgroups. We keep the total number of servers the same at 21. If
we group the last two classes together, then the new service time has mean 20.09 and SCV
5.29. Note that, compared to the (10, 1000) class, the mean has gone up only slightly from
19.94, but the SCV has increased significantly from 1.25. (The SCV is even bigger than it
was for the highest group.) The M/G/s performance measures for the new combined class
are then obtained as before.
This combination might be judged acceptable, but the performance becomes degraded for the
customers in the (10, 1000) subgroup.
Finally, we consider aggregating all the subgroups. If we keep the same numbers of servers
assigned to the subgroups, then we obtain 121 servers instead of 110. This should not be
surprising, because the sizing algorithm of Section 5 should produce fewer extra servers with one large
group than with five subgroups. However, it still remains to examine the performance of the
original aggregate system when s = 121; the overall mean wait is then EW = 0.0094. The delay probability
is clearly better than with the partition, as it must be using approximation (2.3), but the
conditional mean wait is worse for the first three subgroups, and much worse for the first two.
The overall mean EW is worse for the first two subgroups, and much more for the first one.
The tail probabilities are much worse for the first two subgroups as well. Hence,
even with all 121 servers, performance in the single aggregated system might be considered far
inferior to performance in the separate groups for the first two groups.
Finally, we can clearly see here that an M/M/s model fails to adequately describe the
performance. By formulas (2.2), (2.3), (2.8) and (2.9), we see that the mean EW would be
underestimated by a factor of 11 in the aggregate system, and overestimated somewhat for
the first three service groups. Moreover, we would incorrectly conclude that the aggregate
system must be better.
Remark 7.1. The Pareto service-time distribution in the example we have just considered
has finite variance since α > 2. Similar results hold if the service-time distribution has
infinite variance or even infinite mean. When the service-time distribution has finite mean but
infinite variance (when 1 < α ≤ 2), the service-time variance is finite for all subclasses but the
last because of truncation. The service-time distribution for the last class then has finite mean
and infinite second moment. In the example here with one server assigned to the last class, we
then have EW_5 = ∞. When the mean service time is infinite for the
last class (when α ≤ 1), the waiting times for that class diverge to +∞. However, the other
classes remain well behaved. Clearly, the splitting may well be deemed even more important
in these cases.
8. Other Model Variants
So far, we have considered service systems with unlimited waiting space. A very different
situation occurs when there is no waiting space at all. The steady-state number of busy servers
in an M/G/s/0 loss model has the insensitivity property; i.e., the steady-state distribution of
the number of busy servers depends on the service-time distribution only through its mean.
Thus, the steady-state distribution in the M/G/s/0 model coincides with the (Erlang B) steady-state
distribution in the M/M/s/0 model with an exponential service-time distribution having
the same mean. Thus, the full aggregated system is always more efficient for loss systems, by
Smith and Whitt (1981).
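For the loss-model comparison it is convenient to have the Erlang B formula at hand; by insensitivity, the blocking probability depends on the service-time distribution only through the offered load. The following sketch (our own illustration, using the standard recursion) contrasts one pooled group with one of two identical half-sized groups of the same total capacity.

def erlang_b(servers, offered_load):
    """Erlang B blocking probability computed by the standard stable recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

print(erlang_b(110, 100.0))   # blocking for the pooled system
print(erlang_b(55, 50.0))     # blocking for one of two separate half-systems

The pooled system blocks noticeably less often, illustrating the efficiency of aggregation for loss systems noted above.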
Similarly, if there is extra waiting space, but delays are to be kept minimal, then it is natural
to use the M/G/∞ model as an approximation, which also has the insensitivity property.
Hence, if our goal can be expressed in terms of the distribution of the number of busy servers
in the M/G/∞ model, then we should again prefer the aggregate system.
Even for the M/G/s delay model, our approximation for the probability of experiencing any
wait in (2.3) has the insensitivity property. Hence, if our performance criterion were expressed
in terms of the probability of experiencing any wait, then we also should prefer the aggregate
system. In contrast, separation can become important for the delays, because the service-time
distribution beyond its mean (as described by the SCV) then matters, as we have seen.
So far, we have considered a stationary model. However, in many circumstances it is more
appropriate to consider a nonstationary model. For example, we could assume a nonhomogeneous
Poisson arrival process, denoted by M t , for each customer class. It is important to
note that the insensitivity in the M/G/s/0 and M/G/∞ models is lost when the arrival process
becomes M_t; see Davis, Massey and Whitt (1995). The added complexity caused by the
nonstationarity makes it natural to consider the M_t/G/∞ model as an approximation. Since
insensitivity no longer holds, full aggregation is not necessarily most efficient. Partitioning in
this nonstationary setting can also be conveniently analyzed because the partitioning of non-homogeneous
Poisson processes produces again nonhomogeneous Poisson processes. Hence, all
subgroups behave as M_t/G/s systems. For example, the server staffing and performance calculations
for each subset can be performed by applying the approximation methods in Jennings,
Mandelbaum, Massey and Whitt (1996). The formula for the mean number of busy servers at
time t in (6) there shows that the service-time distribution beyond the mean plays a role, i.e.,

m(t) = E[λ(t − S_e)] · ES,

where λ(t) is the arrival-rate function and S_e is a random variable with the service-time
equilibrium-excess distribution, i.e.,

P(S_e ≤ t) = (ES)^{-1} ∫_0^t P(S > u) du, t ≥ 0;

also see Eick, Massey and Whitt (1993). The linear approximation
in (8) of Eick et al. shows the first-order effect of the service-time SCV.
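To see concretely how the distribution beyond the mean enters, the sketch below evaluates m(t) = E[λ(t − S_e)] · ES by numerical integration, using a sinusoidal arrival-rate function and two unit-mean service distributions of our own choosing (an exponential and a Pareto with α = 2.5); the truncation of the integral at a large upper limit is an implementation shortcut.

import math

def mean_busy_servers(lam, survival, t, upper=500.0, n=50000):
    """m(t) = integral over u >= 0 of lam(t - u) * P(S > u) du (truncated, trapezoidal rule)."""
    h = upper / n
    total = 0.5 * (lam(t) * survival(0.0) + lam(t - upper) * survival(upper))
    for i in range(1, n):
        u = i * h
        total += lam(t - u) * survival(u)
    return h * total

lam = lambda t: 100.0 + 20.0 * math.sin(t)         # illustrative arrival-rate function
exp_surv = lambda u: math.exp(-u)                  # exponential service, mean 1
par_surv = lambda u: (1.0 + u / 1.5) ** (-2.5)     # Pareto service, mean 1, alpha = 2.5
print(mean_busy_servers(lam, exp_surv, 10.0))
print(mean_busy_servers(lam, par_surv, 10.0))

The two answers differ even though both service distributions have mean 1, which is exactly the sensitivity to the distribution beyond the mean discussed above.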
So far, we have only considered Poisson arrival processes. We chose Poisson arrival processes
because, with them, it is easier to make our main points, and because they are often
reasonable in applications. However, we could also employ approximation methods to study the
partitioning of more general (stationary) G/G inputs. In particular, we could use approximations
for aggregating and splitting of arrival streams in the queueing network analyzer (QNA)
in Whitt (1983) to first calculate an SCV for the arrival process of each server group and then
calculate approximate performance measures. When we go to this more general setting, the
arrival-process variability then also has an impact. With non-Poisson arrival processes, the
partitioning problem nicely illustrates how a performance-analysis software tool such as QNA
can be conveniently applied to study a design problem. In that regard, this paper parallels our
application of QNA to study the best order for queues in series in Whitt (1985b).
It should be noted, however, that the QNA formulas for superposition (aggregation) and
splitting assume independence. The independence seems reasonable for aggregation, but may
fail to properly represent splitting. For aggregation, the assumed independence is among the
arrival processes for the different classes to be superposed, which we have already assumed in
the Poisson case.
For splitting, we assume that the class identities of successive arrivals are
determined by independent trials. Thus, if c_a^2 is the original arrival-process SCV and p_i is the
probability that each arrival belongs to class i, then the resulting approximation for the class-i
arrival-process SCV c_{a,i}^2 from Section 4.4 of Whitt (1983) is

c_{a,i}^2 = p_i c_a^2 + 1 − p_i,     (8.4)

which approaches the value 1 as p_i → 0. It is exact for renewal processes and is
consistent with limits to the Poisson for more general stationary point processes. However, in
applications it is possible that burstiness (high variability) may be linked to the class attributes,
so that a cluster of arrivals in the original process all may tend to be associated with a common
class. That means the independence condition would be violated. Moreover, as a consequence,
the actual SCV's associated with the split streams should be much larger than predicted by
(8.4). In such a situation it may be better to rely on measurements, as discussed in Fendick
and Whitt (1989).
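The splitting approximation (8.4) is simple enough to state as a one-line function; the sketch below (ours, not QNA code) just shows how the class SCV drifts toward 1 as the splitting probability shrinks, regardless of how bursty the original stream is.

def split_scv(c2_a, p_i):
    """Class-i arrival-process SCV under independent splitting, cf. (8.4)."""
    return p_i * c2_a + 1.0 - p_i

for p in (0.5, 0.1, 0.01):
    print(p, split_scv(8.0, p))   # a stream with SCV 8 looks nearly Poisson when thinned

This is precisely why correlated (non-independent) splitting, as discussed above, can leave the split streams much more variable than (8.4) predicts.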
9. Conclusions
We have shown how to evaluate the costs and benefits of (1) partitioning an M/G/s system
into independent subsystems by classifying customers according to their service times, assuming
that they can be estimated upon arrival, and (2) combining independent M/G/s systems with
different service-time distributions into larger aggregate M/G/s systems. When the service-time
distributions are nearly the same in component systems, then greater efficiency usually
can be obtained by combining the systems as indicated in Smith and Whitt (1981). On the
other hand, if the service-time distributions are very different, then it may be better not to
combine the systems. Previously established simple approximations for M/G/s performance
measures make it possible to evaluate alternatives quantitatively very rapidly. Afterwards, the
conclusions can be confirmed by more involved numerical algorithms, computer simulations or
system measurements.
--R
"Exponential Approximations for Tail Probabilities, I: Waiting Times,"
"An Exact FCFS Waiting Time Analysis for a General Class of G/G/s Queueing Systems,"
"Sensitivity to the Service-Time Distribution in the Nonstationary Erlang Loss Model,"
"The Physics of the M t =G=1 Queue,"
"Heavy-Traffic Limits for Queues with Many Exponential Servers,"
"Server Staffing to Meet Time-Varying Demand,"
"Queueing Processes Associated with Airline Passengers Check- In,"
"On Pooling in Queueing Networks,"
"An Approximation Formula L
"An Algorithm for Ph/Ph/c Queues,"
"A Numerical Solution for the Multi-Server Queue with Hyperexponential Service Times,"
"Resource Sharing for Efficiency in Traffic Systems,"
"The Queueing Network Analyzer,"
"Uniform Conditional Variability Ordering of Probability Distributions,"
"The Best Order for Queues in Series,"
"Understanding the Efficiency of Multi-Server Service Systems,"
"Approximations for the GI/G/m Queue,"
--TR
--CTR
Hui-Chih Hung , Marc E. Posner, Allocation of jobs and identical resources with two pooling centers, Queueing Systems: Theory and Applications, v.55 n.3, p.179-194, March 2007
Gans , Yong-Pin Zhou, Call-Routing Schemes for Call-Center Outsourcing, Manufacturing & Service Operations Management, v.9 n.1, p.33-50, January 2007 | resource sharing;multiserver queues;service-system design;queues;Service Systems with Express Lines;service systems |
339353 | A Multigrid Algorithm for the Mortar Finite Element Method. | The objective of this paper is to develop and analyze a multigrid algorithm for the system of equations arising from the mortar finite element discretization of second order elliptic boundary value problems. In order to establish the inf-sup condition for the saddle point formulation and to motivate the subsequent treatment of the discretizations, we first revisit briefly the theoretical concept of the mortar finite element method. Employing suitable mesh-dependent norms we verify the validity of the Ladyzhenskaya--Babuska--Brezzi (LBB) condition for the resulting mixed method and prove an L2 error estimate. This is the key for establishing a suitable approximation property for our multigrid convergence proof via a duality argument. In fact, we are able to verify optimal multigrid efficiency based on a smoother which is applied to the whole coupled system of equations. We conclude with several numerical tests of the proposed scheme which confirm the theoretical results and show the efficiency and the robustness of the method even in situations not covered by the theory. | Introduction
The mortar method as a special domain decomposition methodology appears to be
particularly attractive because different types of discretizations can be employed in
different parts of the domain. It has been analyzed in a series of papers [5, 6, 20]
mainly in connection with second order elliptic boundary value problems of the form

-div(a(x) grad u) = f in Ω,
u = 0 on Γ_D,                                    (1.1)
a(x) ∂u/∂n = g on Γ_N := Γ \ Γ_D,

where a(x) is a (sufficiently smooth) uniformly positive definite matrix in the bounded
domain Ω ⊂ R^d and Γ_D is a subset of the boundary Γ = ∂Ω. Suppose
that Ω is decomposed into non-overlapping subdomains,

Ω̄ = ⋃_k Ω̄_k.                                    (1.2)

Let H^s(Ω) denote the usual Sobolev spaces endowed with the Sobolev norms ‖·‖_{s,Ω},
and let H^1_{0,D}(Ω) be the closure in H^1(Ω) of all C^∞-functions vanishing on Γ_D. Although
this is not our motivation, a common approach to facilitate parallel computations is
to seek a variational formulation of (1.1) with respect to the product space

X_δ := ∏_k H^1(Ω_k)

endowed with the norm

‖v‖²_{1,δ} := Σ_k ‖v‖²_{1,Ω_k}.                    (1.4)

The space H^1_{0,D}(Ω) is determined as a subspace of X_δ by appropriate linear con-
straints. Corresponding discretizations lead to saddle point problems. The central
objective of this paper is to develop a multigrid method for the efficient solution of
such indefinite systems of equations. According to standard multigrid convergence
theory the main tasks are to establish appropriate approximation properties in terms
of direct estimates as well as to design suitable smoothing procedures which give rise
to corresponding inverse estimates.
The derivation of these multigrid ingredients is, of course, based on the stability
of the discretizations which in turn hinges on a proper formulation of the continuous
problem (1.1). In this regard the subspace of those functions for which the jumps
across the interfaces of neighboring subdomains belong to the trace space H 1=2
a more suitable framework than the full space X ffi . Since H 1=2
00 is endowed with a
strictly stronger norm than H 1=2 , the concealed extensions of trace functions to the
domain require special care in the study of mortar elements. In fact, a complete
verification of the inf-sup condition is usually circumvented.
In order to establish the facts required for the above objectives in a coherent
manner we briefly reformulate in Section 2 first an analytical basis of the mortar
method. A concept with an explicit relation to the trace spaces H 1=2
00 has been given
in the recent investigation [4] which also contains a proof of the inf-sup condition.
Our present considerations, in particular a verification of the ellipticity in the discrete
case leads us to adjust the norms and to use mesh-dependent norms as in [20]. Once
a decision on the norms has been done, the analysis proceeds almost independently
of the fact whether it is done in the framework of saddle point problems or of the
theory of nonconforming elements.
The results of Sections 2 and 3 will be used in Section 4 to estimate the convergence
of the solutions of the discrete problems in the L 2 norm. This will serve
as a crucial ingredient of the multigrid convergence analysis in Section 5. Finally,
in Section 6 we present some numerical experiments, demonstrating h-independent
convergence rates for domains with one and more cross-points, distorted grids, different
mesh sizes at the interfaces and strongly varying diffusion constants.
The extension to more robust smoothers, a comparison with other transforming
iterations (cf. [19, 2]) and the application to noncontinuous cross-points and other
mortar situations (cf. [18]) will be presented in a forthcoming paper.
We will always denote by c a generic constant which does not depend on any of
the parameters involved in respective estimates but may assume different values at
each occurrence.
2 The Continuous Problem - A Characterization of H^1_{0,D}(Ω)
For simplicity we will assume throughout the rest of the paper that the domain
\Omega ae R d as well as the
subdomains\Omega k in (1.2) are polygonal.
and\Omega l share a
common interface, we set -
-\Omega l . The interior faces form the skeleton
k;l
will always be assumed to be the union of polygonal subsets of the
boundaries of
the\Omega k . Often such a decomposition is called geometrically conforming.
0;D is characterized as a subspace of X ffi , the space
is encountered. Here H(div; \Omega\Gamma is the space of all vector fields in L 2
d whose (weak)
divergence is in L
denotes the outward normal and q \Delta p is the standard scalar
product on R d . We recall from Proposition III.1.1 in [12] that
and the domain decomposition concept discussed in [12] is based upon (2.2). Note
that this characterization involves constraints with q 2 H 0;N (div; \Omega\Gamma which are global
in the sense that their restrictions to any interface \Gamma kl does not necessarily define
a bounded functional on the corresponding restriction of traces. By contrast the
mortar method attempts to reexpress (2.2) in terms of the jumps [v] of
across the interfaces \Gamma kl as
where
This, however, suggests that the individual terms (q
are well-defined
which requires restricting the jumps [v]. While this has been done on a discrete
level in [5, 6], a rigorous analysis has to be based on a corresponding continuous
formulation which requires suitable Sobolev spaces for the functions that live on the
skeleton S.
To this end, recall that for any (sufficiently regular) manifold \Gamma the Sobolev
spaces H s (\Gamma) can be defined by their intrinsic norms (see e.g. [14], Section 1.1.3 or
[16], Section 7.3 and [7] for a short summary tailored to the needs of finite element
discretizations) or alternatively, when \Gamma is part of a boundary, as a trace space. In
is not an integer,
kvk
@\Omega =v
kwk
is an equivalent norm for H s\Gamma1=2
(@\Omega\Gamma2 see e.g. [14], Theorem 1.5.1.2. Here and in
the sequel we will not distinguish between equivalent norms for the same space.
is a smooth subset of \Gamma, H s
consists of those elements
trivial extension ~
v by zero to all of \Gamma belongs to H s (\Gamma), cf. [16], p. 66. In the present
context the spaces H 1=2
are relevant, i.e.,
strictly contained continuously embedded subspace of H 1=2 (\Gamma 0 ), see e.g.
[14], Corollary 1.4.4.5 or [16], Theorem 11.4. By definition,
kzk
For later use we will also record the characterization of H 1=2
as an interpolation
space between L 2 (\Gamma kl ) and H 1
while
This can be realized, for instance, by the K-method [16], pp. 64-66, pp. 98-99.
Now we return to the characterization of H 1
0;D(\Omega\Gamma3 Whereas the constraints in
are non local, restricting the jumps [v] of elements in X ffi in a way that their
restriction to \Gamma kl belongs to H 1=2
indeed they become local. For this concept
the constraints are to be described through the product space
where (H 1=2
denotes the dual of H 1=2
suggests considering the
space [4]
endowed with the norm
The norms that will be introduced later for the treatment of the finite element
discretization are better understood from this norm than from (1.4).
Remark 2.1 X 00 is not a Hilbert space with respect to k \Delta k 1;ffi because H 1=2
not a closed subspace of H 1=2 (\Gamma kl ) with respect to the norm k \Delta k 1=2;\Gamma kl
Proof: By the remarks in the previous section, there exists a sequence fw n g
of C 1 functions with compact support in \Gamma kl such that kw n k 1=2;\Gamma kl
1. Since the w n admit uniformly bounded extensions
to all of
@\Omega k one can, in view of (2.5), construct a sequence of functions v n in
X 00 for which kv n k 1;ffi is bounded while kv n kX tends to infinity. Thus, the injection
Assuming that X 00 is
closed in X ffi when equipped with the weaker norm, the closed graph theorem would
lead to a contradiction. In fact, the boundedness of ' would imply the closedness of
the graph and hence the boundedness of ' \Gamma1 .
Remark 2.2 X 00 endowed with the norm k \Delta kX is a Hilbert space, and one has the
continuous embeddings
0;D is the closure with respect to k \Delta k
1;\Omega of all patch-wise smooth
globally continuous functions
on\Omega which vanish on D, the first inclusion is clear.
By the definition of X 00 we have for z 2
If fv n g is a Cauchy sequence in X 00 , it is also a Cauchy sequence in X ffi and converges
to some v 2 X ffi . From (2.11) it follows that f[v n is a Cauchy sequence in
converges to some g 2 H 1=2
00 (\Gamma kl ). By standard arguments we conclude
that
As a direct consequence we have
We now turn the problem (1.1) into a weak form based on the above characterization
of H 1
Z\Omega
Setting
Y
we consider the variational problem: find (u; -) 2 X 00 \Theta M such that
From (2.11) it follows that the operator
defined
by (Bv; -)
for any - 2 M , is bounded.
Moreover, the saddle point problem (2.14) satisfies the inf-sup condition. Indeed,
since the dual of M is M
sup
Now pick w kl 2 H 1=2
its extension to S by zero as w 0
kl . By definition of H 1=2
we have w 0
Hence assigning each \Gamma kl ae S to exactly one
(whose boundary contains \Gamma kl ), the local problems
kl
have unique solutions in H 1
defined by
z kl is easily
seen to belong to X 00 while, by construction
Since kz kl k
(see e.g. [12], p. 90) this confirms
the claim on the inf-sup condition.
Furthermore, we know from (2.12) that H 1
kvk
0;D , the bilinear form a(\Delta; \Delta) is V -elliptic.
3 The Discrete Problem
We will now turn to a conforming finite element discretization of (2.14). Throughout
the remainder of this paper we will restrict the discussion to the bivariate case
2. For each
subdomain\Omega k we choose a family of (conforming) triangulations
independently of the neighboring subdomains, i.e., the nodes in T k;h that belong
to \Gamma kl need not match with the nodes of T l;h . The corresponding spaces of piecewise
linear finite elements on T k;h are denoted by S h (T k;h ). We set
Y
i.e., the functions in X h are continuous at the cross-points of the polygonal subdo-
mains\Omega k . We associate with each interface \Gamma kl the non-mortar-side which by the
usual convention
while\Omega l is the mortar-side. Let T kl;h be the space of all continuous
piecewise linear functions on \Gamma kl on the partition induced by the triangulation
T k;h on the non-mortar-side, under the additional constraint that the elements in
T kl;h are constant on the two intervals containing the end points of \Gamma kl . Thus the dimension
of T kl;h agrees with the dimension of ~
The space of discrete multipliers is defined as
Y
Furthermore we denote the kernel of the restriction operator as
For simplicity of our notation, we assume that each \Gamma kl corresponds to one edge
of the polygonal
domain\Omega k . By including additional cross-points \Gamma kl can be divided
into parts fl i , where the mortar side of each fl i can be k or l, cf. [6]. Since this
extension does not change the analysis we will not further burden our notation with
such distinctions.
For convenience, we have labelled the finite element spaces by a global mesh
size parameter h. On the other hand, since the mortar method aims at combining
different possibly independent discretizations, we will admit different individual
discretization parameters on the subdomains. Whenever this is to be stressed, we
will denote by h k the mesh size
on\Omega k . This is actually related to the choice of the
mortar-side. In principle, one may choose the Lagrange multipliers on each part of
the skeleton from either adjacent subdomain. It will turn out, however, that the
side with the larger mesh size is the right choice if the corresponding adjacent mesh
sizes differ very much. Nevertheless, in order to avoid a severe cluttering of indices
and to keep the essence of the reasoning as transparent as possible, we will generally
suppress an explicit distinction of local mesh sizes. Instead we will always tacitly
assume the following convention.
Hypothesis (M). If each \Gamma kl ae S is labelled such
l is the mortar side, then
this choice of the mortar side has been made such that
ch k
holds with c being a constant of moderate size.
Thus the general convention will be that whenever h appears in a summand
related to \Gamma kl it is to be understood as h k , the larger of the two adjacent mesh sizes,
while otherwise h stands for the globally maximal mesh size. It will be seen that
with this choice the stability of the discretization holds uniformly for arbitrarily
varying mesh sizes.
The main handicap of all attempts to analyze the mortar element method is the
fact that we have
1;\Omega l
recall Remark 2.1, c.f. also Lemma 3.5 below. Therefore, following [20], we will use
mesh-dependent norms. Setting
let
\Gamma1=2;h :=
Whenever a distinction of local mesh sizes matters, the global h in (3.5)-(3.6) has
to be replaced by h k in the summands for \Gamma kl , i.e. in agreement with Hypothesis
(M) by the larger value of the mesh size of the neighboring subdomains.
Obviously, we have in analogy to (2.11), also by definition
In this framework we will prove that
is a stable discretization of (2.14).
In order to simplify the treatment of the trace spaces, let
be a partition of the interval [- which represents an interface \Gamma kl . We always assume
that such partitions are quasi-uniform since inverse estimates will be frequently
used. Motivated by the setting (3.2) of ~
T kl;h and T kl;h we consider two subspaces of
the space of continuous piecewise linear functions on [- h be the subspace
of those functions that vanish at the endpoints - 0 and - p , and let T h be the subspace
of those functions that are constant on the first and on the last interval. So S h and
have the same dimension p \Gamma 1.
For convenience, we suppress the explicit reference to the interval [-
there is no risk of confusion. In particular, the standard inner product on
will be denoted by (\Delta; \Delta) 0 , and the associated L 2 -norm by k \Delta k 0 .
Lemma 3.1 The projectors defined by
are uniformly bounded in L 2 , specifically
kfk
Proof: Since we are dealing with the 2-dimensional case, the proof is easy. For
be defined by v h (- i
and v h agree on [-
Z
Z
On the other hand, one obtains for the first (and last) interval
Z
Z
Z
where D := (-
Z
Z
Summing over all intervals and using Young's inequality yields
which proves (3.11).
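The interplay between S_h and T_h in this lemma is easy to examine numerically. The following pure-Python sketch (our own illustration; the basis construction mirrors the definitions of S_h and T_h above) assembles the pairing matrix with entries ∫ φ_i ψ_j ds between the interior hat basis of S_h and the end-modified basis of T_h on a uniform partition, using Simpson's rule per element, which is exact because the integrands are piecewise quadratic.

def hat(x, nodes, i):
    """Standard piecewise-linear hat function at interior node i (1 <= i <= p-1)."""
    xl, xc, xr = nodes[i - 1], nodes[i], nodes[i + 1]
    if xl <= x <= xc:
        return (x - xl) / (xc - xl)
    if xc <= x <= xr:
        return (xr - x) / (xr - xc)
    return 0.0

def mult(x, nodes, j):
    """Basis of T_h: hats at interior nodes, made constant on the first and last interval."""
    p = len(nodes) - 1
    if j == 1 and nodes[0] <= x <= nodes[1]:
        return 1.0
    if j == p - 1 and nodes[p - 1] <= x <= nodes[p]:
        return 1.0
    return hat(x, nodes, j)

def pairing_matrix(nodes):
    """G[i][j] = integral of phi_i * psi_j, Simpson's rule per element (exact here)."""
    p = len(nodes) - 1
    G = [[0.0] * (p - 1) for _ in range(p - 1)]
    for e in range(p):
        a, b = nodes[e], nodes[e + 1]
        m = 0.5 * (a + b)
        for i in range(1, p):
            for j in range(1, p):
                f = lambda x: hat(x, nodes, i) * mult(x, nodes, j)
                G[i - 1][j - 1] += (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))
    return G

nodes = [k / 6.0 for k in range(7)]      # uniform partition of [0, 1] with p = 6
for row in pairing_matrix(nodes):
    print(["%.4f" % v for v in row])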
Since the subsequent discussion involves also the special properties of the spaces
00 we briefly recall the following interpolation argument which, in principle, is
standard, see e.g. Proposition 2.5 in [16]. Suppose that Y; X are Hilbert spaces that
are continuously embedded in the Hilbert spaces Y; X , respectively, and that L is
a bounded linear operator from X to Y and from X to Y with norms C and C,
respectively. Then L is also a bounded linear operator from the interpolation space
bounded by
As a first application we state the following inverse estimate
which is perhaps only worth mentioning since a somewhat stronger norm than the
usual 1=2-norm appears on the left hand side. In fact, the standard inverse estimate
in H 1 ensures that kv h
and, by [14],
(1.4.2.1), p. 24, one has kv h k H 1= kv h k 1 one can take combined with (3.13) confirms (3.14).
We are now in a position to formulate the main result of this section.
Theorem 3.2 Assume that the triangulation in each
subdomain\Omega k is uniform and
that Hypothesis (M) is satisfied. The discretizations (3.8) based on the spaces X
defined by (3.1) and (3.2), respectively, satisfy the LBB-condition, i.e., there exists
some fi ? 0 such that
sup
holds uniformly in h.
Proof: The first part of the proof follows the lines of [20], where the LBB-condition
is proved for X h with a different norm. Given - 2 M h and \Gamma kl ae S, we define d kl
on the boundary of the subdomain on the non-mortar-side by
d kl :=
Here Q kl refers to the projector from the previous lemma when applied to L 2 (\Gamma kl ).
Since the triangulation
on\Omega k is assumed to be uniform, by Lemma 5.1 in [7], there
is an extension G k;h (d kl
Moreover, kd kl k
. Now we set
G k;h (d kl ) for x 2
From (3.12) in the proof of Lemma 3.1 we know that
Z
Z
-d kl ds - 3k-k 0;\Gamma kl kd kl k
On the other hand, the inverse inequality (3.14) yields
ch \Gamma1=2 kd kl k 0;\Gamma kl ; (3.20)
and it follows from (3.18) and (3.17) that kv kl k
ch \Gamma1=2 kd kl k 0;\Gamma kl .
we conclude, on account of (3.19) and (3.6), on one
hand, that
R
-[v kl ]ds - c k-k \Gamma1=2;h;\Gamma kl
and, on the other hand, also that
R
-[v kl ]ds - c k-k \Gamma1=2;h;\Gamma kl
which in summary yields
Z
-[v kl ]ds - ck-k \Gamma1=2;h;\Gamma kl
Now the assertion is obtained from (3.21) by summing over all \Gamma kl ae S.
Note that v kl in the preceding proof is constructed with respect to the non-
mortar-side. Therefore the larger mesh size enters into (3.20), and varying mesh
sizes are no problem in the proof.
We need two more approximation properties.
Remark 3.3 Let I h denote the Lagrange interpolation operator onto X h . Then we
have
ch 3=2 kvk
Proof: Let T be an element
in\Omega m and fl be one of its edges. By the trace theorem,
the mapping H continuous. By using the Bramble-
Hilbert lemma and the standard scaling argument we obtain
ch 3 jvj 2
By summing over all edges fl which lie on \Gamma kl , we obtain the assertion.
Remark 3.4 Given - 2 H 1=2 , there is a - h 2 T h such that
ch 1=2 k-k
Proof: To verify (3.23) consider the Lagrange interpolant L h at the interior points
to the intervals [- constants.
Since constants are reproduced, by the same arguments as those used in the proof
of the preceding remark one confirms that k- \Gamma L h -k 0 - chj-j 1 . Moreover, the L 2
projector
while also kP h . Thus, since H
applying (3.13) with
It remains to prove the ellipticity of a on the kernel V h of the operator B. In our
first approach we had a stabilizing term in the bilinear form to compensate (3.4).
This is not necessary. Recently, C. Bernardi informed us about some techniques
used now in [1]. When adapting it for obtaining a good alternative of (3.4), we
observed the analogy to Lemma 2.2 in [20]. For the reader's convenience we present
it with a short proof.
Lemma 3.5 Assume that the triangulations in each
subdomain\Omega k are shape regu-
lar. Then
1;\Omega l
kl;h be the L 2 -projector. If v h 2 V h , then by the
matching condition
R
the
jumps of H 1 -functions vanish in the sense of (2.12), we have P kl
From this, Remark 3.4, and the trace theorem we conclude that
)vj\Omega l
ch 1=2
kvj\Omega l
ch 1=2 (kvk
1;\Omega l
Finally, we divide by h 1=2 , and the proof is complete.
A direct consequence is obtained by the Cauchy-Schwarz inequality:
We note that we need the ellipticity of a h merely on V h and not on the larger set
for details cf. Lemma III.1.2 in [8]. First, the inequality
has been often used in the analysis of mortar elements. It has been proven in [6]
by a compactness argument as often found in proofs of non-standard inequalities of
Poincar'e-Friedrichs type. Moreover, from (3.25) and (3.26) it follows that
for
so that, in view of the definition (3.5), the proof of the ellipticity is complete.
Note again that the above reasoning remains valid under Hypothesis (M) concerning
the choice of the mortar side if the local mesh sizes vary significantly.
To keep the development as transparent as possible we have dispensed with
striving for utmost generality. An extension of the majority of the results to finite
element spaces with polynomials of higher degree is rather straightforward. Only a
comment on the corresponding version of Lemma 3.1 is worth mentioning. The rest
of this section is devoted to the adaptation of the lemma and may be skipped by
the reader.
Remark 3.6 Suppose that the family S h (T k;h ) in (3.8) consists of Lagrange finite
elements of degree n. One has to consider analogous spaces S h and T h consisting
of globally continuous piecewise polynomials of degree at most n with respect to the
partition (3.9). While the elements of S h vanish at the endpoints, it is consistent
with the usual setting, cf. [4], to require that the elements of T h have degree at most
on the first and last interval in order to ensure matching dimensions. The
essence of the proof for an analog to Lemma 3.1 is to show that
sup
(v
for some constant C independent of h.
So given any u h 2 S h we have to construct v h 2 T h such that (3.27) holds. As in
the proof of Lemma 3.1 we can choose v h to coincide with u h on the subset [-
so that it remains to analyse the first and last interval. By symmetry it suffices
to consider the first interval which for convenience is taken to be [0; 1]. Setting
h dx, or equivalently thatZv h
where z n is the largest zero of the n-th degree Legendre polynomial. Since v h has
degree the Gaussian quadrature formulae on [0; 1] are exact for the polynomials
Here the x k := (1 are the zeros of the n-th degree Legendre polynomial with
respect to the interval [0; 1] and the ae k 's are the weights. Since
and the latter estimate is sharp
because v h can be chosen to vanish at x Together with the inequality x 2 - x
we
This proves (3.27) with C := The remainder of the proof of Lemma
3.1 is now analogous when 3=8 is replaced by C.
A crucial ingredient of the convergence analysis of the multigrid algorithm is an
error estimate of the finite element solution. Although the existence of such an
estimate was mentioned in [4], no proof was provided there. Therefore, we will give
a proof here.
Up to now we have considered the mortar element method as a mixed method,
but one may interpret it as a non-conforming method with finite elements in
We will establish the error estimates in the framework of non-conforming elements.
On the other hand, we will do this by making use of the results for the mixed method
from the preceding section. We note that by Lemma 3.5 the norms k \Delta k 1;h and k \Delta k 1;ffi
are equivalent on V h .
In this context it is natural to assume H 2 -regularity, i.e.
Theorem 4.1 Suppose that the triangulations T h;k are shape regular and that the
variational problem (1.1) is H 2 -regular. Then the finite element solution u
satisfies
where h in the maximum of the mesh sizes of the triangulations.
Proof: By Strang's second lemma, see e.g. [8], p. 102 we have
R
a @u
@n [v h ]ds
Indeed, following [8], p. 104, integration by parts provides the following representation
of the consistency error in (4.2)
0;@\Omega
Z
Z
Z
@\Omega g(x)v h (x)ds
Z
Z
@n
Z
Z
@\Omega g(x)v h (x)ds
(a @u
@n
@n
Note that it is clear from (4.3) that the Lagrange multiplier - in (2.14) coincides
with a @u
@n . Since v h 2 V h , orthogonality (3.3) allows us to subtract an arbitrary
element h from the first factor so that
(a @u
@n
ka
@n
From the trace theorem we know that ka @u
@n
. Now
Remark 3.4 and Lemma 3.5 followed by the application of the Cauchy-Schwarz
inequality to the sum provide the estimate
jL u (v h )j - ch kuk
and the quotient in (4.2) is bounded by ch kuk
2;\Omega .
In order to establish a bound of the approximation error in (4.2), we start with
the approximation of u on each
2(\Omega\Gamma0 we can take
the interpolant in S h (T k;h ). In particular the cross-points may be chosen as nodal
points and we obtain interpolants which are continuous at the endpoints of each
. The well-known results for conforming P 1 -elements yield an estimate for the
approximation in the (larger) space
ch kuk
By Remark 3.3 and the triangle inequality, we obtain for \Gamma kl ae S:
ch 3=2 (kuk
2;\Omega l
Hence,
2;\Omega with I h u 2 X h . We do not say that I h u is contained
in V h . On the other hand, from the inf-sup condition in Theorem 3.2 above and
Remark III.4.6 in [8] we conclude that the estimate in X h yields an upper bound for
the approximation in the kernel V h ,
In particular, the general arguments above cover also the case with mesh-dependent
norms. Combining (4.5) and (4.6) yields
ch kuk
and the bound of the second term in the desired estimate (4.1) has been established.
To obtain the L 2 error estimate we move completely to the theory of nonconforming
elements. Here (4.5) and (4.7) provide the ingredients for the duality argument
in Lemma III.1.4 from [8] which yields (4.1) and completes the proof. The details
are obvious from the treatment of the Crouzeix-Raviart element in [8], p.106.
For completeness we mention that the error estimate in the energy norm can be
improved if the individual mesh sizes of the subdomains are incorporated. On the
other hand, the duality argument yields only a factor of if based on
the regularity estimate kuk
For an error estimate of the Lagrange multiplier the reader is referred to [20].
5 Multigrid Convergence Analysis
The saddle point problem (3.8) gives rise to a linear system of the form
!/
where the dimension of the vectors coincides with the dimension of the finite element
spaces X h and M h , respectively. For convenience, the same symbol is taken for
the finite element functions and their vector representations, and the index h is
suppressed whenever no confusion is possible.
We will always assume that the finite element basis functions are normalized
such that the Euclidean norm of the vectors k \Delta k ' 2
is equivalent to the L 2 -norm of
the functions, i.e.
When the equations (5.1) are to be solved by a multigrid algorithm, the design of
the smoothing procedure is the crucial point. Motivated by [10] (see also [9]) our
smoothing procedure will be based on the following concept. Suppose that C is a
preconditioner for A which, in particular, is normalized so that
and for which the linear system
e
is more easily solvable. In actual computations the vectors v; - are obtained by
implementing
where
is the Schur complement of (5.4).
In particular, we can divide the vector v into two blocks with the first block
associated to the nodal values in the interior of the subdomains and the second
block associated to the values on the skeleton. A non-diagonal preconditioner is
chosen only for the first blockB @
-C A =B @
Here, the system splits into two parts, and the Schur complement matrix
ff
Thas a simple structure. The dimension corresponds to the number of nodal points
from the non-mortar sides on the skeleton. Moreover, it consists of bands which are
coupled only through the points next to the cross-points. As a consequence, all the
other points can be eliminated with a very small fill-in.
The block matrix C 1 may be chosen as the ILU-decomposition of the corresponding
block of the given matrix A.
In order to facilitate the analysis we assume that also C 1 is a multiple of the
identity. Moreover we return to the original structure as in (5.1). Then the iteration
that will serve as a smoother in our multigrid scheme has the form
!/
where superscripts will always denote iteration indices. It is important to note that
h always satisfies the constraint, i.e.,
see [10]. Moreover (5.9) shows that the next iterate is independent of the old Lagrange
multiplier
h .
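For concreteness, one smoothing step can be written in a few lines of dense linear algebra. The sketch below (NumPy, purely for illustration; the actual implementation never forms the Schur complement densely) follows our reading of (5.9) with C = αI: the new iterate solves C(u^{m+1} − u^m) + B^T λ^{m+1} = f − A u^m subject to B u^{m+1} = g, and the final line checks the constraint property (5.10).

import numpy as np

def smoothing_step(A, B, f, g, u, alpha):
    """One relaxation step with C = alpha * I, via the Schur complement (1/alpha) B B^T."""
    r = f - A @ u                                   # residual of the first equation
    S = (B @ B.T) / alpha                           # Schur complement, cf. (5.7)
    lam = np.linalg.solve(S, B @ (u + r / alpha) - g)
    u_new = u + (r - B.T @ lam) / alpha
    return u_new, lam

rng = np.random.default_rng(0)
n, m = 12, 3
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)   # SPD stand-in for the stiffness matrix
B = rng.standard_normal((m, n)); f = rng.standard_normal(n); g = rng.standard_normal(m)
alpha = np.linalg.norm(A, 2)                        # chosen at least as large as the spectral radius of A
u1, lam1 = smoothing_step(A, B, f, g, np.zeros(n), alpha)
print(np.linalg.norm(B @ u1 - g))                   # essentially zero: the constraint holds, cf. (5.10)

Note that the old Lagrange multiplier never appears, in agreement with the remark above.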
The coarse grid correction of the multigrid scheme can be performed in the
standard manner since the finite element spaces X h ae X 00 and M h ae M are nested,
see e.g. [10] or [15], p. 235. The smoothing property (5.12) below even shows that
we can abandon the transfer step proposed in [10].
As usual, the analysis of the multigrid method will be based on two different
norms. The fine topology will be measured by the norm
and the coarse one by
We recall that -
Smoothing property: Assume that - smoothing
steps of the relaxation (5.9) with C := ffI are performed, then
ch \Gamma2
Approximation property: For the coarse grid correction u 2h one has
ch
Proof of the smoothing property: We note that (5.12) is stronger than the
original version of the property in [10]. There we could estimate the jjj \Delta jjj 2 -norm
only if the Lagrange multiplier - m
h was replaced by a suitable one which could be
determined by solving once more an equation with the matrix (5.7). R. Stevenson
[17] observed that the extra step yields the same Lagrange multiplier as another
smoothing step. So the following proof is based on Stevenson's idea. As usual, only
purely algebraic properties are used.
From (5.9) we have
We may extract a recursion for the u component
with the projector
The matrix M := P
ff A)P is symmetric. Since u 1 satisfies the constraint
Since the recursion (5.14) is linear, we may assume without loss of generality that
which implies multiply (5.14) by the matrix in order to
eliminate the inverse matrix and consider the first component. Then we obtain with
or
ff
From (5.16) it follows that for m - 3
ff
ff
Note the symmetry of the matrix on the right-hand side of (5.18). Since ff -
(A), the matrix I \Gamma 1
ff A is a contraction for the ' 2 -norm. Moreover the spectrum
of M is contained in [0,1], and we conclude by the usual spectral decomposition
argument
We have assumed ff - c h \Gamma2 and obtain now from (5.18)
ch \Gamma2
The m − 2 in the denominator may be replaced by m if we adapt the factor
c. This proves (5.12) for m ≥ 3. The cases m = 1 and m = 2 can be treated by
recalling (5.17) and the contraction properties established above.
Proof of the approximation property: The proof of the approximation property
depends on the individual elliptic problem. Here it is convenient to consider
the mortar elements as nonconforming elements, and we follow the treatment of
the multigrid algorithm for the nonconforming P 1 -element in [11]. For a current
approximation the residual
is given by
We recall from (5.2) that the Riesz-Fischer representation r 2 X h ae L
2(\Omega\Gamma of d
defined by
satisfies
Let now z be the solution of the auxiliary variational problem
Since the variational problem on the
domain\Omega is assumed to be H 2 -regular, we have
are the finite element approximations for
the corresponding discretizations
respectively. The latter is true since X 2h ae X h and thus
(5.21) holds for . At this point the L 2 -estimate established in Theorem 4.1
comes into play. It is applied to the auxiliary problem (5.23) providing
0;\Omega
ch 2 kzk
ch 2 krk
ch 2 kdk ' 2
ch
which completes the proof of the approximation property.
After having established a smoothing property and the associated approximation
property it is clear from the standard multigrid theory that the W-cycles yield an
h-independent convergence rate if sufficiently many smoothing steps are chosen, see
e.g. [15], Chapter 7 or [8], pp. 222-228.
In most practical realizations a better performance is observed for multigrid
iterations with V-cycles and only a few smoothing steps. It is a common experience
that multigrid methods converge in more general situations than those assumed in
theoretical proofs. Furthermore, the bounds which are obtained for the convergence
rate strongly depend on the constants appearing in (5.12) and (5.13). Therefore a
realistic quantitative appraisal of the proposed scheme has to be based in addition
on complementary numerical experiments. Below we will report on our numerical
investigations for various typical situations of practical interest including also cases
that are not covered by our theoretical considerations.
6 Numerical Experiments
We present numerical examples implemented in the software toolbox UG [3] and its
finite element library. The efficiency of the method will first be studied in detail
for a model situation which is consistent with the assumptions of the convergence
proof. In addition, we consider the robustness of the multigrid solver for mortar
finite element problems which are not covered by our analysis.
In all examples we use a multigrid method with a V-cycle and the simple
smoother (5.9), where is the Jacobi smoother for the local problems in
the
subdomains\Omega k . We want to point out that all presented convergence rates are
asymptotic rates.
Although the Schur complement of the smoothing matrix (5.7) is small and
could be assembled without losing the optimal complexity of the algorithm, in our
implementation the equation (5.5) is solved iteratively. Specifically, a few cg-steps,
preconditioned with a symmetric Gauss-Seidel relaxation for S in (5.7) are performed.
The implementation of the Gauss-Seidel procedure makes use of the graph structure
of the matrices A and B. Thus, it has complexity O(dimM h ).
Since the bilinear form b contains no differential operator, the condition number
of S is bounded independently of the mesh size h, and a bounded number of steps
for the inner iterations independent of the refinement level is sufficient. For the
numerical experiments, we prescribe an error reduction factor 0:1 - ae - 0:5 for the
approximate solution of (5.5).
In the first test series, we investigate the multigrid convergence for the model
problem
\Gammadiv a grad
in a quadrilateral
domain\Omega := (0; 1) 2 which is split into four
;\Omega 11 .
The diffusion coefficients are assumed to be constant in every subdomain and are
denoted as a 00 Figure 1).
We start with a test for the Laplace operator, i.e. a
on a regular grid (cf. Table 1). We compare the asymptotic convergence rates
of the linear multigrid method with V(1,1)-cycle for the mortar problem and the
regular grid distorted grid different step sizes
Figure
1: Grids with one cross-point
simplified problem Au additional constraints. The latter
corresponds to the decoupled boundary value problems where in every subdomain
the equation is solved with Neumann boundary conditions for the inner boundary
without requiring continuity. For the mortar problem, we use the smoother (5.9)
with an inner reduction factor ae = 0:5 and ae = 0:1, for the simplified problem we
use the Jacobi smoother diag(A). Our results on a regular grid show that the same
rates are obtained with the mortar coupling, even for only one Schur complement
iteration. The fifth column in the table refers to the configuration with a
shows that the convergence is independent of the variation
of the diffusion constants in the different subdomains for a regular grid.
level elements different diff. const. without mortar elem.
nb. of inner iter. 1 1-2 1-2
Table
1: Regular grid: asymptotic convergence rates for a linear multigrid iteration
with a V(1,1)-cycle and resulting number of inner iterations
Note that due to (5.10) the multigrid iterates are contained in V h and that the
problem (3.8) is elliptic in V h . Thus, the multigrid iteration can be accelerated by
embedding it into a cg-iteration, i.e., a cg-iteration preconditioned by a multigrid
V-cycle is performed. There is another advantage. It is often difficult to find the
optimal damping factor ff. In particular, when the discretization with the slightly
distorted grid in Figure 1 is used, the Jacobi smoother diag(A) is not convergent
without appropriate damping. Thus, the cg-method is used here for computing
the correct damping factors automatically. However, since Equation (5.5) is solved
only approximately in actual computations, the multigrid iterates are not exactly
contained in V h . In other contexts the cg-method may be very sensitive with respect
to this point, but in our tests it has turned out that 3 inner iteration steps are
sufficient in all cases (cf. the entries in the last row of Tables 1-3).
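The cg-acceleration used here is the standard preconditioned conjugate gradient loop with one multigrid V-cycle playing the role of the preconditioner. The sketch below (NumPy, with the V-cycle left abstract as a callable; a Jacobi stand-in is used only in the demo line) shows the outer iteration we have in mind.

import numpy as np

def pcg(apply_A, b, apply_prec, tol=1e-8, max_iter=200):
    """Conjugate gradients preconditioned by apply_prec (e.g. one multigrid V-cycle)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        a = rz / (p @ Ap)
        x = x + a * p
        r = r - a * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

D = np.arange(1.0, 101.0)                                  # toy SPD operator (diagonal)
x = pcg(lambda v: D * v, np.ones(100), lambda r: r / D)    # Jacobi stand-in for the V-cycle
print(np.linalg.norm(D * x - np.ones(100)))

The outer loop requires a symmetric positive definite operator, which is why it is applied on the constrained subspace where the problem is elliptic, as discussed above.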
The case in which the step sizes depend strongly on the
subdomain\Omega k was next
investigated. We observe better convergence if the Lagrange parameter on each
\Gamma kl is associated to the side of \Gamma kl with the coarser mesh. This is consistent with
Hypothesis (M). Although convergence is observed for a V(1,1)-cycle in all cases,
the asymptotic rate deteriorates for more than 100000 elements in the case of the
distorted grids and large jumps of the mesh sizes. On the other hand, the V(2,2)-
cycle turns out to be a robust preconditioner for the cg-iteration also in extreme
cases.
level elements regular grid distorted grid elements various step sizes
nb. of inner iter. 1-3 1-3 1-2
Table
2: Irregular grids: asymptotic convergence rates for cg with V(2,2)-cycle
In the next example we consider a typical mortar situation with several cross-
points. In Figure 2 large bricks are separated by thin channels. Fixing the diffusion
constant for the bricks to a we test the cases where the channels have higher
or lower permeability (a We perform the cg-method
with V(1,1)-cycle and two inner iterations. We obtain stable convergence rates if
the mortar side is on the side with the smaller diffusion constant and large step size,
resp.; otherwise the method may fail. The results in Figure 2 for the case a
show clearly that the diffusion is faster in the small channels.
level elements a
number of inner iterations 1-2 1-2 1-2
Table
3: Convergence for the example with several cross-points for cg with V(1,1)-
cycle
Finally, we apply the multigrid method to an example for a rotating geometry
with two circles which occurs for time dependent problems (cf. [18]). We use a cg-
iteration with a damped Jacobi smoothing diag A. Subdomains with curved
Figure
2: Example with several cross-points
boundaries are not covered by our theory since the approximation of the curved
boundaries induces an additional consistency error. Note that the exact Lagrange
parameter is piecewise constant and discontinuous for a linear solution. Thus, linear
functions cannot be represented by the mortar ansatz space. This results in worse
convergence rates. Nevertheless, the method is stable when a cg-iteration is applied,
preconditioned by a V(3,3)-cycle and 3 inner iterations for the Schur complement
equation (cf. Table 4). On the other hand, without cg-acceleration a V(4,4)-cycle
and a strong damping for the Jacobi smoother is required.
level elements convergence rate
9 2097152 0.37
Table
4: Rotating geometry (parallel computation on 128 processors)
In summary, we have demonstrated the robustness of the method with respect to
the number of subdomains, different step sizes in the subdomains, and varying diffusion
constants. Thus, this is a very efficient solver for mortar finite elements. The
convergence rates are independent of the mesh size and the number of refinement
levels. The results show clearly that the presented smoother with inexact solution of
the corresponding Schur complement is very efficient and that no further improvement
is expected from a more accurate solution of the Schur complement equation.
In practice, for a nested multigrid cycle the accuracy of the approximation error is
obtained within one or two V(1,1)-cycles.
Of course, our tests concern the robustness with respect to different mortar
situations, whereas the equations on the subdomains are simple. For more involved
problems the smoother C has to be replaced, e.g. by a more robust ILU smoother.
Nevertheless, smoothers which are decoupled from the mortar interfaces as in (5.8)
are recommended in order to retain the low complexity.
Our numerical experiments confirm that the quality of the solver depends strongly
on the right choice of the mortar side. In extreme cases the method diverges if the
Lagrange parameter is associated with the wrong side. Apparently a good rule of
thumb is to choose the Lagrange parameter for that side for which the quotient a=h 2
attains the smaller value.
--R
A class of iterative methods for solving saddle point problems
The mortar finite element method with Lagrange multipliers
The mortar element method for three dimensional finite elements.
"Nonlinear Partial Differential Equations and Their Applications"
Iterative methods for the solution of elliptic problems in regions partitionned in substructures
A Cascade algorithm for the Stokes equation
An efficient smoother for the Stokes problem.
Mixed and Hybrid Finite Element Methods
Approximation by finite element functions using local regulariza
Elliptic Problems in Nonsmooth Domains
The coupling of mixed and conforming finite element discretizations
On the convergence of multigrid methods with transforming smoothers.
Hierarchical a posteriori error estimators for mortar finite element methods with Lagrange multipliers.
--TR
--CTR
Micol Pennacchio, The Mortar Finite Element Method for the Cardiac Bidomain Model of Extracellular Potential, Journal of Scientific Computing, v.20 n.2, p.191-210, April 2004
V. John , P. Knobloch , G. Matthies , L. Tobiska, Non-nested multi-level solvers for finite element discretisations of mixed problems, Computing, v.68 n.4, p.313-341, September 2002
Analysis of mortar-type Q1rot/Q0 element and multigrid methods for the incompressible Stokes problem, Applied Numerical Mathematics, v.57 n.5-7, p.562-576, May, 2007
O. Steinbach, A natural domain decomposition method with non-matching grids, Applied Numerical Mathematics, v.54 n.3-4, p.362-377, August 2005
D. Braess , P. Deuflhard , K. Lipnikov, A subspace cascadic multigrid method for mortar elements, Computing, v.69 n.3, p.205-225, Nov. 2002 | saddle point problems;domain decomposition;mortar method;trace spaces |
339364 | A Theory of Single-Viewpoint Catadioptric Image Formation. | Conventional video cameras have limited fields of view which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. One important design goal for catadioptric sensors is choosing the shapes of the mirrors in a way that ensures that the complete catadioptric system has a single effective viewpoint. The reason a single viewpoint is so desirable is that it is a requirement for the generation of pure perspective images from the sensed images. In this paper, we derive the complete class of single-lens single-mirror catadioptric sensors that have a single viewpoint. We describe all of the solutions in detail, including the degenerate ones, with reference to many of the catadioptric systems that have been proposed in the literature. In addition, we derive a simple expression for the spatial resolution of a catadioptric sensor in terms of the resolution of the cameras used to construct it. Moreover, we include detailed analysis of the defocus blur caused by the use of a curved mirror in a catadioptric sensor. | Introduction
Many applications in computational vision require that a large field of view is imaged.
Examples include surveillance, teleconferencing, and model acquisition for virtual reality.
A number of other applications, such as ego-motion estimation and tracking, would also
benefit from enhanced fields of view. Unfortunately, conventional imaging systems are
severely limited in their fields of view. Both researchers and practitioners have therefore
had to resort to using either multiple or rotating cameras in order to image the entire scene.
One effective way to enhance the field of view is to use mirrors in conjunction
with lenses. See, for example, [Rees, 1970], [Charles et al., 1987], [Nayar, 1988], [Yagi
and Kawato, 1990], [Hong, 1991], [Goshtasby and Gruver, 1993], [Yamazawa et al., 1993],
[Bogner, 1995], [Nalwa, 1996], [Nayar, 1997a], and [Chahl and Srinivassan, 1997]. We refer
to the approach of using mirrors in combination with conventional imaging systems as catadioptric
image formation. Dioptrics is the science of refracting elements (lenses) whereas
catoptrics is the science of reflecting surfaces (mirrors) [Hecht and Zajac, 1974]. The combination
of refracting and reflecting elements is therefore referred to as catadioptrics.
As noted in [Rees, 1970], [Yamazawa et al., 1995], [Nalwa, 1996], and [Nayar and
Baker, 1997], it is highly desirable that a catadioptric system (or, in fact, any imaging
system) have a single viewpoint (center of projection). The reason a single viewpoint is so
desirable is that it permits the generation of geometrically correct perspective images from
the images captured by the catadioptric cameras. This is possible because, under the single
viewpoint constraint, every pixel in the sensed images measures the irradiance of the light
passing through the viewpoint in one particular direction. Since we know the geometry of
the catadioptric system, we can precompute this direction for each pixel. Therefore, we
can map the irradiance value measured by each pixel onto a plane at any distance from the
viewpoint to form a planar perspective image. These perspective images can subsequently
be processed using the vast array of techniques developed in the field of computational
vision that assume perspective projection. Moreover, if the image is to be presented to
a human, as in [Peri and Nayar, 1997], it needs to be a perspective image so as not to
appear distorted. Naturally, when the catadioptric imaging system is omnidirectional in
its field of view, a single effective viewpoint permits the construction of geometrically correct
panoramic images as well as perspective ones.
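As a concrete illustration of this precomputation (a sketch added here, not part of any particular sensor implementation), the following Python fragment computes the unit viewing direction through the single effective viewpoint for every pixel of a desired virtual perspective image; the irradiance of the catadioptric pixel whose precomputed direction is closest to each of these directions can then be copied into the perspective image. The function name, the focal length f (in pixels), and the rotation R orienting the virtual camera are all assumptions made for this example.

import numpy as np

def perspective_pixel_directions(width, height, f, R=np.eye(3)):
    # Unit viewing directions, through the single effective viewpoint, for every
    # pixel of a virtual perspective image with focal length f (in pixels),
    # principal point at the image centre, and orientation R.
    u, v = np.meshgrid(np.arange(width) - width / 2.0,
                       np.arange(height) - height / 2.0)
    d = np.stack([u, v, np.full_like(u, float(f))], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    return d @ R.T          # rotate the directions into the sensor frame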
In this paper, we take the view that having a single viewpoint is the primary design
goal for the catadioptric sensor and restrict attention to catadioptric sensors with a single
effective viewpoint [Baker and Nayar, 1998]. However, for many applications, such as robot
navigation, having a single viewpoint may not be a strict requirement [Yagi et al., 1994].
In these cases, sensors that do not obey the single viewpoint requirement can also be
used. Then, other design issues become more important, such as spatial resolution, sensor
size, and the ease of mapping between the catadioptric images and the scene [Yamazawa
et al., 1995]. Naturally, it is also possible to investigate these other design issues. For
example, Chahl and Srinivassan recently studied the class of mirror shapes that yield a
linear relationship between the angle of incidence onto the mirror surface and the angle of
reflection into the camera [Chahl and Srinivassan, 1997].
We begin this paper in Section 2 by deriving the entire class of catadioptric systems
with a single effective viewpoint, and which can be constructed using just a single
conventional lens and a single mirror. As we will show, the 2-parameter family of mirrors
that can be used is exactly the class of rotated (swept) conic sections. Within this class
of solutions, several swept conics are degenerate solutions that cannot, in fact, be used to
construct sensors with a single effective viewpoint. Many of these solutions have, however,
been used to construct wide field of view sensors with non-constant viewpoints. For these
mirror shapes, we derive the loci of the viewpoint. Some, but not all, of the non-degenerate
solutions have been used in sensors proposed in the literature. In these cases, we mention
all of the designs that we are aware of. A different, coordinate free, derivation of the fact
that only swept conic sections yield a single effective viewpoint was recently suggested by
Drucker and Locke [1996].
A very important property of a sensor that images a large field of view is its reso-
lution. The resolution of a catadioptric sensor is not, in general, the same as that of any
of the sensors used to construct it. In Section 3, we study why this is the case, and derive
a simple expression for the relationship between the resolution of a conventional imaging
system and the resolution of a derived single-viewpoint catadioptric sensor. We specialize
this result to the mirror shapes derived in the previous section. This expression should be
carefully considered when constructing a catadioptric imaging system in order to ensure
that the final sensor has sufficient resolution. Another use of the relationship is to design
conventional sensors with non-uniform resolution, which when used in an appropriate
catadioptric system have a specified (e.g. uniform) resolution.
Another optical property which is affected by the use of a catadioptric system is
focusing. It is well known that a curved mirror increases image blur [Hecht and Zajac,
1974]. In Section 4, we analyze this effect for catadioptric sensors. Two factors combine to
cause additional blur in catadioptric systems: (1) the finite size of the lens aperture, and
(2) the curvature of the mirror. We first analyze how the interaction of these two factors
causes defocus blur and then present numerical results for three different mirror shapes: the
hyperboloid, the ellipsoid, and the paraboloid. The results show that the focal setting of a
catadioptric sensor using a curved mirror may be substantially different from that needed in
a conventional sensor. Moreover, even for a scene of constant depth, significantly different
focal settings may be needed for different points in the scene. This effect, known as field
curvature, can be partially corrected using additional lenses [Hecht and Zajac, 1974].
2 The Fixed Viewpoint Constraint
The fixed viewpoint constraint is the requirement that a catadioptric sensor only measure
the intensity of light passing through a single point in 3-D space. The direction of the light
passing through this point may vary, but that is all. In other words, the catadioptric sensor
must sample the 5-D plenoptic function [Adelson and Bergen, 1991] [Gortler et al., 1996]
at a single point in 3-D space. The fixed 3-D point at which a catadioptric sensor samples
the plenoptic function is known as the effective viewpoint.
Suppose we use a single conventional camera as the only sensing element and a
single mirror as the only reflecting surface. If the camera is an ideal perspective camera
and we ignore defocus blur, it can be modeled by the point through which the perspective
projection is performed; i.e. the effective pinhole. Then, the fixed viewpoint constraint
requires that each ray of light passing through the effective pinhole of the camera (that
was reflected by the mirror) would have passed through the effective viewpoint if it had
not been reflected by the mirror. We now derive this constraint algebraically.
2.1 Derivation of the Fixed Viewpoint Constraint Equation
Without loss of generality we can assume that the effective viewpoint v of the catadioptric
system lies at the origin of a Cartesian coordinate system. Suppose that the effective
pinhole is located at the point p. Then, again without loss of generality, we can assume that
the z-axis ẑ lies in the direction of the vector from v to p. Moreover, since perspective projection is rotationally
symmetric about any line through p, the mirror can be assumed to be a surface of revolution
about the z-axis ẑ. Therefore, we work in the 2-D Cartesian frame (v, r̂, ẑ), where r̂ is a unit
vector orthogonal to ẑ, and try to find the 2-dimensional profile of the mirror z(r).
Finally, if the distance from v to p is denoted by the parameter c, we have p = (0, c). See
Figure 1 for an illustration of the coordinate frame.
We begin the translation of the fixed viewpoint constraint into symbols by denoting
the angle between an incoming ray from a world point and the r-axis by θ. Suppose that
this ray intersects the mirror at the point (r, z). Then, since we assume that it also passes
through the origin, we have the relationship:

   tan θ = z / r.                                                (1)

If we denote the angle between the reflected ray and the (negative) r-axis by α, we also
have:

   tan α = (c − z) / r,                                          (2)

since the reflected ray must pass through the pinhole p = (0, c). If β is the angle
between the z-axis and the normal to the mirror at the point (r, z), we have:

   tan β = − dz/dr.                                              (3)

Our final geometric relationship is due to the fact that we can assume the mirror to be
specular. This means that the angle of incidence must equal the angle of reflection. So,
if γ is the angle between the reflected ray and the z-axis, we have γ = 90° − α and,
equating the angles on either side of the normal, θ + 2β + γ = 90°. (See
Figure 1 for an illustration of this constraint.) Eliminating
γ from these two expressions and rearranging gives:

   α = θ + 2β.
In Figure 1 we have drawn the image plane as though it were orthogonal to the z-axis ẑ, indicating
that the optical axis of the camera is (anti-)parallel to ẑ. In fact, the effective viewpoint v and the axis of
symmetry of the mirror profile z(r) need not necessarily lie on the optical axis. Since perspective projection
is rotationally symmetric with respect to any ray that passes through the pinhole p, the camera could be
rotated about p so that the optical axis is not parallel to the z-axis. Moreover, the image plane can be
rotated independently so that it is no longer orthogonal to ẑ. In this second case, the image plane would
be non-frontal. This does not pose any additional problem since the mapping from a non-frontal image
plane to a frontal image plane is one-to-one.
[Figure 1 appears here. Labeled elements: effective viewpoint v = (0,0); effective pinhole p = (0,c); the image plane; the mirror point (r,z) and its normal; a world point and its image.]
Figure 1: The geometry used to derive the fixed viewpoint constraint equation. The viewpoint
v = (0,0) is located at the origin of a 2-D coordinate frame (v, r̂, ẑ), and the pinhole of the camera
p = (0,c) is located at a distance c from v along the z-axis ẑ. If a ray of light, which would otherwise
have passed through v, is reflected at the mirror point (r, z), the angle between the ray of light and r̂
is θ = tan⁻¹(z/r). If the ray is then reflected and passes through the pinhole p, the angle it makes
with r̂ is α = tan⁻¹((c − z)/r), and the angle it makes with ẑ is γ = 90° − α. Finally, if
β = tan⁻¹(−dz/dr)
is the angle between the normal to the mirror at (r, z) and ẑ, then by the fact that the angle of
incidence equals the angle of reflection, we have the constraint that α = θ + 2β.
Then, taking the tangent of both sides and using the standard rule for expanding the
tangent of a sum,

   tan(A + B) = (tan A + tan B) / (1 − tan A tan B),

we have:

   tan α = tan(θ + 2β) = (tan θ + tan 2β) / (1 − tan θ tan 2β),   where   tan 2β = 2 tan β / (1 − tan²β).

Substituting from Equations (1), (2), and (3) yields the fixed viewpoint constraint equation:

   (c − z)/r = [ −2 (dz/dr) + (z/r)(1 − (dz/dr)²) ] / [ 1 − (dz/dr)² + 2 (z/r)(dz/dr) ],

which when rearranged is seen to be a quadratic first-order ordinary differential equation:

   r (c − 2z) (dz/dr)² − 2 (r² + c z − z²) (dz/dr) + r (2z − c) = 0.
2.2 General Solution of the Constraint Equation
The first step in the solution of the fixed viewpoint constraint equation is to solve it as a
quadratic in dz/dr to yield an expression for the surface slope:

   dz/dr = [ (r² + c z − z²) ± √( (r² + c z − z²)² + r² (c − 2z)² ) ] / [ r (c − 2z) ].

The next step is to substitute y = z − c/2, which yields a similar equation for dy/dr. A
further change of variables then allows the equation to be rearranged so that both sides can
be integrated with respect to r, introducing a constant of integration C. Expressing C in
terms of a single parameter k, back substituting, rearranging, and simplifying,
we arrive at the two equations which comprise the general solution of the fixed viewpoint
constraint equation:

   (z − c/2)² − r² (k/2 − 1) = (c²/4) (k − 2)/k      (k ≥ 2),        (16)

   (z − c/2)² + r² (1 + c²/(2k)) = (2k + c²)/4       (k > 0).        (17)

In the first of these two equations, the constant parameter k is constrained by k ≥ 2 (rather
than k > 0), since 0 < k < 2 leads to complex solutions.
2.3 Specific Solutions of the Constraint Equation
Together, Equations (16) and (17) define the complete class of mirrors that satisfy the fixed
viewpoint constraint. A quick glance at the form of these equations reveals that the mirror
profiles form a 2-parameter (c and k) family of conic sections. Hence, the shapes of the
3-D mirrors are all swept conic sections. As we shall see, however, although every conic
section is theoretically a solution of one of the two equations, a number of the solutions are
degenerate and cannot be used to construct real sensors with a single effective viewpoint.
We will describe the solutions in detail in the following order:
Planar Solutions: Equation (16) with k = 2 and c > 0.
Conical Solutions: Equation (16) with k ≥ 2 and c = 0.
Spherical Solutions: Equation (17) with k > 0 and c = 0.
Ellipsoidal Solutions: Equation (17) with k > 0 and c > 0.
Hyperboloidal Solutions: Equation (16) with k > 2 and c > 0.
For each solution, we demonstrate whether it is degenerate or not. Some of the
non-degenerate solutions have actually been used in real sensors. For these solutions, we
mention all of the existing designs that we are aware of which use that mirror shape. Several
of the degenerate solutions have also been used to construct sensors with a wide field of
view, but with no fixed viewpoint. In these cases we derive the loci of the viewpoint.
There is one conic section that we have not mentioned: the parabola. Although the
parabola is not a solution of either equation for finite values of c and k, it is a solution of
Equation (16) in the limit that c → ∞ and k → ∞ with c/k = h, a constant. These limiting
conditions correspond to orthographic projection. We briefly discuss the orthographic case
and the corresponding paraboloid solution in Section 2.4.
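Before describing the individual solutions, it may help to see them evaluated numerically. The following Python sketch (ours, purely illustrative) computes the branch of z(r) with z ≤ c/2 for either family of solutions, Equation (16) or Equation (17).

import numpy as np

def mirror_profile(r, c, k, family=16):
    # z(r) for the two solution families of the fixed viewpoint constraint,
    # taking the branch with z <= c/2 in each case.
    r = np.asarray(r, dtype=float)
    if family == 16:      # planar, conical and hyperboloidal solutions (k >= 2)
        rhs = (c**2 / 4.0) * (k - 2.0) / k + r**2 * (k / 2.0 - 1.0)
    else:                 # family 17: spherical and ellipsoidal solutions (k > 0)
        rhs = (2.0 * k + c**2) / 4.0 - r**2 * (1.0 + c**2 / (2.0 * k))
    return c / 2.0 - np.sqrt(rhs)

print(mirror_profile(0.05, c=1.0, k=2.0))    # planar mirror: z = c/2 = 0.5

For Equation (17) the quantity under the square root becomes negative once r exceeds the rim of the ellipsoid, which simply marks the edge of the mirror.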
2.3.1 Planar Mirrors
In Solution (16), if we set k = 2 and c > 0, we get the cross-section of a planar mirror:

   z = c/2.                                                       (18)

As shown in Figure 2, this plane is the one which bisects the line segment joining the
viewpoint v and the pinhole p.
[Figure 2 appears here. Labeled elements: effective viewpoint v = (0,0); effective pinhole p = (0,c); the planar mirror z = c/2; the image plane; a world point and its image.]
Figure 2: The plane z = c/2 is a solution of the fixed viewpoint constraint equation. Conversely,
it is possible to show that, given a fixed viewpoint and pinhole, the only planar solution is the
perpendicular bisector of the line joining the pinhole to the viewpoint. Hence, for a fixed pinhole,
two different planar mirrors cannot share the same effective viewpoint. For each such plane the
effective viewpoint is the reflection of the pinhole in the plane. This means that it is impossible
to enhance the field of view using a single perspective camera and an arbitrary number of planar
mirrors, while still respecting the fixed viewpoint constraint. If multiple cameras are used then
solutions using multiple planar mirrors are possible [Nalwa, 1996].
The converse of this result is that for a fixed viewpoint v and pinhole p, there is
only one planar solution of the fixed viewpoint constraint equation. The unique solution is
the perpendicular bisector of the line joining the pinhole to the viewpoint:

   z = c/2.                                                       (19)

To prove this, it is sufficient to consider a fixed pinhole p, a planar mirror with unit normal
n̂, and a point q on the mirror. Then, the fact that the plane is a solution of the fixed
viewpoint constraint implies that there is a single effective viewpoint v = v(p, n̂, q). To be
more precise, the effective viewpoint is the reflection of the pinhole p in the mirror; i.e. the
single effective viewpoint is:

   v = p − 2 [(p − q) · n̂] n̂.                                     (20)

Since the reflection of a single point in two different planes is always two different points,
the perpendicular bisector is the unique planar solution.
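A small numerical sketch of Equation (20) (our own illustration; the function name is hypothetical): the effective viewpoint of a planar catadioptric sensor is just the reflection of the pinhole in the mirror plane.

import numpy as np

def planar_effective_viewpoint(p, n_hat, q):
    # Reflection of the pinhole p in the plane through the point q with unit
    # normal n_hat (Equation (20)).
    n_hat = n_hat / np.linalg.norm(n_hat)
    return p - 2.0 * np.dot(p - q, n_hat) * n_hat

c = 1.0
print(planar_effective_viewpoint(np.array([0.0, 0.0, c]),
                                 np.array([0.0, 0.0, 1.0]),
                                 np.array([0.0, 0.0, c / 2.0])))   # the origin, i.e. v

For the plane z = c/2 this recovers the effective viewpoint at the origin, in agreement with Equation (19).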
An immediate corollary of this result is that for a single fixed pinhole, no two different
planar mirrors can share the same viewpoint. Unfortunately, a single planar mirror does
not enhance the field of view, since, discounting occlusions, the same camera moved from
p to v and reflected in the mirror would have exactly the same field of view. It follows
that it is impossible to increase the field of view by packing an arbitrary number of planar
mirrors (pointing in different directions) in front of a conventional imaging system, while
still respecting the fixed viewpoint constraint. On the other hand, in applications such
as stereo where multiple viewpoints are a necessary requirement, the multiple views of a
scene can be captured by a single camera using multiple planar mirrors. See, for example,
[Goshtasby and Gruver, 1993], [Inaba et al., 1993], and [Nene and Nayar, 1998].
This brings us to the panoramic camera proposed by Nalwa [1996]. To ensure a
single viewpoint while using multiple planar mirrors, Nalwa [1996] arrived at a design that
uses four separate imaging systems. Four planar mirrors are arranged in a square-based
pyramid, and each of the four cameras is placed above one of the faces of the pyramid.
The effective pinholes of the cameras are moved until the four effective viewpoints (i.e.
the reflections of the pinholes in the mirrors) coincide. The result is a sensor that has a
single effective viewpoint and a panoramic field of view of approximately 360° × 50°. The
panoramic image is of relatively high resolution since it is generated from the four images
captured by the four cameras. This sensor is straightforward to implement, but requires
four of each component: i.e. four cameras, four lenses, and four digitizers. (It is, of course,
possible to use only one digitizer but at a reduced frame rate.)
2.3.2 Conical Mirrors
In Solution (16), if we set c = 0 and k ≥ 2, we get a conical mirror with circular cross
section:

   z = r √( (k − 2)/2 ).                                          (21)

See Figure 3 for an illustration of this solution. The angle at the apex of the cone is 2τ,
where:

   τ = tan⁻¹ √( 2/(k − 2) ).                                      (22)

This might seem like a reasonable solution, but since c = 0, the pinhole of the camera must
be at the apex of the cone. This implies that the only rays of light entering the pinhole
from the mirror are the ones which graze the cone and so do not originate from (finite
extent) objects in the world (see Figure 3.) Hence, the cone with the pinhole at the vertex
is a degenerate solution that cannot be used to construct a wide field of view sensor with
a single viewpoint.
In spite of this fact, the cone has been used in wide-angle imaging systems several
times [Yagi and Kawato, 1990] [Yagi and Yachida, 1991] [Bogner, 1995]. In these imple-
[Figure 3 appears here. Labeled elements: effective viewpoint v = (0,0); effective pinhole p = (0,0) at the apex; the conical mirror; the image plane; a world point and its image.]
Figure 3: The conical mirror is a solution of the fixed viewpoint constraint equation. Since the
pinhole is located at the apex of the cone, this is a degenerate solution that cannot be used to
construct a wide field of view sensor with a single viewpoint. If the pinhole is moved away from
the apex of the cone (along the axis of the cone), the viewpoint is no longer a single point but
rather lies on a circular locus. If 2τ is the angle at the apex of the cone, the radius of the circular
locus of the viewpoint is e · cos 2τ, where e is the distance of the pinhole from the apex along the
axis of the cone. If τ > 60° the circular locus lies inside (below) the cone, if τ < 60° the circular
locus lies outside (above) the cone, and if τ = 60° the circular locus lies on the cone.
mentations the pinhole is placed some distance from the apex of the cone. It is easy to
show that in such cases the viewpoint is no longer a single point [Nalwa, 1996]. If the
pinhole lies on the axis of the cone at a distance e from the apex of the cone, the locus of
the effective viewpoint is a circle. The radius of the circle is easily seen to be:

   e · cos 2τ.                                                    (23)

If τ > 60° the circular locus lies inside (below) the cone, if τ < 60° the circular locus
lies outside (above) the cone, and if τ = 60° the circular locus lies on the cone. In some
applications such as robot navigation, the single viewpoint constraint is not vital. Conical
mirrors can be used to build practical sensors for such applications. See, for example, the
designs in [Yagi et al., 1994] and [Bogner, 1995].
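The size of the circular locus is easy to evaluate; the short Python helper below (ours, purely illustrative) implements Equations (22) and (23).

import numpy as np

def cone_viewpoint_locus_radius(e, k):
    # Radius e * cos(2 * tau) of the circular locus of viewpoints when the
    # pinhole lies a distance e from the apex of the cone of Equation (21).
    tau = np.arctan(np.sqrt(2.0 / (k - 2.0)))
    return e * np.cos(2.0 * tau)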
2.3.3 Spherical Mirrors
In Solution (17), if we set c = 0 and k > 0, we get the spherical mirror:

   r² + z² = k/2.                                                 (24)

Like the cone, this is a degenerate solution which cannot be used to construct a wide field of
view sensor with a single viewpoint. Since the viewpoint and pinhole coincide at the center
of the sphere, the observer would see itself and nothing else, as is illustrated in Figure 4.
The sphere has also been used to build wide field of view sensors several times
[Hong, 1991] [Bogner, 1995] [Murphy, 1995]. In these implementations, the pinhole is placed
outside the sphere and so there is no single effective viewpoint. The locus of the effective
viewpoint can be computed in a straightforward manner using a symbolic mathematics
package. Without loss of generality, suppose that the radius of the mirror is 1.0. The
first step is to compute the direction of the ray of light which would be reflected at the
mirror point (r, √(1 − r²)) and then pass through the pinhole. This computation is
[Figure 4 appears here. Labeled elements: the spherical mirror with the viewpoint v and the pinhole p coinciding at its center.]
Figure
4: The spherical mirror satisfies the fixed viewpoint constraint when the pinhole lies at
the center of the sphere. (Since the viewpoint also lies at the center of the sphere.) Like the
conical mirror, the sphere cannot actually be used to construct a wide field of view sensor with a
single viewpoint because the observer can only see itself; rays of light emitted from the center of
the sphere are reflected back at the surface of the sphere directly towards the center of the sphere.
then repeated for the neighboring mirror point (r + dr, z + dz). Next, the intersection of
these two rays is computed, and finally the limit dr → 0 is taken while constraining dz so
that (r + dr, z + dz) also lies on the mirror. The result of performing this derivation is a
closed-form expression for the locus of the effective viewpoint, parameterized by r. The locus
of the effective viewpoint is plotted for
various values of c in Figure 5. As can be seen, for all values of c the locus lies within
Figure 5: The locus of the effective viewpoint of a circular mirror of radius 1.0 (which is also
shown) plotted for four different values of the pinhole distance c ((a)-(d)). For all values of c, the
locus lies within the mirror and is of comparable size to the mirror.
the mirror and is of comparable size to it. Like multiple planes, spheres have also been
used to construct stereo rigs [Nayar, 1988] [Nene and Nayar, 1998], but as described before,
multiple viewpoints are a requirement for stereo.
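The derivation just described is easily reproduced numerically. The following Python sketch (ours; the step size dr and the example values are arbitrary) reflects the ray to the pinhole p = (0, c) at two neighbouring points of a unit-radius spherical mirror and intersects the two incoming rays; as dr → 0 the intersection tends to the effective viewpoint for that mirror point. It assumes c > 1, i.e. a pinhole outside the sphere.

import numpy as np

def incoming_ray(r, c):
    # Incoming ray (point, direction) that is reflected at the mirror point
    # (r, sqrt(1 - r^2)) of a unit sphere centred at the origin and then passes
    # through the pinhole p = (0, c).  The normal at the mirror point is the
    # point itself.
    m = np.array([r, np.sqrt(1.0 - r**2)])
    d_out = np.array([0.0, c]) - m
    d_out /= np.linalg.norm(d_out)
    d_in = d_out - 2.0 * np.dot(d_out, m) * m    # reflect in the normal
    return m, d_in

def locus_point(r, c, dr=1e-6):
    # Intersect the incoming rays of two neighbouring mirror points.
    (m1, d1), (m2, d2) = incoming_ray(r, c), incoming_ray(r + dr, c)
    t = np.linalg.solve(np.column_stack([d1, -d2]), m2 - m1)
    return m1 + t[0] * d1

print(locus_point(0.3, c=2.0))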
2.3.4 Ellipsoidal Mirrors
In Solution (17), when k > 0 and c > 0, we get the ellipsoidal mirror:

   (z − c/2)² / a_e² + r² / b_e² = 1,                             (25)

where:

   a_e = (1/2) √(2k + c²)   and   b_e = (1/2) √(2k).              (26)

The ellipsoid is the first solution that can actually be used to enhance the field of view of a
camera while retaining a single effective viewpoint. As shown in Figure 6, if the viewpoint
and pinhole are at the foci of the ellipsoid and the mirror is taken to be the section of the
ellipsoid that lies below the viewpoint (i.e. z < 0), the effective field of view is the entire
upper hemisphere z ≥ 0.
2.3.5 Hyperboloidal Mirrors
In Solution (16), when k > 2 and c > 0, we get the hyperboloidal mirror:

   (z − c/2)² / a_h² − r² / b_h² = 1,                             (27)

where:

   a_h = (c/2) √((k − 2)/k)   and   b_h = (c/2) √(2/k).           (28)

As seen in Figure 7, the hyperboloid also yields a realizable solution. The curvature of
the mirror and the field of view both increase with k. In the other direction, in the limit
k → 2, the hyperboloid flattens out to the planar mirror of Section 2.3.1.
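The defining property of both of these solutions, namely that the two foci coincide with the effective viewpoint z = 0 and the pinhole z = c, is easy to check numerically. The following Python fragment (ours; the function name and test values are illustrative assumptions) computes a, b, and the distance of the foci from the conic center z = c/2.

import numpy as np

def conic_parameters(c, k, mirror):
    # Axis parameters of the ellipsoidal and hyperboloidal solution mirrors.
    if mirror == "ellipsoid":                    # k > 0, c > 0
        a = 0.5 * np.sqrt(2.0 * k + c**2)
        b = 0.5 * np.sqrt(2.0 * k)
        focal = np.sqrt(a**2 - b**2)             # foci at z = c/2 +/- focal
    else:                                        # "hyperboloid", k > 2, c > 0
        a = (c / 2.0) * np.sqrt((k - 2.0) / k)
        b = (c / 2.0) * np.sqrt(2.0 / k)
        focal = np.sqrt(a**2 + b**2)
    return a, b, focal

print(conic_parameters(1.0, 11.0, "hyperboloid"))   # focal distance = 0.5 = c/2
print(conic_parameters(1.0, 0.11, "ellipsoid"))     # focal distance = 0.5 = c/2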
[Figure 6 appears here. Labeled elements: the ellipsoidal mirror with the viewpoint and the pinhole p = (0,c) at its two foci; the image plane; a world point and its image.]
Figure 6: The ellipsoidal mirror satisfies the fixed viewpoint constraint when the pinhole and
viewpoint are located at the two foci of the ellipsoid. If the ellipsoid is terminated by the horizontal
plane passing through the viewpoint, z = 0, the field of view is the entire upper hemisphere z > 0.
It is also possible to cut the ellipsoid with other planes passing through v, but it appears there is
little to be gained by doing so.
[Figure 7 appears here. Labeled elements: the hyperboloidal mirror with the effective viewpoint and the effective pinhole p = (0,c) at its two foci; the image plane; a world point and its image.]
Figure 7: The hyperboloidal mirror satisfies the fixed viewpoint constraint when the pinhole and
the viewpoint are located at the two foci of the hyperboloid. This solution does produce the
desired increase in field of view. The curvature of the mirror and hence the field of view increase
with k. In the limit k → 2, the hyperboloid flattens to the planar mirror of Section 2.3.1.
Rees [1970] appears to have been the first to use a hyperboloidal mirror with a perspective
lens to achieve a large field of view camera system with a single viewpoint. Later,
Yamazawa et al. [1993] [1995] also recognized that the hyperboloid is indeed a practical
solution and implemented a sensor designed for autonomous navigation.
2.4 The Orthographic Case: Paraboloidal Mirrors
Although the parabola is not a solution of the fixed viewpoint constraint equation for finite
values of c and k, it is a solution of Equation (16) in the limit that c → ∞ and k → ∞ with
c/k = h, a constant. Under these limiting conditions, Equation (16) tends to:

   z = (h² − r²) / (2h).                                          (29)

As shown in [Nayar, 1997b] and Figure 8, this limiting case corresponds to orthographic
projection. Moreover, in that setting the paraboloid does yield a practical omnidirectional
sensor with a number of advantageous properties [Nayar, 1997b].
One advantage of using an orthographic camera is that it can make the calibration of
the catadioptric system far easier. Calibration is simpler because, so long as the direction of
orthographic projection remains parallel to the axis of the paraboloid, any size of paraboloid
is a solution. The paraboloid constant and physical size of the mirror therefore do not need
to be determined during calibration. Moreover, the mirror can be translated arbitrarily
and still remain a solution. Implementation of the sensor is therefore also much easier
because the camera does not need to be positioned precisely. By the same token, the fact
that the mirror may be translated arbitrarily can be used to set up simple configurations
where the camera zooms in on part of the paraboloid mirror to achieve higher resolution
(with a reduced field of view), but without the complication of having to compensate for
the additional non-linear distortion caused by the rotation of the camera that would be
needed to achieve the same effect in the perspective case.
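To make the paraboloidal mapping concrete, the following Python sketch (ours, purely illustrative) computes the orthographic image coordinates of a unit viewing direction d for the mirror z = (h² − r²)/(2h) with its focus at the effective viewpoint: the ray t·d meets the mirror at t = h/(1 + d_z), and under orthographic projection along ẑ the image coordinates are simply the x and y components of that mirror point. Directions pointing straight down the axis (d_z → −1) are not imaged.

import numpy as np

def paraboloid_image_point(d, h):
    # Orthographic image coordinates of the unit viewing direction d for the
    # paraboloidal mirror z = (h^2 - r^2) / (2h) with its focus at the viewpoint.
    d = d / np.linalg.norm(d)
    t = h / (1.0 + d[2])          # distance along d to the mirror surface
    return t * d[0], t * d[1]

print(paraboloid_image_point(np.array([1.0, 0.0, 0.0]), h=0.1))   # (0.1, 0.0): the mirror rim

In particular, the entire upper hemisphere of directions (d_z ≥ 0) maps inside the circle of radius h, the rim of the mirror at z = 0.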
[Figure 8 appears here. Labeled elements: the paraboloidal mirror with the effective viewpoint at its focus; the direction of orthographic projection; the image plane; a world point and its image.]
Figure 8: Under orthographic projection, the only solution is a paraboloid with the effective
viewpoint at the focus of the paraboloid. One advantage of this solution is that the camera
can be translated arbitrarily and remain a solution. This property can greatly simplify sensor
calibration [Nayar, 1997b]. The assumption of orthographic projection is not as restrictive a
solution as it may sound since there are simple ways to convert a standard lens and camera from
perspective projection to orthographic projection. See, for example, [Nayar, 1997b].
3 Resolution of a Catadioptric Sensor
In this section, we assume that the conventional camera used in the catadioptric sensor
has a frontal image plane located at a distance u from the pinhole, and that the optical
axis of the camera is aligned with the axis of symmetry of the mirror. See Figure 9 for
an illustration of this scenario. Then, the definition of resolution that we will use is the
following. Consider an infinitesimal area dA on the image plane. If this infinitesimal pixel
images an infinitesimal solid angle dν of the world, the resolution of the sensor as a function
of the point on the image plane at the center of the infinitesimal area dA is:

   dA / dν.                                                       (31)

If ψ is the angle made between the optical axis and the line joining the pinhole to
the center of the infinitesimal area dA (see Figure 9), the solid angle subtended by the
infinitesimal area dA at the pinhole is:

   dω = dA cos³ψ / u².                                            (32)

Therefore, the resolution of the conventional camera is:

   dA / dω = u² / cos³ψ.                                          (33)

Then, the area of the mirror imaged by the infinitesimal area dA is:

   dS = dω (c − z)² / (cos φ cos²ψ) = dA (c − z)² cos ψ / (u² cos φ),   (34)

where φ is the angle between the normal to the mirror at (r, z) and the line joining the
pinhole to the mirror point (r, z). Since reflection at the mirror is specular, the solid angle
of the world imaged by the catadioptric camera is:

   dν = dS cos φ / (z² + r²) = dA (c − z)² cos ψ / (u² (z² + r²)).      (35)
[Figure 9 appears here. Labeled elements: viewpoint v = (0,0); pinhole p = (0,c); the image plane and focal plane; the optical axis; the pixel area dA; the solid angles dω and dν; the mirror area dS at the mirror point (r,z) with its normal; a world point and its image.]
Figure 9: The geometry used to derive the spatial resolution of a catadioptric sensor. Assuming
the conventional sensor has a frontal image plane which is located at a distance u from the pinhole
and the optical axis is aligned with the z-axis ẑ, the spatial resolution of the conventional sensor
is dA/dω = u²/cos³ψ. Therefore the area of the mirror imaged by the infinitesimal image plane area dA
is dS = (c − z)² cos ψ dA / (u² cos φ). So, the solid angle of the world imaged by the infinitesimal area dA on
the image plane is dν = (c − z)² cos ψ dA / (u² (z² + r²)). Hence, the spatial resolution of the catadioptric sensor
is dA/dν = [(z² + r²)/((c − z)² + r²)] · (u²/cos³ψ), since cos²ψ = (c − z)²/((c − z)² + r²).
Therefore, the resolution of the catadioptric camera is:

   dA / dν = u² (z² + r²) / ((c − z)² cos ψ).                      (36)

But, since:

   cos²ψ = (c − z)² / ((c − z)² + r²),                             (37)

we have:

   dA / dν = [ (z² + r²) / ((c − z)² + r²) ] · (u² / cos³ψ) = [ (z² + r²) / ((c − z)² + r²) ] · dA / dω.   (38)

Hence, the resolution of the catadioptric camera is the resolution of the conventional camera
used to construct it multiplied by a factor of:

   (z² + r²) / ((c − z)² + r²),                                    (39)

where (r, z) is the point on the mirror being imaged.
The first thing to note from Equation (38) is that for the planar mirror z = c/2,
the resolution of the catadioptric sensor is the same as that of the conventional sensor
used to construct it. This is as expected by symmetry. Secondly, note that the factor in
Equation (39) is the square of the distance from the point (r, z) to the effective viewpoint
divided by the square of the distance to the pinhole. Let d_v denote
the distance from the viewpoint to (r, z) and d_p the distance of (r, z) from the pinhole.
Then, the factor in Equation (39) is d_v² / d_p². For the ellipsoid, d_v + d_p = 2 a_e, a
constant. Therefore, for the ellipsoid the factor is:

   ( 2 a_e / d_p − 1 )²,                                           (40)

which increases as d_p decreases and d_v increases. For the hyperboloid, d_p − d_v = 2 a_h,
a constant. Therefore, for the hyperboloid the factor is:

   ( 1 − 2 a_h / d_p )²,                                           (41)

which increases as d_p increases and d_v increases. So, for both ellipsoids and hyperboloids,
the factor in Equation (39) increases with r. Hence, both hyperboloidal and ellipsoidal
catadioptric sensors constructed with a uniform resolution conventional camera will have
their highest resolution around the periphery, a useful property for certain applications
such as teleconferencing.
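The factor in Equation (39) is trivial to evaluate at a given mirror point; the short Python function below (ours, illustrative) returns it for a mirror point (r, z) and pinhole distance c.

import numpy as np

def resolution_factor(r, z, c):
    # Equation (39): ratio of catadioptric to conventional resolution at the
    # mirror point (r, z), i.e. d_v^2 / d_p^2.
    return (r**2 + z**2) / ((c - z)**2 + r**2)

print(resolution_factor(0.07, 0.5, c=1.0))   # planar mirror z = c/2: the factor is 1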
3.1 The Orthographic Case
The orthographic case is slightly simpler than the projective case and is illustrated in
Figure 10. Again, we assume that the image plane is frontal; i.e. perpendicular to the
direction of orthographic projection. Then, the resolution of the conventional orthographic
camera is:

   dA / dS' = M²,                                                  (42)

where the constant M is the linear magnification of the camera and dS' is the corresponding
area on a plane perpendicular to the direction of orthographic projection. If the solid angle dν
images the area dS of the mirror and φ is the angle between the mirror normal and the
direction of orthographic projection, we have:

   dA = M² cos φ dS.                                               (43)

Combining Equations (35), (42), and (43) yields:

   dA / dν = M² (r² + z²).                                         (44)

For the paraboloid z = (h² − r²)/(2h), the multiplicative factor r² + z² simplifies to:

   ( (h² + r²) / (2h) )².                                          (45)

Hence, as for both the ellipsoid and the hyperboloid, the resolution of paraboloid based
catadioptric sensors increases with r, the distance from the center of the mirror.
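The simplification used in Equation (45) can be verified numerically; the following few lines of Python (ours) check that r² + z² equals ((h² + r²)/(2h))² along the paraboloid.

import numpy as np

h = 0.1
r = np.linspace(0.0, h, 11)
z = (h**2 - r**2) / (2.0 * h)
print(np.allclose(r**2 + z**2, ((h**2 + r**2) / (2.0 * h))**2))   # True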
[Figure 10 appears here. Labeled elements: viewpoint v = (0,0); the direction of orthographic projection; the image plane and camera boundary; the pixel area dA; the imaged solid angles; the mirror area dS at the mirror point (r,z) with its normal; a world point and its image.]
Figure 10: The geometry used to derive the spatial resolution of a catadioptric sensor in the orthographic
case. Again, assuming that the image plane is frontal and the conventional orthographic
camera has a linear magnification M, its spatial resolution is M². The area of mirror dS imaged by
the image plane area dA satisfies dA = M² cos φ dS, where dS is the area of the mirror imaged and φ is the angle between the mirror normal
and the direction of orthographic projection. Combining this information with Equation (35)
yields the spatial resolution of the orthographic catadioptric sensor as dA/dν = M² (r² + z²).
4 Defocus Blur of a Catadioptric Sensor
In addition to the normal causes present in conventional dioptric systems, such as diffraction
and lens aberrations, two factors combine to cause defocus blur in catadioptric sensors.
They are: (1) the finite size of the lens aperture, and (2) the curvature of the mirror. To
analyze how these two factors cause defocus blur, we first consider a fixed point in the world
and a fixed point in the lens aperture. We then find the point on the mirror which reflects
a ray of light from the world point through that lens point. Next, we compute where on
the image plane this mirror point is imaged. By considering the locus of imaged mirror
points as the lens point varies, we can compute the area of the image plane onto which a
fixed world point is imaged. In Section 4.1, we derive the constraints on the mirror point
at which the light is reflected, and show how it can be projected onto the image plane. In
Section 4.2, we extend the analysis to the orthographic case. Finally, in Section 4.3, we
present numerical results for hyperboloid, ellipsoid, and paraboloid mirrors.
4.1 Analysis of Defocus Blur
To analyze defocus blur, we need to work in 3-D. We use the 3-D Cartesian frame (v, x̂, ŷ, ẑ),
where v is the location of the effective viewpoint, p is the location of the effective pinhole, ẑ
is a unit vector in the direction from v to p, the effective pinhole is located at a distance c from the
effective viewpoint, and the vectors x̂ and ŷ are orthogonal unit vectors in the plane z = 0.
As in Section 3, we also assume that the conventional camera used in the catadioptric sensor
has a frontal image plane located at a distance u from the pinhole and that the optical
axis of the camera is aligned with the z-axis. In addition to the previous assumptions, we
assume that the effective pinhole of the lens is located at the center of the lens, and that
the lens has a circular aperture. See Figure 11 for an illustration of this configuration.
[Figure 11 appears here. Labeled elements: viewpoint v = (0,0,0); pinhole p = (0,0,c); the lens aperture in the focal plane z = c with lens point l = (d cos λ, d sin λ, c); the image plane z = c + u; the focused plane z = c − v; the mirror with its normal n; the principal ray; the world point w; the blur region.]
Figure 11: The geometry used to analyze the defocus blur. We work in the 3-D Cartesian
frame (v, x̂, ŷ, ẑ), where x̂ and ŷ are orthogonal unit vectors in the plane z = 0. In addition to
the assumptions of Section 3, we also assume that the effective pinhole is located at the center
of the lens and that the lens has a circular aperture. If a ray of light from the world point w
is reflected at the mirror point m1 = (x1, y1, z1) and then passes through the
lens point l = (d cos λ, d sin λ, c), there are three constraints on m1: (1) it must lie on the mirror,
(2) the angle of incidence must equal the angle of reflection, and (3) the normal n to the mirror
at m1 and the two rays must be coplanar.
Consider a point m = (x, y, z) on the mirror and a point w = (l/‖m‖) m in the
world, where the scalar l > ‖m‖ is the distance of w from the effective viewpoint. Then, since the hyperboloid mirror satisfies the fixed viewpoint
constraint, a ray of light from w which is reflected by the mirror at m passes directly
through the center of the lens (i.e. the effective pinhole.) This ray of light is known as the
principal ray [Hecht and Zajac, 1974]. Next, suppose a ray of light from the world point w
is reflected at the point m1 = (x1, y1, z1) on the mirror and then passes through the lens
aperture point l = (d cos λ, d sin λ, c). In general, this ray of light will not be imaged at
the same point on the image plane as the principal ray. When this happens there is defocus
blur. The locus of the intersection of the incoming rays through l and the image plane as l
varies over the lens aperture is known as the blur region or region of confusion [Hecht and
Zajac, 1974]. For an ideal thin lens in isolation, the blur region is circular and so is often
referred to as the blur circle [Hecht and Zajac, 1974].
If we know the points m1 and l, we can find the point on the image plane where the
ray of light through these points is imaged. First, the line through m1 in the direction of
l − m1 is extended to intersect the focused plane. By the thin lens law [Hecht and Zajac, 1974]
the focused plane is:

   z = c − v,   where   1/v + 1/u = 1/f,                           (46)

f is the focal length of the lens, and u is the distance from the focal plane (the plane of the
lens, z = c) to the image plane. Since all points on the focused plane are perfectly focused,
the point of intersection on the focused plane can be mapped onto the image plane using
perspective projection through the pinhole. Hence, if (q_x, q_y, c − v) denotes this intersection
point, the x and y coordinates of the intersection of the ray through l and the image plane are:

   ( −(u/v) q_x , −(u/v) q_y ),                                    (47)

and the z coordinate is the z coordinate of the image plane, c + u.
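The construction just described translates directly into code. The following Python sketch (ours; the function name is hypothetical and u > f is assumed) maps a mirror point m1 and a lens point l, with l lying in the lens plane z = c, to the corresponding image plane point; when l is taken to be the pinhole itself it reproduces the image of the principal ray.

import numpy as np

def image_point(m1, l, c, u, f):
    # Extend the ray m1 -> l to the focused plane z = c - v (thin lens law
    # 1/u + 1/v = 1/f) and project that intersection through the pinhole
    # (0, 0, c) onto the image plane z = c + u.
    v = 1.0 / (1.0 / f - 1.0 / u)
    t = ((c - v) - m1[2]) / (l[2] - m1[2])     # parameter along the ray m1 -> l
    q = m1 + t * (l - m1)                      # intersection with the focused plane
    return np.array([-u / v * q[0], -u / v * q[1], c + u])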
Given the lens point l = (d cos λ, d sin λ, c) and the world point w = (l/‖m‖) m,
there are three constraints on the point m1 = (x1, y1, z1). First, it must lie on the mirror
and so (for the hyperboloid) we have:

   (z1 − c/2)² − (x1² + y1²)(k/2 − 1) = (c²/4)(k − 2)/k.           (48)

Secondly, the incident ray (w − m1), the reflected ray (l − m1), and the normal to the mirror
at m1 must lie in the same plane. The normal to the mirror at m1 lies in the direction:

   n = ( −x1 (k − 2), −y1 (k − 2), 2 z1 − c )                      (49)

for the hyperboloid. Hence, the second constraint is:

   n · [ (w − m1) × (l − m1) ] = 0.                                (50)

Finally, the angle of incidence must equal the angle of reflection, and so the third constraint
on the point m1 is that (w − m1) and (l − m1) make equal angles with n:

   [ (w − m1) · n ] ‖l − m1‖ = [ (l − m1) · n ] ‖w − m1‖.          (51)

These three constraints on m1 can all be written as multivariate polynomials in x1, y1, and z1:
Equation (48) and Equation (50) are both of order 2, and Equation (51) is of order 5. We were
unable to find a closed form solution to these three equations (Equation (51) has 25 terms
in general and so it is probable that none exists) but we did investigate numerical solutions.
Before we present the results, we briefly describe the orthographic case.
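To indicate how such a numerical solution might be organized, the sketch below (ours; it uses SciPy's fsolve, and all numerical values are chosen purely for illustration) solves Equations (48), (50), and (51) for m1, starting from the principal-ray mirror point.

import numpy as np
from scipy.optimize import fsolve

def constraints(m1, w, l, c, k):
    x1, y1, z1 = m1
    n = np.array([-x1 * (k - 2.0), -y1 * (k - 2.0), 2.0 * z1 - c])   # mirror normal
    a, b = w - m1, l - m1                                            # incident, reflected
    g1 = (z1 - c / 2.0)**2 - (x1**2 + y1**2) * (k / 2.0 - 1.0) \
         - (c**2 / 4.0) * (k - 2.0) / k                              # on the mirror, (48)
    g2 = np.dot(n, np.cross(a, b))                                   # coplanarity, (50)
    g3 = np.dot(a, n) * np.linalg.norm(b) - np.dot(b, n) * np.linalg.norm(a)   # equal angles, (51)
    return [g1, g2, g3]

c, k = 1.0, 11.0
r0 = 0.05
z0 = c / 2.0 - np.sqrt((c**2 / 4.0) * (k - 2.0) / k + r0**2 * (k / 2.0 - 1.0))
m = np.array([r0, 0.0, z0])                     # principal-ray mirror point
w = 5.0 * m / np.linalg.norm(m)                 # world point 5 m from the viewpoint
l = np.array([0.01 * np.cos(0.3), 0.01 * np.sin(0.3), c])   # a point on the 10 mm aperture
m1 = fsolve(constraints, m, args=(w, l, c, k))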
4.2 Defocus Blur in the Orthographic Case
The orthographic case is slightly different, as is illustrated in Figure 12. One way to convert
a thin lens to produce orthographic projection is to place an aperture at the focal point
behind the lens [Nayar, 1997b] . Then, the only rays of light that reach the image plane are
those that are (approximately) parallel to the optical axis. For the orthographic case, there
[Figure 12 appears here. Labeled elements: viewpoint v = (0,0,0); pinhole p = (0,0,c); the lens aperture with lens point l = (d cos λ, d sin λ, c); the focal aperture at the rear focus f = (0,0,c+f); the image plane z = c + u; the focused plane z = c − v; the mirror with its normal n; the principal ray; the world point w; the blur region.]
Figure 12: The geometry used to analyze defocus blur in the orthographic case. One way to
create orthographic projection is to add a (circular) aperture at the rear focal point (the one
behind the lens) [Nayar, 1997b]. Then, the only rays of light that reach the image plane are
those which are (approximately) parallel to the optical axis. The analysis of defocus blur is then
essentially the same as in the perspective case except that we need to check whether each ray of
light passes through this aperture when computing the blur region.
is therefore only one difference to the analysis. When estimating the blur region, we need
to check that the ray of light actually passes through the (circular) aperture at the rear
focal point. This task is straightforward. The intersection of the ray of light with the rear
focal plane is computed using linear interpolation of the lens point and the point where the
mirror point is imaged on the image plane. It is then checked whether this point lies close
enough to the optical axis.
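In code, the extra aperture test is a short helper; the Python function below (ours, with a hypothetical name) follows the interpolation just described, with the lens point l in the lens plane z = c and img the point where the mirror point is imaged on the image plane z = c + u.

import numpy as np

def passes_rear_aperture(l, img, c, f, aperture_radius):
    # Evaluate the post-lens ray (from the lens point l to its image point img)
    # at the rear focal plane z = c + f and test the distance from the optical axis.
    s = f / (img[2] - l[2])              # fraction of the way from l to img
    p = l + s * (img - l)
    return np.hypot(p[0], p[1]) <= aperture_radius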
4.3 Numerical Results
In our numerical experiments we set the distance between the effective viewpoint and the
pinhole to be c = 1 meter, and the distance from the viewpoint to the world point w to be
l = 5 meters. For the hyperboloidal and ellipsoidal mirrors, we set the radius of the lens
aperture to be 10 mm. For the paraboloidal mirror, the limiting aperture is the one at the
focal point. We chose the size of this aperture so that it lets through exactly the same rays
of light that the front 10 mm one would for a point 1 meter away on the optical axis. We
assumed the focal length to be 10 cm and therefore set the aperture to be 1 mm. With
these settings, the F-stop for the paraboloidal mirror is fixed; the results for
the other two mirrors are independent of the focal length, and hence the F-stop.
To allow the three mirror shapes to be compared on an equal basis, we used values
for k and h that correspond to the same mirror radii. The radius of the mirror is taken
to be the radius of the mirror cut off by the plane z = 0; i.e. the mirrors are all taken to
image the entire upper hemisphere. Some values of k and h are plotted in Table 1 against
the corresponding mirror radius, for c = 1 meter.
Table
1: The mirror radius as a function of the mirror parameters (k and h) for c = 1 meter.
Mirror Radius Hyperboloid (k) Ellipsoid (k) Paraboloid (h)
4.3.1 Area of the Blur Region
In
Figures
13-15, we plot the area of the blur region (on the ordinate) against the distance
to the focused plane v (on the abscissa) for the hyperboloidal, ellipsoidal, and paraboloidal
mirrors. In each figure, we plot separate curves for different world point directions. The
angles are measured in degrees from the plane z = 0, and so the curve at 90° corresponds
to the (impossible) world point directly upwards in the direction of the z-axis. For the
hyperboloid we set k = 11.0, for the ellipsoid k = 0.11, and for the paraboloid h = 0.1.
As can be seen in Table 1, these settings correspond to a mirror with radius 10 cm.
Qualitatively similar results were obtained for the other radii. Section 4.3.3 contains related
results for the other radii.
The smaller the area of the blur region, the better focused the image will be. We
see from the figures that the area never reaches exactly zero, and so an image formed using
these catadioptric sensors can never be perfectly focused. However, the minimum area is
very small, and in practice there is no problem focusing the image for a single world point.
Moreover, it is possible to use additional corrective lenses to compensate for most of this
effect [Hecht and Zajac, 1974].
Note that the distance at which the image of the world point will be best focused (i.e.
somewhere in the range 0.9-1.15 meters) is much less than the distance from the pinhole
to the world point (approximately 1 meter from the pinhole to the mirror plus 5 meters
from the mirror to the world point). The reason for this effect is that the mirror is curved.
For the hyperboloidal and paraboloidal mirrors which are convex, the curvature tends to
increase the divergence of rays coming from the world point. For these rays to be converged
and the image focused, a larger distance to the image plane u is needed. A larger value of
u corresponds to a smaller value of v, the distance to the focused plane. For the concave
ellipsoidal mirror, the mirror converges the rays to the extent that a virtual image is formed
between the mirror and the lens. The lens must be focused on this virtual image.
4.3.2 Shape of the Blur Region
Next, we provide an explanation of the fact that the area of the blur region never exactly
reaches zero. For a conventional lens, the blur region is a circle. In this case, as the focus
setting is adjusted to focus the lens, all points on the blur circle move towards the center
of the blur circle at a rate which is proportional to their distance from the center of the
blur circle. Hence, the blur circle steadily shrinks until the blur region has area 0 and the
lens is perfectly focused. If the focus setting is moved further in the same direction, the
blur circle grows again as all the points on it move away from the center.
For a catadioptric sensor using a curved mirror, the blur region is only approximately
a circle for all three of the mirror shapes. Moreover, as the image is focused, the speed
with which points move towards the center of this circle is dependent on their position in a
much more complex way than for a single lens. The behavior is qualitatively the same for
all of the mirrors and is illustrated in Figure 16. From Figure 16(a) to Figure 16(e), the
Figure 13: The area of the blur region plotted against the distance to the focused plane v,
for the hyperboloidal mirror with k = 11.0. In this example, we have c = 1 meter, the radius of
the lens aperture is 10 millimeters, and the distance from the viewpoint to the world point is
l = 5 meters. We plot curves for 7 different world points, at 7 different angles from the plane z = 0.
The area of the blur region never becomes exactly zero and so the image can never be perfectly
focused. However, the area does become very small and so focusing on a single point is not a
problem in practice. Note that the distance at which the image will be best focused (around 1.0-
1.15 meters) is much less than the distance from the pinhole to the world point (approximately
1 meter from the pinhole to the mirror plus 5 meters from the mirror to the world point.) The
reason is that the mirror is convex and so tends to increase the divergence of rays of light.
Figure 14: The area of the blur region plotted against the distance to the focused plane v,
for the ellipsoidal mirror with k = 0.11. The other settings are the same as for the hyperboloidal
mirror in Figure 13. Again, the distance to the focused plane is less than the distance to the point
in the world; however, the reason is different. For the concave ellipsoidal mirror, a virtual image
is formed between the mirror and the lens. The lens needs to focus on this virtual image.
Figure 15: The area of the blur region plotted against the distance to the focused plane v,
for the paraboloidal mirror with h = 0.1. The settings are the same as for the hyperboloidal
mirror, except the size of the apertures. The limiting aperture is the one at the focal point. It
is chosen so that it lets through exactly the same rays of light that the 10 mm one does for the
hyperboloidal mirror for a point 1 meter away on the optical axis. The results are qualitatively
very similar to the hyperboloidal mirror.
[Figure 16 appears here, with six panels: (a) Hyperboloid 1082 mm; (b) Hyperboloid 1083.25 mm; (c) Ellipsoid 1003.75 mm; (d) Ellipsoid 1004 mm; (e) Paraboloid 1068.63 mm; (f) Paraboloid 1069 mm.]
Figure
16: The variation in the shape of the blur region as the focus setting is varied. Note that
all of the blur regions in this figure are relatively well focused. Also, note that the scales of the
six panels are all different.
blur region gets steadily smaller, and the image becomes more focused. In Figure 16(f), the
focus is beginning to get worse again. In Figure 16(a) the blur region is roughly a circle,
however as the focus gets better, the circle begins to overlap itself, as shown in Figure 16(b).
The degree of overlap increases in Figures 16(c) and(d). (These 2 figures are for the ellipse
and are shown to illustrate how similar the blur regions are for the 3 mirror shapes. The
only difference is that the region has been reflected about a vertical axis since the ellipse
is a concave mirror.) In Figure 16(e), the image is as well focused as possible and the blur
region completely overlaps itself. In Figure 16(f), the overlapping has begun to unwind.
Finally, in Figure 17, we illustrate how the blur regions vary with the angle of the
point in the world, for a fixed focal setting. In this figure, which displays results for the
hyperboloid with k = 11.0, the focal setting is chosen so that the point at 45° is in focus.
As can be seen, for points in the other directions the blur region can be quite large and so
points in those directions are not focused. This effect, known as field curvature [Hecht and
Zajac, 1974], is studied in more detail in the following section.
4.3.3 Focal Settings
Finally, we investigated how the focus setting that minimizes the area of the blur region
(see
Figures
changes with the angle ' which the world point w makes with the plane
The results are presented in Figures 18-20. As before, we set assumed
the radius of the lens aperture to be 10 millimeters (1 millimeter for the paraboloid), and
fixed the world point to be l = 5 meters from the effective viewpoint. We see that the best
focus setting varies considerably across the mirror for all of the mirror shapes. Moreover,
the variation is roughly comparable for all three mirrors (of equal
In practice, these results, often referred to as "field curvature" [Hecht and Zajac,
1974], mean that it can sometimes be difficult to focus the entire scene at the same time.
[Figure 17 appears here, with three panels showing the blur regions for world points at different angles; each panel is labeled with its focus setting, e.g. (a) 1018.8 mm.]
Figure 17: An example of the variation in the blur region as a function of the angle of the point
in the world. In this example for the hyperboloid with k = 11.0, the point at 45° is in focus, but
the points in the other directions are not.
[Figure 18 appears here; the four curves correspond to Hyperboloid k = 6.10, k = 11.0, k = 21.0, and k = 51.0.]
Figure 18: The focus setting which minimizes the area of the blur region in Figure 13 plotted
against the angle θ which the world point w makes with the plane z = 0. Four separate curves
are plotted for different values of the parameter k. See Table 1 for the corresponding radii of
the mirrors. We see that the best focus setting for w varies considerably across the mirror. In
practice, these results mean that it can sometimes be difficult to focus the entire scene at the
same time, unless additional lenses are used to compensate for the field curvature
[Hecht and Zajac, 1974]. Also, note that this effect becomes less important as k increases and the
mirror gets smaller.
[Figure 19 appears here; the curves correspond to Ellipsoid k = 0.24, k = 0.11, and k = 0.02.]
Figure 19: The focus setting which minimizes the area of the blur region in Figure 14 plotted
against the angle θ which the world point w makes with the plane z = 0. Separate curves
are plotted for different values of the parameter k. See Table 1 for the corresponding radii of
the mirrors. The field curvature for the ellipsoidal mirror is roughly comparable to that for the
hyperboloidal mirror.
[Figure 20 appears here; the four curves correspond to Paraboloid h = 0.20, h = 0.10, h = 0.05, and h = 0.02.]
Figure 20: The focus setting which minimizes the area of the blur region in Figure 15 plotted
against the angle θ which the world point w makes with the plane z = 0. Four separate curves
are plotted for different values of the parameter h. See Table 1 for the corresponding radii of the
mirrors. The field curvature for the paraboloidal mirror is roughly comparable to that for the
hyperboloidal and ellipsoidal mirrors.
Either the center of the mirror is well focused or the points around the periphery are
focused, but not both. Fortunately, it is possible to introduce additional lenses which
compensate for the field curvature [Hecht and Zajac, 1974]. (See the discussion at the end
of this paper for more details.) Also note that as the mirrors become smaller in size (k
increases for the hyperboloid, k decreases for ellipsoid, and h decreases for the paraboloid)
the effect becomes significantly less pronounced.
In this paper, we have studied three design criteria for catadioptric sensors: (1) the shape
of the mirrors, (2) the resolution of the cameras, and (3) the focus settings of the cameras.
In particular, we have derived the complete class of mirrors that can be used with a single
camera to give a single viewpoint, found an expression for the resolution of a catadioptric
sensor in terms of the resolution of the conventional camera(s) used to construct it, and
presented detailed analysis of the defocus blur caused by the use of a curved mirror.
There are a number of possible uses for the (largely theoretical) results presented
in this paper. Throughout the paper we have touched on many of their uses by a sensor
designer. The results are also of interest to a user of a catadioptric sensor. We now briefly
mention a few of the possible uses, both for sensor designers and users:
• For applications where a fixed viewpoint is not a requirement, we have derived the
locus of the viewpoint for several mirror shapes. The shape and size of these loci may
be useful for the user of such a sensor requiring the exact details of the geometry. For
example, if the sensor is being used in a stereo rig, the epipolar geometry needs to
be derived precisely.
• The expression for the resolution of the sensor could be used by someone applying
image processing techniques to the output of the sensor. For example, many image
enhancement algorithms require knowledge of the solid angles of the world integrated
over by each pixel in the sensor.
• Knowing the resolution function also allows a sensor designer to design a CCD with
non-uniform resolution to get an imaging system with a known (for example, uniform)
resolution.
• The defocus analysis could be important to the user of a catadioptric sensor who
wishes to apply various image processing techniques, from deblurring to restoration
and super-resolution.
• Knowing the defocus function also allows a sensor designer to compensate for the
field curvature introduced by the use of a curved mirror. One method consists of
introducing optical elements behind the imaging lens. For instance, a plano-concave
lens placed flush with the CCD permits a good deal of field curvature correction.
(Light rays at the periphery of the image travel through a greater distance within the
plano-concave lens.) Another method is to use a thick meniscus lens right next to
the imaging lens (away from the CCD). The same effect is achieved. In both cases,
the exact materials and curvatures of the lens surfaces are optimized using numerical
simulations. Optical design is almost always done this way as analytical methods are
far too cumbersome. See [Born and Wolf, 1965] for more details.
We have described a large number of mirror shapes in this paper, including cones,
spheres, planes, hyperboloids, ellipsoids, and paraboloids. Practical catadioptric sensors
have been constructed using most of these mirror shapes. See, for example, [Rees, 1970],
[Charles et al., 1987] , [Nayar, 1988], [Yagi and Kawato, 1990], [Hong, 1991], [Goshtasby and
Gruver, 1993], [Yamazawa et al., 1993], [Bogner, 1995], [Nalwa, 1996], and [Nayar, 1997a].
As described in [Chahl and Srinivassan, 1997], even more mirror shapes are possible if we
relax the single-viewpoint constraint. Which then is the "best" mirror shape to use?
Unfortunately, there is no simple answer to this question. If the application requires
exact perspective projection, there are three alternatives: (1) the ellipsoid, (2) the
hyperboloid, and (3) the paraboloid. The major limitation of the ellipsoid is that only a
hemisphere can be imaged. As far as the choice between the paraboloid and the hyperboloid
goes, using an orthographic imaging system does require extra effort on behalf of the
optical designer, but may also make construction and calibration of the entire catadioptric
system easier, as discussed in Section 2.4.
If the application at hand does not require a single viewpoint, many other practical
issues may become more important, such as the size of the sensor, its resolution variation
across the field of view, and the ease of mapping between coordinate systems. In this paper
we have restricted attention to single-viewpoint systems. The reader is referred to other
papers proposing catadioptric sensors, such as [Yagi and Kawato, 1990] , [Yagi and Yachida,
1991], [Hong, 1991], [Bogner, 1995], [Murphy, 1995], and [Chahl and Srinivassan, 1997], for
discussion of the practical merits of catadioptric systems with extended viewpoints.
Acknowledgements
The research described in this paper was conducted while the first author was a Ph.D.
student in the Department of Computer Science at Columbia University in the City of
New York. This work was supported in parts by the VSAM effort of DARPA's Image
Understanding Program and a MURI grant under ONR contract No. N00014-97-1-0553.
The authors would also like to thank the anonymous reviewers for their comments which
have greatly improved the paper.
--R
The plenoptic function and elements of early vision.
A theory of catadioptric image formation.
Introduction to panoramic imaging.
Principles of Optics.
Reflective surfaces for panoramic imaging.
How to build and use an all-sky camera
A natural classification of curves and surfaces with reflection properties.
The lumigraph.
Design of a single-lens stereo camera system
Image based homing.
A stereo viewer based on a single camera with view-control mechanism
Application of panoramic imaging to a teleoperated lunar rover.
A true omnidirectional viewer.
Catadioptric image formation.
Recovering depth using a single camera and two specular spheres.
Catadioptric omnidirectional camera.
Omnidirectional video camera.
Stereo with mirrors.
Generation of perspective and panoramic video from omnidirectional video.
Panoramic television viewing system.
Panoramic scene analysis with conic projection.
Omnidirectional imaging with hyperboloidal projection.
Obstacle avoidance with omnidirectional image sensor HyperOmni Vision.
--TR
The lumigraph
Catadioptric Omnidirectional Camera
A Theory of Catadioptric Image Formation
Stereo with Mirrors
--CTR
Tom Svoboda , Tom Pajdla, Epipolar Geometry for Central Catadioptric Cameras, International Journal of Computer Vision, v.49 n.1, p.23-37, August 2002
Ko Nishino , Shree K. Nayar, Corneal Imaging System: Environment from Eyes, International Journal of Computer Vision, v.70 n.1, p.23-40, October 2006
Cdric Demonceaux , Pascal Vasseur, Markov random fields for catadioptric image processing, Pattern Recognition Letters, v.27 n.16, p.1957-1967, December 2006
Yasushi Yagi , Wataru Nishi , Nels Benson , Masahiko Yachida, Rolling and swaying motion estimation for a mobile robot by using omnidirectional optical flows, Machine Vision and Applications, v.14 n.2, p.112-120, June
Xianghua Ying , Zhanyi Hu, Catadioptric Camera Calibration Using Geometric Invariants, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.10, p.1260-1271, October 2004
Ko Nishino , Shree K. Nayar, Eyes for relighting, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
C. Lpez-Franco , E. Bayro-Corrochano, Omnidirectional Robot Vision Using Conformal Geometric Computing, Journal of Mathematical Imaging and Vision, v.26 n.3, p.243-260, December 2006
Christopher Geyer , Kostas Daniilidis, Catadioptric Projective Geometry, International Journal of Computer Vision, v.45 n.3, p.223-243, December 2001
Michael D. Grossberg , Shree K. Nayar, The Raxel Imaging Model and Ray-Based Calibration, International Journal of Computer Vision, v.61 n.2, p.119-137, February 2005
Steven M. Seitz , Jiwon Kim, The Space of All Stereo Images, International Journal of Computer Vision, v.48 n.1, p.21-38, June 2002
Rahul Swaminathan , Michael D. Grossberg , Shree K. Nayar, Non-Single Viewpoint Catadioptric Cameras: Geometry and Analysis, International Journal of Computer Vision, v.66 n.3, p.211-229, March 2006
Christopher Geyer , Kostas Daniilidis, Paracatadioptric Camera Calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.5, p.687-695, May 2002
Yasushi Yagi , Kousuke Imai , Kentaro Tsuji , Masahiko Yachida, Iconic Memory-Based Omnidirectional Route Panorama Navigation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.1, p.78-87, January 2005
Constantin A. Rothkopf , Jeff B. Pelz, Head movement estimation for wearable eye tracker, Proceedings of the 2004 symposium on Eye tracking research & applications, p.123-130, March 22-24, 2004, San Antonio, Texas | panoramic imaging;defocus blur;sensor resolution;omnidirectional imaging;sensor design;image formation |
339377 | Trust Region Algorithms and Timestep Selection. | Unconstrained optimization problems are closely related to systems of ordinary differential equations (ODEs) with gradient structure. In this work, we prove results that apply to both areas. We analyze the convergence properties of a trust region, or Levenberg--Marquardt, algorithm for optimization. The algorithm may also be regarded as a linearized implicit Euler method with adaptive timestep for gradient ODEs. From the optimization viewpoint, the algorithm is driven directly by the Levenberg--Marquardt parameter rather than the trust region radius. This approach is discussed, for example, in [R. Fletcher, Practical Methods of Optimization, 2nd ed., John Wiley, New York, 1987], but no convergence theory is developed. We give a rigorous error analysis for the algorithm, establishing global convergence and an unusual, extremely rapid, type of superlinear convergence. The precise form of superlinear convergence is exhibited---the ratio of successive displacements from the limit point is bounded above and below by geometrically decreasing sequences. We also show how an inexpensive change to the algorithm leads to quadratic convergence. From the ODE viewpoint, this work contributes to the theory of gradient stability by presenting an algorithm that reproduces the correct global dynamics and gives very rapid local convergence to a stable steady state. | Introduction
. This work involves ideas from two areas of numerical anal-
ysis: optimization and the numerical solution of ordinary differential equations
(odes). We begin by pointing out a connection between the underlying mathematical
problems.
Given a smooth function f : R^m \to R, an algorithm for unconstrained optimization seeks to find a local minimizer; that is, a point x^* such that f(x^*) \le f(x) for all x in some neighborhood of x^*. The following standard result gives necessary conditions and sufficient conditions for x^* to be a local minimizer. Proofs may be found, for example, in [5, 6, 7].
Theorem 1.1. The conditions \nabla f(x^*) = 0 and \nabla^2 f(x^*) positive semi-definite are necessary for x^* to be a local minimizer, whilst the conditions \nabla f(x^*) = 0 and \nabla^2 f(x^*) positive definite are sufficient.
On the other hand, given a smooth function F : R^m \to R^m, we may consider the ODE system
dx/dt = F(x(t)), \quad x(0) = x_0.   (1.1)
Now suppose that F in (1.1) has the form F(x) \equiv -\nabla f(x). By the Chain Rule, if x(t) solves (1.1) then
\frac{d}{dt} f(x(t)) = \nabla f(x(t))^T \frac{dx}{dt} = -\|\nabla f(x(t))\|^2 \le 0.   (1.2)
From (1.2) we see that along any solution of the ODE the quantity f(x(t)) decreases as t increases. Moreover, it strictly decreases unless \nabla f(x(t)) = 0. Hence, solving the ODE up to a large value of t may be regarded as an attempt to compute a local minimum of f. The conditions given in Theorem 1.1 may now be interpreted as necessary conditions and sufficient conditions for x^* to be a linearly stable fixed point of the ODE.
If it is possible to write F(x) in the form \Gammarf (x) then the ode (1.1) is said
to have a gradient structure; see, for example, [19]. Several authors have noted the
Department of Mathematics, University of Strathclyde, Glasgow, G1 1XH, UK. Supported by
the Engineering and Physical Sciences Research Council of the UK under grant GR/K80228. This
manuscript appears as University of Strathclyde Mathematics Research Report 3 (1998).
connection between optimization and gradient odes. Schropp [18] examined fixed
timestep Runge-Kutta (rk) methods from a dynamical systems viewpoint, and
found conditions under which the numerical solution of the gradient ode converges
to a stationary point of f . Schropp also gave numerical evidence to suggest that
there are certain problem classes for which the ode formulation is preferable to
the optimization analogue. The book [11] shows that many problems expressible in
optimization terms can also be written as odes, often with gradient structure. Chu
has exploited this idea in order to obtain theoretical results and numerical methods
for particular problems; see [3] for a review. In the optimization literature, the
gradient ode connection has also been mentioned; see, for example, the discussion
on unconstrained optimization in [17]. Related work [1, 2] has looked at the use of
ode methods to solve systems of nonlinear algebraic equations.
The study of numerical methods applied to odes in gradient form has lead to
the concept of gradient stability [14, 20, 21]. The gradient structure arises in many
application areas, and provides a very useful framework for analysis of ode algo-
rithms. (In contrast with the classical linear and strictly contractive test problems,
gradient systems allow multiple equilibria.)
In [14, 20] positive results were proved about the ability of rk methods to preserve
the gradient structure, and hence to capture the correct long term dynamics,
for small, fixed timesteps. Most of these results require an extra assumption on
F that imposes either a one-sided Lipschitz condition or a form of dissipativity.
Adaptive rk methods, that is, methods that vary the timestep dynamically, were
analyzed in [21]. Here the authors considered a very special class of rk formula
pairs and showed that a traditional error control approach forces good behavior for
sufficiently small values of the error tolerance, independently of the initial data.
This would be regarded as a global convergence proof in the optimization litera-
ture. These results require a one-sided Lipschitz condition on F. A similar result
was proved in [12] for general ode methods that successfully control the local error-
per-unit-step. In this case the error tolerance must be chosen in a way that depends
on the initial data.
The work presented here has two main contributions.
ffl First, we note a close similarity between a trust region, or Levenberg-
Marquadt, algorithm for optimization and an adaptive, linearized, implicit
Euler method for gradient odes. We analyze the optimization algorithm
and establish a new result about its convergence properties. This also adds
to the theory of gradient stability for odes. Under a mild assumption on
f we show that the method is globally convergent and enjoys a very rapid
form of superlinear convergence. (The notion of the rate of convergence
to equilibrium is widely studied in optimization, but appears not to have
been considered in the gradient ode context. It is easily seen that any fixed
timestep rk formula that approaches equilibrium will do so at a generically
linear rate, in terms of the timestep number.)
ffl Second, we use the ideas from the gradient analysis to construct a timestepping
method for general odes that gives rapid superlinear local convergence
to a stable fixed point.
The presentation is organized as follows. In the next section we introduce New-
ton's Method and some simple numerical ode methods. Section 3 is concerned with
a specific trust region algorithm for unconstrained optimization. The algorithm,
which is essentially the same as one found in [6], is defined in x3.1. A non-rigorous
discussion of the convergence properties is given in x3.2, and the main convergence
theorems are proved in x3.3. The algorithm may also be regarded as a timestepping
process for a gradient ode algorithm and the analogous results are stated in x4. In
x5 we develop a timestepping scheme for general odes that gives superlinear local
convergence to stable fixed points.
2. Numerical Methods. Most numerical methods for finding a local minimizer of f begin with an initial guess x_0 and generate a sequence {x_k}. Similarly, one-step methods for the ODE (1.1) produce a sequence {x_k} with x_k \approx x(t_k). The time-levels {t_k} are determined dynamically by means of the timestep \Delta t_k := t_{k+1} - t_k.
The Steepest Descent method for optimization has the form
x_{k+1} = x_k - \alpha_k \nabla f(x_k),   (2.1)
where \alpha_k is a scalar that may arise, for example, from a line search. This is equivalent to the explicit Euler Method applied to the corresponding gradient ODE with timestep \Delta t_k \equiv \alpha_k. We note in passing that the poor performance of Steepest Descent in the presence of steep-sided narrow valleys is analogous to the poor performance of Euler's Method on stiff problems. Indeed, Figure 4j in [7] and Figure 1.2 in [10] illustrate essentially the same behavior, viewed from these two different perspectives.
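As a concrete illustration of this equivalence, the following short sketch (not taken from the paper; the quadratic test function and the step size are assumptions chosen for illustration) applies the explicit Euler method to the gradient ODE and observes that f decreases along the computed sequence, exactly as a fixed-step Steepest Descent iteration would.

```python
import numpy as np

D = np.diag([1.0, 10.0])          # assumed test problem: f(x) = 0.5 * x^T D x

def f(x):
    return 0.5 * x @ D @ x

def grad_f(x):
    return D @ x

x = np.array([1.0, 1.0])
dt = 0.05                         # timestep Delta t_k, identical to the step alpha_k
for k in range(100):
    x = x - dt * grad_f(x)        # explicit Euler on dx/dt = -grad f(x) == Steepest Descent
print(f(x))                       # f decreases monotonically for sufficiently small dt
```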
Newton's Method for optimization is based on the local quadratic model
q_k(\delta) := f(x_k) + \nabla f(x_k)^T \delta + \tfrac{1}{2}\delta^T \nabla^2 f(x_k)\,\delta.   (2.2)
Note that q_k(\delta) is the quadratic approximation to f(x_k + \delta) that arises from a Taylor series expansion about x_k. If \nabla^2 f(x_k) is positive definite then q_k(\delta) has the unique minimizer \delta = -[\nabla^2 f(x_k)]^{-1}\nabla f(x_k). We thus arrive at Newton's Method
x_{k+1} = x_k - [\nabla^2 f(x_k)]^{-1}\nabla f(x_k).   (2.3)
The following result concerning the local quadratic convergence of Newton's Method may be found, for example, in [5, 6, 7].
Theorem 2.1. Suppose that f \in C^2 and that \nabla^2 f satisfies a Lipschitz condition in a neighborhood of a local minimizer x^*. If x_0 is sufficiently close to x^* and \nabla^2 f(x^*) is positive definite, then Newton's Method is well defined for all k and converges at second order.
The Implicit Euler Method applied to (1.1) with F(x) \equiv -\nabla f(x) using a timestep of \Delta t_k produces the equation
x_{k+1} = x_k - \Delta t_k \nabla f(x_{k+1}).   (2.4)
This is generally a nonlinear equation that must be solved for x_{k+1}. Applying one iteration of Newton's Method (that is, Newton's Method for solving nonlinear equations) with initial guess x_k gives
(I + \Delta t_k \nabla^2 f(x_k))(x_{k+1} - x_k) = -\Delta t_k \nabla f(x_k).   (2.5)
This method is sometimes referred to as the Linearized Implicit Euler Method; see, for example, [22]. Note that for large values of \Delta t_k we have x_{k+1} \approx x_k - [\nabla^2 f(x_k)]^{-1}\nabla f(x_k), and the ODE method looks like Newton's Method (2.3). On the other hand, for small \Delta t_k we have x_{k+1} \approx x_k - \Delta t_k \nabla f(x_k), which corresponds to a small step in the direction of steepest descent (2.1). Hence, at the extremes of large and small \Delta t_k, the ODE method behaves like well-known optimization methods. However, we can show much more: for any value of \Delta t_k, the method (2.5) can be identified with a trust region process in optimization.
This connection was pointed out by Goldfarb in the discussion on unconstrained
optimization in [17]. The relevant optimization theory is developed in the next
section.
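For readers who prefer code to formulas, a single step of (2.5) can be written as follows. This is a sketch under the assumption that the gradient and Hessian of f are available as callables; it is not taken from the paper.

```python
import numpy as np

def linearized_implicit_euler_step(x, dt, grad_f, hess_f):
    """One step of the Linearized Implicit Euler Method (2.5) for dx/dt = -grad f(x).

    Solves (I + dt * H_k)(x_{k+1} - x_k) = -dt * g_k.  As dt grows the step tends to
    a Newton step; as dt shrinks it tends to a small steepest-descent step.
    """
    g = grad_f(x)
    H = hess_f(x)
    d = np.linalg.solve(np.eye(len(x)) + dt * H, -dt * g)
    return x + d
```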
3. A Trust Region Algorithm.
3.1. The Algorithm. We have seen that Newton's Method is based on the
idea of minimizing the local quadratic model q k (ffi) in (2.2) on each step. Since the
model is only valid locally, it makes sense to restrict the increment; that is, to seek
an increment ffi that minimizes q k (ffi) subject to some constraint kffik - h k . Here h k
is a parameter that reflects how much trust we are prepared to place in the model.
Throughout this work we use k \Delta k to denote the Euclidean vector norm and
the corresponding induced matrix norm. In this case a solution to the locally-
constrained quadratic model problem can be characterized. The following Lemma is
one half of [6, Theorem 5.2.1]; a weaker version was proved in [8]. For completeness,
we give a proof here.
Lemma 3.1. Given G \in R^{m \times m} and g \in R^m, if, for some \lambda \ge 0,
(G + \lambda I)\hat\delta = -g   (3.1)
and G + \lambda I is positive semi-definite, then \hat\delta is a solution of
\min_{\delta}\; q(\delta) := g^T\delta + \tfrac{1}{2}\delta^T G\,\delta \quad subject to \quad \|\delta\| \le \|\hat\delta\|.   (3.2)
Furthermore, if G + \lambda I is positive definite, then \hat\delta is the unique solution of (3.2).
Proof. In the case where G + \lambda I is positive semi-definite, it is straightforward to show that \hat\delta minimizes q(\delta) + \tfrac{\lambda}{2}\delta^T\delta over all \delta. Hence, for all \delta with \|\delta\| \le \|\hat\delta\| we have q(\hat\delta) \le q(\delta), so that \hat\delta solves the problem (3.2). When G + \lambda I is positive definite, the inequality is strict for \delta \ne \hat\delta, and hence the solution is unique.
Note that Lemma 3.1 does not show how to compute an increment b ffi given
a trust region constraint kffik - h k . Such an increment may be computed or approximated
using an iterative technique; see, for example, [6, pages 103-107] or [5,
pages 131-143]. However, as mentioned in [6], it is reasonable to regard - in (3.1)
as a parameter that drives the algorithm-having chosen a value for - and checked
that G+-I is positive definite, we may solve the linear system (3.1) and a posteriori
obtain a trust region radius h k := k b ffik. It easily shown that if G + -I is positive
definite then increasing - in (3.1) decreases k b ffik.
These remarks motivate Algorithm 3.2 below. We use \lambda_{\min}(M) to denote the smallest eigenvalue of a symmetric matrix M and let \epsilon > 0 be a small constant. Given x_0 and \lambda_0 > 0, a general step of the trust region algorithm proceeds as follows.
Algorithm 3.2. (In outline: compute g_k := \nabla f(x_k) and G_k := \nabla^2 f(x_k); solve (G_k + \lambda_k I)\delta_k = -g_k; compute the actual and predicted reductions in f and their ratio r_k, provided \lambda_{\min}(G_k) + \lambda_k \ge \epsilon, and otherwise set r_k to a nonpositive value so that the step is rejected; compute \lambda_{k+1} from r_k using (3.3); if r_k \le 0 set x_{k+1} := x_k, otherwise set x_{k+1} := x_k + \delta_k.)
The algorithm involves the function
V(r, \lambda) = \begin{cases} 2\lambda, & r < \tfrac{1}{4},\\ \lambda, & \tfrac{1}{4} \le r \le \tfrac{3}{4},\\ \tfrac{1}{2}\lambda, & r > \tfrac{3}{4}. \end{cases}   (3.3)
Note that r_k records the ratio of the reduction in f from x_k to x_k + \delta_k and the reduction that is predicted by the local quadratic model. If r_k is significantly less than 1 then the model has been over-optimistic. This information is used in (3.3) to update the trust region parameter \lambda. In the case where the local quadratic model has performed poorly, we double the \lambda parameter, which corresponds to reducing the trust region radius on the next step. If the performance is reasonable, we retain the same value for \lambda. In the case of good performance we halve the value of \lambda, thereby indirectly increasing the trust region radius.
We emphasize that Algorithm 3.2 is a trust region algorithm in the sense that on each step \delta_k solves the local restricted problem
\min_{\delta} q_k(\delta) \quad subject to \quad \|\delta\| \le \|\delta_k\|.   (3.4)
Also, we remark that the algorithm is essentially the same as that described in [6,
pages 102-103]. The underlying idea of adding a multiple of the identity matrix to
ensure positive definiteness was first applied to the case where f has sum-of-squares
form, leading to the Levenberg-Marquadt algorithm. Goldfeld et al. [8] extended
the approach to a general objective function, and gave some theoretical justification.
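The iteration just described can be sketched in a few lines of Python. This is only a sketch based on the prose description above: the helper callables f, grad_f, hess_f, the stopping rule based on the gradient norm, and the iteration limit are assumptions, not part of the original algorithm statement.

```python
import numpy as np

def trust_region_lm(x0, f, grad_f, hess_f, lam0=1.0, eps=1e-8, max_iter=200, tol=1e-10):
    """Sketch of the lambda-driven trust region (Levenberg-Marquardt type) iteration."""
    x, lam = np.asarray(x0, dtype=float), float(lam0)
    for _ in range(max_iter):
        g, G = grad_f(x), hess_f(x)
        if np.linalg.norm(g) < tol:
            break
        # eigenvalue test: require G + lam*I to be safely positive definite
        if np.linalg.eigvalsh(G).min() + lam < eps:
            lam *= 2.0                                   # shrink the trust region, retry
            continue
        delta = np.linalg.solve(G + lam * np.eye(len(x)), -g)
        predicted = -(g @ delta + 0.5 * delta @ G @ delta)   # reduction predicted by q_k
        actual = f(x) - f(x + delta)                         # actual reduction in f
        r = actual / predicted if predicted > 0 else -1.0
        # update rule (3.3): double, keep, or halve lambda
        if r < 0.25:
            lam *= 2.0
        elif r > 0.75:
            lam *= 0.5
        if r > 0:
            x = x + delta                                # accept only if f decreased
    return x
```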
Theorems 5.1.1 and 5.1.2 of [6] provide a general convergence theory for a wide
class of trust region methods. However, these results do not apply immediately to
Algorithm 3.2, since the algorithm does not directly control the radius h k := kffi k k,
but, rather, controls it indirectly via adaption of - k . In fact, we will see that the
behavior established in Theorem 5.1.2 of [6], local quadratic convergence, does not
hold for Algorithm 3.2. We are not aware of any existing convergence analysis that
applies directly to Algorithm 3.2, except for general results of the form encapsulated
in the Dennis-Mor'e Characterization Theorem for superlinear convergence [4, 5, 6]
and the "strongly consistent approximation to the Hessian" theory given in [16].
These references are discussed further in the remarks that follow Theorem 3.4.
3.2. Motivation for the Convergence Analysis. The proofs in x3.3 and
the appendix are rather technical, and hence, to help orient the reader, we give
below a heuristic discussion of the key points.
Theorem 3.3 establishes global convergence, and the proof uses arguments that
are standard in the optimization literature. Essentially, global convergence follows
from the fact that when the local quadratic model is inaccurate the algorithm
chooses a direction that is close to that of steepest descent. Perhaps of more interest
is the rate of local convergence. Suppose that x k
positive definite, and suppose that for k - b k we have r k ? 3=4, and
hence - It follows that, for some constant C 1 ,
Note also that G k and G \Gamma1
are bounded for large k.
given a large k, let ffi
Newt
k denote the correction that would arise from
Newton's method applied at x k , so that we have
Newt
Expanding (3.6), using (3.7),
Newt
Letting d k := x
Hence, in (3.8)
Newt
Using (3.5) we find that
Newt
for some constant C 2 .
Now, since x
Newt
k is the Newton step from x k , we have, from Theorem 2.1,
Newt
for some constant C 3 . The triangle inequality gives
Newt
Newt
and inserting (3.9) and (3.10) we arrive at the key inequality
for some constant C 4 . The first term on the right-hand side of (3.11) distinguishes
the algorithm from Newton's Method, and dominates the rate of convergence. To
proceed, it is convenient to consider a shifted sequence; let b e k := e k+N , for some
fixed N to be determined. Then from (3.11),
be k
Choosing N so that 2 N ? C 4 , we have
Now, neglecting the O(be 2
leads to
If, in addition to ignoring the O(be 2
in (3.13) we also assume that equality
holds, then we get equality in (3.14) and
be 0
but
be 0
We see from (3.15) that the error sequence is not quadratically convergent. However,
(3.16) corresponds to a very rapid form of superlinear convergence. Although this
analysis used several simplifying assumptions, the main conclusions can be made
rigorous, as we show in the next subsection. The type of superlinear convergence
that we establish is likely to be as good as quadratic convergence in practice. This
matter is discussed further after the proof of Theorem 3.4.
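To make the heuristic explicit, the simplified recursion can be unrolled as follows. This is a back-of-the-envelope computation using the shifted errors of the text, under the stated simplifying assumptions (equality in the recursion and neglect of the quadratic remainder):

```latex
\hat e_{k+1} = \frac{\hat e_k}{2^{k}}
\;\Longrightarrow\;
\hat e_{k} = \frac{\hat e_0}{2^{\,0+1+\cdots+(k-1)}} = \frac{\hat e_0}{2^{k(k-1)/2}},
\qquad
\frac{\hat e_{k+1}}{\hat e_{k}} = 2^{-k}\to 0,
\qquad
\frac{\hat e_{k+1}}{\hat e_{k}^{2}} = \frac{2^{k(k-1)/2}}{2^{k}\,\hat e_0}\to\infty .
```

The first two displays correspond to the rapid superlinear decay, while the last ratio being unbounded is why the convergence is not quadratic.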
3.3. Convergence Analysis of the Trust Region Algorithm. The following
theorem shows that Algorithm 3.2 satisfies a global convergence result. The
structure of the proof is similar to that of [6, Theorem 5.1.1].
Theorem 3.3. Suppose that Algorithm 3.2 produces an infinite sequence such
that x k 2 B ae R m and g k 6= 0 for all k, where B is bounded and f 2 C 2 on B.
Then there is an accumulation point x 1 that satisfies the necessary conditions for
a local minimizer in Theorem 1.1.
Proof. Any sequence in B must have a convergent subsequence. Hence, we have
collects the indices in the convergent subsequence. It
is convenient to distinguish between two cases:
(i) sup
Case (i): From the form of V (r; -) in (3.3), there must be an infinite subsequence
whose indices form a set b
4 . Also, using the
boundedness of G k and g k , we have
and hence
Suppose that the gradient limit there exists a
descent direction s, normalized so that ksk = 1, such that
since ffi k solves the local restricted subproblem (3.4) we have q k (kffi k ks) -
ks T
Also, a Taylor expansion of f(x
We conclude from (3.18), (3.20) and (3.21) that r
S , which
contradicts r k ! 1. Hence,
Now suppose that G 1 := G(x 1 ) is not positive semi-definite; so there is a
direction v, with
Pick
S, and
S with k - b k. Then, since
solves the local restricted subproblem (3.4), we have
and hence
It follows from (3.18), (3.21) and (3.23) that r
S , which
contradicts
4 . Hence, G 1 is positive semi-definite.
Case (ii): From the form of V (r; -) in (3.3), there must be an infinite subsequence
whose indices form a set -
If
and hence
where Gmax := sup x2B
This gives
Hence, removing the earlier indices from -
S if necessary, we have, with h k := kffi k k,
min
we have \Deltaf
S. From r k - 1
4 it follows that \Deltaq k ! 0. Let
kffik - h and set -
x
Hence, is feasible on the subproblem that is solved by ffi k , and so
Letting
S , it follows from (3.25) that q k ( - ffi) - f
also minimizes q 1 (ffi) on kffik - h, and since the constraint is inactive, the necessary
conditions of Theorem 1.1 must be satisfied. Hence, g 1 6= 0 is contradicted.
Now, with case (ii), we have
as
S . Suppose G 1 is not positive semi-definite. Then the arguments
giving (3.22)-(3.23) may be applied, and we conclude that r
in -
S . It then follows from (3.3) that - k ! 0, and since - min must
have G 1 positive semi-definite. This gives the required contradiction.
Note that, as mentioned in [6], since the algorithm computes a non-increasing
sequence f k , the bounded region B required in this theorem will exist if any level
set is bounded.
In Theorem 3.3 we assume that g k 6= 0 for all k. If g b
the algorithm essentially terminates, giving x
However, in this case we cannot conclude that r 2 f(x k ) is positive semi-definite for
The next theorem quantifies the local convergence rate of Algorithm 3.2. The
first part of the proof is based on that of [6, Theorem 5.1.2].
Theorem 3.4. If the accumulation point x 1 of Theorem 3.3 also satisfies
the sufficient conditions for a local minimizer in Theorem 1.1, then for the main
sequence 1. Further, the displacement error e k :=
for some constant C, and if e k ? 0 for all k,
e
for constants e
C, but the ratio e k+1 =e 2
k is unbounded.
Proof. First, we show that case (i) of (3.17) in the proof of Theorem 3.3 can
be ruled out. Suppose that case (i) arises. Then r
k !1 in b
S.
positive definite, the matrix G k is also positive definite for large
k in b
S . In this case the Newton correction, ffi
Newt
Newt
\Gammag k , is
well defined and gives a global minimum of the local quadratic model q k . Define ff
by ffkffi Newt
and note that since ffi k solves the local restricted subproblem
(3.4), we have ff - 1. Then
Newt
Newt
Newt
Newt
Newt
Hence, using f
Newt
Newt
where - min ? 0 is a lower bound for the smallest eigenvalue of G k for large k in b
S .
It follows that
We may now conclude from (3.21) that r k ! 1 as
S . Hence, case (i)
cannot arise.
For case (ii), we have
as k !1 with k 2 -
S . Further, since
is a lower bound for the smallest eigenvalue of G k for large k in b
S .
It follows from (3.21) that as k !1 in -
S we must have
Having established that - k ! 0, we now know that the correction used in the
algorithm looks like the Newton correction ffi
Newt
k , which satisfies G k ffi
Newt
\Gammag k .
Let x Newt
Newt
k . Also, let d k := x
k !1 in -
S, and, by the triangle inequality,
The quadratic convergence property of Newton's Method given in Theorem 2.1
implies that for x k is sufficiently close to x 1
for some constant A 1 .
Expanding the other term in (3.31), we find
Newt
k ), we find that
Using (3.32) and (3.33) in (3.31) gives, for large k 2 -
e k+1 --
where A 2 is a constant.
Repeating the arguments that generated the inequalities (3.29) and (3.30), we
can show that there is a neighborhood N around x 1 such that if x
then r k - 3=4, so that - =2. Hence, from (3.34), there is some - k 2 -
S for
which x- k 2 N and the main sequnce lies in N for k -
k. So in the main sequence
we have x large k.
Hence, (3.34) may be extended to the bound
where A 3 and A 4 are constants. Lemma A.1 now gives (3.26).
To obtain a lower bound on e k+1 we use the triangle inequality in the form
From (3.32) and (3.33) we have
A 5
for constants A 5 ? 0 and A 6 . Lemma A.1 gives the required result.
We now list a number of remarks about Theorem 3.4.
1. The theorem shows that Algorithm 3.2 does not achieve a quadratic local
convergence rate. This is caused by the fact that - k does not approach
zero quickly enough. We have which is reflected in the first
term on the right-hand side of (3.35). A straightforward adaptation of the
proof shows that by increasing the rate at which - k ! 0, it is possible to
make the second term on the right-hand side of (3.35) significant, so that
quadratic convergence is recovered. For example, this occurs if we alter the
strategy for changing - k so that -
(and - otherwise). However, as explained in item 4
below, we would not expect this change to improve performance in practice.
Quadratic convergence is also discussed in item 5 below.
2. The power k 2 =3 appearing in (3.26) and (3.27) has been chosen partly
on the basis of simplicity-it is clear from the proofs of Lemma A.1 and
Theorem 3.4 that it can be replaced by ak 2 , for any a ! 1=2. (This will, of
course, cause the constant C to change.)
3. It is also clear from the proof that the result is independent of the precise
numerical values appearing in the algorithm. The values 1=4 and 3=4 in
(3.3) can be replaced by any ff and fi, respectively, with
the factor 2 in (3.3) can be replaced by any factor greater than unity. If
the factor 1=2 in (3.3) is replaced by 1=K, for K ? 1, then the statement of
the theorem remains true with powers of 2 replaced by powers of K. (The
changes mentioned here will, of course, alter the constants C, e
C and b
C.)
4. Theorem 3.4 shows that e k+1 =e k ! 0, and hence the convergence rate is
superlinear. However, the geometrically decreasing upper and lower bounds
on e k+1 =e k in (3.28) give us much more information. Asymptotically, whilst
Newton's Method gives twice as many bits of accuracy per step, the bound
(3.28) corresponds to k more bits of accuracy on the kth step. In both cases,
the asymptotic regime where e k is small enough to make the convergence
rate observable, but not so small that rounding errors are significant, is
likely to consist of only a small number of steps.
5. Several authors have found conditions that are sufficient, or necessary and
sufficient, for superlinear convergence of algorithms for optimization or
rootfinding. The most comprehensive result of this form is the Dennis-Mor'e
Characterization Theorem [4], [5, Theorem 8.2.4] and [6, Theorem 6.2.3].
Also, section 11.2 of [16] analyzes a class of rootfinding algorithms that employ
"consistent approximations to the Hessian", and this approach may
be used to establish superlinear convergence of Algorithm 3.2. However,
these references, which cover general classes of algorithms, do not derive
sharp upper and lower bounds on the rate of superlinear convergence of the
type given in Theorem 3.4. In the terminology of [16, x11.2], Algorithm 3.2
uses a strongly consistent approximation to the Hessian and superlinear
convergence is implied by - k ! 0. It also follows from [16, Result 11.2.7]
that quadratic convergence arises if we ensure that - k - Ckg k k and convergence
at R-order at least (1
5)=2 occurs if - k - Ckx
some constant C.
4. Timestepping on Gradient Systems. If we identify the trust region
parameter - k with the inverse of the timestep \Deltat k , then the Linearized Implicit
Euler Method (2.5) is identical to the updating formula in Algorithm 3.2. Hence,
Algorithm 3.2 can be regarded as an adaptive Linearized Implicit Euler Method for
gradient odes, and the convergence analysis of x3 applies. For completeness, we
re-write Algorithm 3.2 as a timestepping algorithm.
Given \Deltat 0 ? 0 and x 0 (= x init ), a general step of the algorithm for the gradient
system (1.1) with F(x) j \Gammarf (x) proceeds as follows.
Algorithm 4.1.
Compute
Solve
Compute
Compute
Compute
using (4.1)
else
set r
If r k - 0
set x
else
set x
The appropriate analogue of (3.3) is the function that maps (r, \Delta t) to \tfrac{1}{2}\Delta t when r < \tfrac{1}{4}, to \Delta t when \tfrac{1}{4} \le r \le \tfrac{3}{4}, and to 2\Delta t when r > \tfrac{3}{4}.   (4.1)
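In code form, the timestep analogue of (3.3) is simply the following (a sketch; the thresholds are those of the text, and doubling lambda corresponds to halving the timestep):

```python
def update_timestep(r, dt):
    """Timestep analogue of (3.3): halve dt on poor model agreement, keep it when the
    agreement is reasonable, and double it when the quadratic model did well."""
    if r < 0.25:
        return dt / 2.0
    if r > 0.75:
        return 2.0 * dt
    return dt
```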
The following result is a re-statement of Theorems 3.3 and 3.4 in this context.
Theorem 4.2. Suppose that Algorithm 4.1 for (1.1) with F(x) j \Gammarf (x)
produces an infinite sequence such that x
is bounded and f 2 C 2 on B. Then there is an accumulation point x 1 that satisfies
the necessary conditions for a local minimizer in Theorem 1.1.
If the accumulation point x 1 also satisfies the sufficient conditions for a local
minimizer in Theorem 1.1, then for the main sequence
1. Further, the displacement error e k :=
for some constant C, and if e k ? 0 for all k,
e
for constants e
C, but the ratio e k+1 =e 2
k is unbounded.
In addition to the remarks at the end of x3, the following points should be
noted.
1. Algorithm 4.1 requires a check on the positive definiteness of the symmetric
. This is an unusual requirement for a timestepping
algorithm; however, we point out that an inexpensive and numerically stable
test can be performed in the course of a Cholesky factorization [13,
page 225]. If the test - min is omitted from Algorithm 4.1
then the local convergence rate is unaffected, but the global convergence
proof breaks down.
2. The rule for changing timestep is different in spirit to the usual local error
control philosophy for odes [9, 10]. This is to be expected, since the aim
of reaching equilibrium as quickly as possible is at odds with the aim of
following a particular trajectory accurately in time. The timestep control
policy in Algorithm 4.1 is based on a measurement of closeness to linearity
of the ode, and this idea is generalized in the next section. We also
note that local error control algorithms typically involve a user-supplied
tolerance parameter, with the understanding that a smaller choice of tolerance
produces a more accurate solution. Algorithm 4.1 on the other hand,
involves fixed parameters.
3. In [12] it is shown that, under certain assumptions, the use of local error
control on gradient odes forces the numerical solution close to equilibrium.
(Typically the solution remains within O(-) of an equilibrium point, where
- is the tolerance parameter.) This suggests that local error control may
form an alternative to the positive definiteness test as a means of ensuring
global convergence. Having driven the solution close to equilibrium with
local error control, the closeness to linearity test could be used to give
superlinear convergence.
5. Timestepping to a General, Stable Steady State. Motivated by x4,
we now develop an algorithm that gives rapid local convergence to stable equilibrium
for a general ode. We let F 0 denote the Jacobian of F, and define F k := F(x k ) and
k is not symmetric in general. The ratio
l k :=
indicates how close F is to behaving linearly in a region containing x k and x k+1 .
Given \Deltat 0 ? 0 and x 0 (= x init ), a general step of the algorithm for (1.1)
proceeds as follows.
Algorithm 5.1.
Compute
Solve
Compute using (5.1)
\Deltat
else
arbitrary
Since we are concerned only with local convergence properties, the action taken
when does not affect the analysis.
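A rough sketch of one step of this timestepping scheme is given below. Only the overall structure (a linearized implicit Euler step plus timestep doubling when F behaves nearly linearly over the step) follows the text; the specific linearity ratio shown is an illustrative assumption standing in for (5.1), which is not reproduced here, and F, JF are assumed callables.

```python
import numpy as np

def general_step(x, dt, F, JF, tol=0.1):
    """One step of a linearized implicit Euler method with adaptive timestep,
    in the spirit of Algorithm 5.1."""
    Fx, Jx = F(x), JF(x)
    d = np.linalg.solve(np.eye(len(x)) - dt * Jx, dt * Fx)   # linearized implicit Euler
    x_new = x + d
    # how far F departs from its linearization over the step (stand-in for l_k)
    lin_err = np.linalg.norm(F(x_new) - (Fx + Jx @ d))
    l = lin_err / max(np.linalg.norm(Fx), 1e-300)
    dt_new = 2.0 * dt if l < tol else dt                     # double dt when nearly linear
    return x_new, dt_new
```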
In the theorem below, B(fl; z) denotes the open ball of radius fl ? 0 about
z flg.
Theorem 5.2. Suppose that F(x ? in a neighborhood
of x ? and F 0 strictly negative real parts. Then given any
Algorithm 5.1 there is a fl ? 0 such that for any x 0 2
Further, the displacement error e k :=
for some constant C, and if e k ? 0 for all k,
e
for constants e
C, but the ratio e k+1 =e 2
k is unbounded.
Proof. There exists a b fl ? 0 such that F 0 (x) is nonsingular for x 2
Letting D 1 be an upper bound for kF
Now, from (5.1),
l k :=
and it follows from (5.5) that by reducing b fl if necessary we have jl
and F
k ), we have, for small e k ,
Now there exists a \Deltat ? ? 0 such that
\Deltat
By continuity, and by reducing b fl if necessary, we have
\Deltat
Hence, for large \Deltat we have a contraction in (5.6). We now show that for x 0
sufficiently close to x ? , \Deltat k increases beyond \Deltat ? while x k remains in B(bfl; x ? ).
Let
\Deltat-\Deltat
\Deltat
From (5.6), for x k 2 B(bfl; x ? ) we have
Let b k be such that 2 b k \Deltat 0 - \Deltat ? . From (5.8) we may choose sufficiently small
so that
Then, from (5.6) and (5.7), since \Deltat k - \Deltat ? for k - b k,
and hence e k ! 0 as k !1, and \Deltat \Deltat 0 for all k. We then have
F
\Deltat k
F
Since both F 0
k and F
are bounded, (5.6) gives
k and e k+1 - D 5
for constants completes the result.
It is straightforward to show that any fixed timestep rk or linear multistep
method can produce only a linear rate of convergence to equilibrium in general.
From Theorem 5.2, we see that Algorithm 5.1 provides a systematic means of increasing
the timestep in order to achieve a rapid form of superlinear convergence.
This has many applications, particularly in the area of computational fluid dynam-
ics, where it is common to solve a discretized steady partial differential equation
by introducing an artificial time derivative and driving the solution to equilibrium;
see, for example [22].
It is clear from the proof of Theorem 5.2 that for sufficiently large \Deltat 0 the
algorithm permits local convergence to an unstable fixed point. This can be regarded
as a consequence of the fact that the Implicit Euler Method is over-stable, in the
sense that the absolute stability region contains the infinite strip fz
in the right-half of the complex plane; see, for example, [15, page 229]. Another
explanation is that Newton's Method for optimizing f is identical to Newton's
Method for algebraic equations applied to see, for example, [5, page 100].
Hence, unless other measures are taken, there is no reason why stable fixed points
should be preferred. In Algorithm 4.1 for gradient odes we check that - min
which helps to force the numerical solution to a stable fixed
point. It is likely that traditional ode error control would also direct the solution
away from unstable fixed points, and hence the idea of combining optimization and
ode ideas forms an attractive area for future work.
Acknowledgements
This work has benefited from my conversations with a
number of optimizers and timesteppers; most notably Roger Fletcher and David
Griffiths.
Appendix
A. Convergence Rate Lemma.
Lemma A.1. Let
k for all k:
Then
for some constant C:
Further, if e k ? 0 for all k then
and if, in addition,
R
k for all k;
then
e
but the ratio e k+1 =e 2
k is unbounded.
Proof. Choose
We first prove a result under restricted circumstances, and then generalize to
the full result. We assume that
3:
Our induction hypothesis is
Note that, from (A.7), this holds for 3. If (A.8) is true for
using (A.1),
using (A.6) and (A.7),
Therefore, by induction, (A.8) is true for all k, if (A.7) holds.
Now, consider the shifted sequence b e k := e k+N , for some fixed N . We have
it is possible to choose N such that
3:
From (A.9) and (A.10), the result (A.8) holds for this shifted sequence, so
; for all k:
Translating this into a result for the original sequence, we find that,
Relabelling C as C=2 N 2 +N and letting b
Now
N . Hence,
Clearly, by increasing C, if necessary, the result will also hold for the finite sequence
N . Hence, (A.2) is proved. The inequality (A.3) follows after dividing
by e k in (A.1) and using (A.2).
From (A.2), for sufficiently large k we have 2 k T e k - R=2, so that
R=:
C:
Clearly, by reducing -
C, if necessary, this result must hold for all k. Now, reduce -
if necessary, so that
1. From (A.12),
letting e
e
e
Inequalities (A.12) and (A.13) give (A.5), as required.
Finally, using (A.2) and (A.5) we find that
e
--R
Fast local convergence with single and multistep methods for nonlinear equations.
The solution of nonlinear systems of equations by A-stable integration tech- niques
A list of matrix flows with applications.
Practical Methods of Optimization.
Practical Optimization.
Maximisation by quadratic hill-climbing
Solving Ordinary Differential Equations I
Solving Ordinary Differential Equations II
Optimization and Dynamical Systems.
Analysis of the dynamics of local error control via a piecewise continuous residual.
Accuracy and Stability of Numerical Algorithms.
Numerical Methods for Ordinary Differential Systems.
Iterative Solution of Nonlinear Equations in Several Variables.
Nonlinear Optimization
Using dynamical systems methods to solve minimization problems.
Nonlinear Dynamics and Chaos.
Model problems in numerical stability theory for initial value problems.
The essential stability of local error control for dynamical systems.
Global asymptotic behaviour of iterative implicit schemes.
--TR | steady state;global convergence;superlinear convergence;gradient system;levenberg-marquardt;unconstrained optimization;quadratic convergence |
339427 | An efficient algorithm for finding a path subject to two additive constraints. | One of the key issues in providing end-to-end quality-of-service guarantees in packet networks is how to determine a feasible route that satisfies a set of constraints while simultaneously maintaining high utilization of network resources. In general, finding a path subject to multiple additive constraints (e.g., delay, delay-jitter) is an NP-complete problem that cannot be exactly solved in polynomial time. Accordingly, heuristics and approximation algorithms are often used to address to this problem. Previously proposed algorithms suffer from either excessive computational cost or low performance. In this paper, we provide an efficient approximation algorithm for finding a path subject to two additive constraints. The worst-case computational complexity of this algorithm is within a logarithmic number of calls to Dijkstra's shortest path algorithm. Its average complexity is much lower than that, as demonstrated by simulation results. The performance of the proposed algorithm is justified via theoretical performance bounds. To achieve further performance improvement, several extensions to the basic algorithm are also provided at low extra computational cost. Extensive simulations are used to demonstrate the high performance of the proposed algorithm and to contrast it with other path selection algorithms. | Introduction
Integrated network services (e.g., ATM, Intserv, Diffserv) are being designed to provide quality-
of-service (QoS) guarantees for various applications such as audio, video, and data. Many of these
applications have multiple QoS requirements in terms of bandwidth, delay, delay-jitter, loss, etc. One
of the important problems in QoS-based service offerings is how to determine a route that satisfies
multiple constraints (or QoS requirements) while simultaneously achieving high utilization of network
resources. This problem is known as QoS (or constraint-based) routing, and is being extensively
* This work was supported by the National Science Foundation under Grant ANI 9733143 and Grant CCR 9815229.
investigated in the research community [4, 8, 13, 23, 25, 26, 28, 34]. The need for QoS routing
can be justified for both reservation-based services (e.g., Intserv, ATM) as well as reservationless
services (e.g., Diffserv). For example, in the ATM PNNI protocol [16], constraint-based routing
is performed by source nodes to determine suitable paths for connection requests. In the case of
Diffserv, the constraint-based routes can be requested, for example, by network administrators for
traffic engineering purposes. Provisioning of such routes can also be used to guarantee a certain
service level agreement (SLA) for aggregated flows [38].
In general, routing consists of two basic tasks: distributing the state information of the network
and searching this information for a feasible path with respect to (w.r.t.) given constraints. In this
paper, we focus on the second task, and assume that the true state of the network is available to
every node (e.g., via link-state routing) and that nodes use this state information to determine an
end-to-end feasible path (see [18] for QoS routing under inaccurate information). Each link in the
network is associated with multiple QoS parameters. These parameters can be roughly classified into
additive and non-additive [2, 35]. For additive parameters (e.g., delay), the cost of an end-to-end
path is given, exactly or approximately, by the sum of the individual link parameters (or weights)
along that path. In contrast, the cost of a path w.r.t. a non-additive parameter (such as bandwidth)
is determined by the value of that parameter at the bottleneck link. Non-additive parameters can
be easily dealt with as a preprocessing step by pruning all links that do not satisfy the requested
QoS values [36]. Hence, in this paper we will mainly focus on additive parameters. The underlying
problem of path selection subject to two constraints can be stated as follows.
Multi-Constrained Path Selection (MCP): Consider a network that is represented by a directed graph G = (V, E), where V is the set of nodes and E is the set of links. Each link (u, v) \in E is associated with two nonnegative additive QoS values: w_1(u, v) and w_2(u, v). Given two constraints c_1 and c_2, the problem is to find a path p from a source node s to a destination node t such that w_1(p) \le c_1 and w_2(p) \le c_2, where w_i(p) := \sum_{(u,v) \in p} w_i(u, v) for i = 1, 2.
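For concreteness, the two additive path costs and the feasibility test can be expressed as follows (a small illustrative sketch; representing the weights as dictionaries keyed by links is an assumption):

```python
def path_costs(path, w1, w2):
    """Sum the two additive weights along a path given as a list of nodes.
    w1 and w2 map a link (u, v) to its nonnegative weight."""
    links = list(zip(path[:-1], path[1:]))
    return sum(w1[e] for e in links), sum(w2[e] for e in links)

def is_feasible(path, w1, w2, c1, c2):
    a, b = path_costs(path, w1, w2)
    return a <= c1 and b <= c2
```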
The MCP decision problem is known to be NP-complete [17, 24]. In other words, there is no efficient
(polynomial-time) algorithm that can surely find a feasible path w.r.t. both constraints unless
NP=P. A related yet slightly different problem is known as the restricted shortest path (RSP) prob-
lem, in which the returned path is required to satisfy one constraint while being optimal w.r.t. another
parameter. Any solution to the RSP problem can be also applied to the MCP problem. However,
the RSP problem is also known to be NP-complete [1, 17]. Both the MCP and RSP problems can
be solved via pseudo-polynomial-time algorithms in which the complexity depends on the actual
values of the link weights (e.g., maximum link weight) in addition to the size of the network [24, 21].
However, these algorithms are computationally expensive if the values of the link weights and the
size of network are large. To cope with the NP-completeness of these problems, researchers have
resorted to several heuristics and approximation algorithms.
One common approach to the RSP problem is to find the k-shortest paths w.r.t. a cost function
defined based on the link weights and the given constraint, hoping that one of these paths is feasible
and near-optimal [20, 32, 15, 19]. The value of k determines the performance and overhead of this
is large, the algorithm has good performance but its computational cost is prohibitive.
A similar approach to the k-shortest paths is to implicitly enumerate all feasible paths [3], but this
approach is also computationally expensive. In [37] the author proposed the Constrained Bellman-Ford
(CBF) algorithm, which performs a breadth-first-search by discovering paths of monotonically
increasing delay while maintaining lowest-cost paths to each visited node. Although this algorithm
exactly solves the RSP problem, its worst-case running time grows exponentially with the network
size. The authors in [31] proposed a distributed heuristic solution for the RSP problem with message
complexity of O(n 3 ), where n is the number of nodes. This complexity was improved in [39, 22].
In [21] the author presented two ffl-optimal approximation algorithms for RSP with complexities of
O(log log B(m(n=ffl)+log log B)) and O(m(n 2 =ffl) log(n=ffl)), where B is an upper bound on the solution
(e.g., the longest path), m is the number of links, and ffl is a quantity that reflects how far the solution
is from the optimal one. Although the complexities of these algorithms are polynomial, they are still
computationally expensive in large networks [29]. Accordingly, the author in [29] investigated the
hierarchical structure of such networks and provided a new approximation algorithm with better
scalability.
Although both the RSP and MCP problems are NP-complete, the latter problem seems to be
easier than the former in the context of devising approximate solutions. Accordingly, in [24] Jaffe
considered the MCP problem and proposed an intuitive approximation algorithm to it based on
minimizing a linear combination of the link weights. More specifically, this algorithm returns the
best path w.r.t. l(e) \stackrel{\mathrm{def}}{=} \alpha w_1(e) + \beta w_2(e) using Dijkstra's shortest path algorithm, where \alpha, \beta \ge 0. The key issue here is to determine the appropriate \alpha and \beta such that an optimal path w.r.t. l(e) is likely to satisfy the individual constraints. In [24] Jaffe determined two sets of values for \alpha and \beta based on minimizing an objective function of the path weights w_1(p) and w_2(p). For
the RSP problem, the authors in [6] proposed a similar approximation algorithm to Jaffe's, which
dynamically adjusts the values of ff and fi. However, the computational complexity of this algorithm
grows exponentially with the size of the network. Chen and Nahrstedt proposed another heuristic
algorithm that modifies the problem by scaling down the values of one link weights to bounded
integers [7]. They showed that the modified problem can be solved by using Dijkstra's (or Bellman-
Ford) shortest path algorithm and that the solution to the modified problem is also a solution to the
original one. When Dijkstra's algorithm is used, the computational complexity of their algorithm is
Bellman-Ford algorithm is used, the complexity is O(xnm), where x is an adjustable
positive integer whose value determines the performance and overhead of the algorithm. To achieve
a high probability of finding a feasible path, x needs to be as large as 10n, resulting in computational
complexity of O(n 4 ). In [14] Neve and Mieghem used the k-shortest paths algorithm in [9] with a
nonlinear cost function to solve the MCP problem. The resulting algorithm, called TAMCRA, has a
complexity of O(kn log(kn) is the number of constraints. As mentioned above,
the performance and overhead of this algorithm depend on k. If it is large, the algorithm gives good
performance at the expense of excessive computational cost.
Other works in the literature were aimed at addressing special yet important cases of the QoS
routing problem. For example, some researchers focused on an important subset of QoS requirements
(e.g., bandwidth and delay). Showing that the feasibility problem under this combination is not NP-
complete, the authors in [36] presented a bandwidth-delay based routing algorithm that simply prunes
all links that do not satisfy the bandwidth requirement and then finds the shortest path w.r.t. delay in
the reduced graph. Several path selection algorithms based on different combinations of bandwidth,
delay, and hop-count were discussed in [28, 27, 5] (e.g., widest-shortest path, shortest-widest path).
In addition, new algorithms were proposed to find more than one feasible path w.r.t. bandwidth and
delay (e.g., Maximally Disjoint Shortest and Widest Paths (MADSWIP)) [33]. Another approach to
QoS routing is to exploit the dependencies between the QoS parameters and solve the path selection
problem assuming specific scheduling schemes at network routers [27, 30]. Specifically, if Weighted
Fair Queueing (WFQ) scheduling is being used and the constraints are bandwidth, queueing delay,
jitter, and loss, then the problem can be reduced to standard shortest path problem by representing
all the constraints in terms of bandwidth. Although queueing delay can be formulated as a function
of bandwidth, this is not the case for the propagation delay, which is the dominant delay component
in high-speed networks [10].
Contributions and Organization of the Paper
Previously proposed algorithms for the MCP problem suffer from either excessive computational
complexities or low performance in finding feasible paths. In this paper, we provide in Section 3 an
efficient approximation algorithm (the basic algorithm) for the MCP problem under two additive con-
straints. Our algorithm is based on the minimization of the same linear cost function ffw 1 (p)+fiw 2 (p)
used in [24], where in our case we systematically search for the appropriate ff and fi. This formulation
is similar to that used in the Lagrange relaxation technique. However, the Lagrange technique
serves only as a platform, rather than a solution, by formulating constrained optimization problems
as a linear composition of constraints. The solution to the Lagrange problem requires searching for
the appropriate linear composition (Lagrange multipliers); the appropriate values of ff and fi in our
case. Any combinatorial algorithm (heuristic) that has been or will be proposed for linear optimization
problems is a careful refinement of the search for the appropriate multipliers in the Lagrangian
problem. When formulated as a Lagrangian multipliers problem, the search would typically be based
on computationally expensive methods, such as enumeration, linear programming, and subgradient
optimization [1]. Instead, we provide a binary search strategy for finding the appropriate value of k
in the composite function w (p) that is guaranteed to terminate within
a logarithmic number of calls to Dijkstra's algorithm. This fast search is one of the main contributions
in the paper. The algorithm always returns a path p. If p is not feasible, then it has the
following properties: (a) w j (p) - c j , and (b) w i (p) is within a given factor from a feasible path f
for which w i (f) is minimum, where (i; are either (1; 2) or (2; 1). Our basic algorithm performs a
binary search in the range [1; B] by calling a hierarchical version of Dijkstra's algorithm, which is
described in Section 2. Using an efficient implementation of Dijkstra's algorithm with complexity of
log n) [1], the worst-case complexity of our basic algorithm is O(log B(m Its
average complexity is observed to be much less than that. The space complexity is O(n). By proper
interpretation of the bounds in (a) and (b), we also present two extensions to our basic algorithm
in Section 4, which allow us to achieve further improvement in the routing performance at small
extra computational cost. Simulation results, which are provided in Section 5, demonstrate the high
performance of our algorithm and contrast it with other path selection algorithms. Conclusions and
future work are presented in Section 6.
Hierarchical Shortest Path Algorithm
In this section, we describe a hierarchical version of Dijkstra's shortest path algorithm that is used
iteratively in our algorithm with a composite link weight l(e) \stackrel{\mathrm{def}}{=} \alpha w_1(e) + \beta w_2(e), \alpha, \beta \ge 0. In addition to
finding one of the shortest paths w.r.t. l(e), this hierarchical version determines the minimum w 1 ()
and w 2 () among all shortest paths. To carry out these tasks, some modifications are needed in
the relaxation process of the standard Dijkstra's algorithm (lines 4-14 in Figure 1).
Figure 1: New relaxation procedure for the hierarchical version of Dijkstra's algorithm (pseudocode, lines 1-14).
The standard
Dijkstra's algorithm maintains two labels for each node [12]: d[u] to represent the estimated total
cost of the shortest path from the source node s to node u w.r.t. the composed weight l(e), and \pi[u] to represent the predecessor of node u along the shortest path. The hierarchical version of Dijkstra's
algorithm maintains additional labels: w 1 [u] and w 2 [u] to represent the cost of the shortest path
w.r.t. the individual weights, and min w 1 [u] and min w 2 [u] to represent the minimum w 1 and w 2
weights among all shortest paths. 1 The standard relaxation process (lines 1-3 in Figure 1) tests
whether the shortest path found so far from s to v can be improved by passing through node u. If so,
d[v] and \pi[v] are updated [12]. Under this condition, we add the update of w_1[v], w_2[v], min w_1[v],
and min w 2 [v]. In addition, if the cost of the shortest path found so far from node s to node v is the
same as that of the path passing through node u, then min w 1 [v] and min w 2 [v] are also updated if
passing through node u would improve their values.
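A sketch of the modified relaxation step is given below. The label dictionaries and function signature are assumptions about the data structures; only the update logic follows the description above.

```python
def relax(u, v, luv, w1uv, w2uv, d, pi, w1, w2, min_w1, min_w2):
    """Relaxation for the hierarchical Dijkstra: luv is the composite weight l(u, v),
    w1uv/w2uv are the individual link weights.  Besides the usual d/pi update, the
    minimum w1 and w2 over all equal-cost shortest paths are maintained."""
    if d[u] + luv < d[v]:
        d[v], pi[v] = d[u] + luv, u
        w1[v], w2[v] = w1[u] + w1uv, w2[u] + w2uv
        min_w1[v], min_w2[v] = min_w1[u] + w1uv, min_w2[u] + w2uv
    elif d[u] + luv == d[v]:
        # another shortest path of the same composite cost: keep the smaller
        # individual costs among all such paths
        min_w1[v] = min(min_w1[v], min_w1[u] + w1uv)
        min_w2[v] = min(min_w2[v], min_w2[u] + w2uv)
```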
3 Basic Approximation Algorithm For MCP
Figure 2: Approximation algorithm for finding a feasible path subject to two additive constraints (pseudocode for BasicApproximation(G(V, E), s, t, c_1, c_2)).
(Footnote 1: Notice that w_i[\cdot] is a node label, whereas w_i(\cdot) indicates the weight of a link or the cost of a path.)
Our algorithm, shown in Figure 2, first executes the hierarchical version of Dijkstra's algorithm with link weights \alpha = \beta = 1, i.e., l(e) = w_1(e) + w_2(e). If the returned path p is feasible, then the algorithm terminates. Otherwise, p is not feasible, and several other cases need to be considered. If both min w_1[t] > c_1 and min w_2[t] > c_2,
then it is guaranteed that there is no feasible path in the network,
so the algorithm terminates. If both min w 1 [t] - c 1 and min w 2 [t] - c 2 , then there are at least
two paths, say p 1 and p 2 , that have the same cost w.r.t. l() but that violate either c 1 or c 2 (if
vice versa). In this case, changing the value of ff or fi
does not help since the algorithm will always return an infeasible path. To improve performance
in such a case, one can use the extensions presented in Section 4. On the other hand, if either
but not both, then there might be a feasible path that can be
found using different values of ff and fi. The challenge is to determine the appropriate values for ff
and fi as fast as possible such that a feasible path can be identified. Finding the appropriate values
for ff and fi can also be formulated as a Lagrangian multipliers problem. But in this case, finding
the Lagrange multipliers would typically be done using computationally expensive methods (e.g.,
enumeration, linear programming, subgradient optimization technique) [1]. Instead, we carefully
refine the search required by the Lagrangian problem and provide a binary search strategy for ff and
fi that is guaranteed to terminate within a logarithmic number of calls to Dijkstra's algorithm.
If either min w 1 [t] - c 1 or min w 2 [t] - c 2 , then the algorithm executes the binary search presented
in Figure 3 with the pair (i, j) set to either (1, 2) or (2, 1). These two cases are called Phase 1 and Phase 2.
Figure 3: Binary search for our approximation algorithm (pseudocode for Binary_Search: for each trial value of k the hierarchical Dijkstra's algorithm is executed with the composite link weight; SUCCESS is returned if the resulting path is feasible, and otherwise k is increased when min w_j[t] \le c_j and decreased otherwise).
In each phase, the algorithm executes the binary search using the composite link weight l(e) = k\,w_i(e) + w_j(e) with the corresponding ordering of (i, j), where k \ge 1. If the returned shortest path w.r.t. l(e) is not feasible,
the algorithm repeats the hierarchical Dijkstra's algorithm up to a logarithmic number of different values of k in the range [1, B], where B is an upper bound on the cost of the longest path w.r.t. w_j(). Lemma 1 in Section 3.2 shows that a binary search argument in the above range can be used to determine an appropriate value for k. Furthermore, we show (in Lemma 2) that if the binary search fails to return a feasible path, then it returns a path p such that w_j(p) \le c_j and w_i(p) \le w_i(f) + (c_j - w_j(p))/k, where f is a feasible path and (i, j) is either (1, 2) or (2, 1). This is a reasonable scenario for searching fast for a feasible path that satisfies one of the constraints and that tries to get closer to satisfying the other constraint. According to this bound, k needs to be maximized; the above binary search tries to achieve this goal. In addition to maximizing k, the algorithm may attempt to minimize the difference (c_j - w_j(p)) to make the approximation bound tighter. This is an extension to the basic algorithm that is presented in Section 4.
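The binary search itself can be sketched as follows, with the composite weight written here as l(e) = k\,w_i(e) + w_j(e), consistent with the bound in Lemma 2. The helper hierarchical_dijkstra(G, s, t, k, i, j), its return convention (path, individual costs indexed by 1 and 2, and the min w_j label at t), and the integer midpoint rule are assumptions.

```python
def binary_search_k(G, s, t, c, i, j, B, hierarchical_dijkstra):
    """Search k in [1, B]: increase k while a shortest path can still meet c[j];
    decrease it otherwise.  Returns a feasible path if one is met, else the last path."""
    lo, hi, best = 1, B, None
    while lo <= hi:
        k = (lo + hi) // 2
        path, w, min_wj = hierarchical_dijkstra(G, s, t, k, i, j)
        best = path
        if w[i] <= c[i] and w[j] <= c[j]:
            return path                       # feasible path found
        if min_wj <= c[j]:
            lo = k + 1                        # k can still be increased (Lemma 1)
        else:
            hi = k - 1                        # larger k cannot help; decrease
    return best
```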
In the rest of this section, we first illustrate how our algorithm works and contrast it with the
one in [24]. Second, we present the binary search argument with the related lemma and its proof.
Finally, we prove the performance bound associated with our basic approximation algorithm.
3.1 How the Algorithm Works
Figure
4 describes how an approximation algorithm minimizes w 1 (p) +kw 2 (p) by scanning the path-
cost space searching for a feasible path at a given value of k. The shaded area indicates the feasibility
Figure
4: How the approximation algorithm searches the feasible region using different values for k.
region. Black dots represent the costs of different paths from source node s to destination node t.
Each line in the figure shows the equivalence class of equal-cost paths w.r.t. the composed weight.
The approximation algorithm determines a line for the given value of k, and then moves this line
outward from the origin in the direction of the arrow. Whenever this line hits a path (i.e., black dot
in the figure), the algorithm returns this path which is the shortest w.r.t. the composed weight at
the given k. The approximation algorithm in [24] makes a good guess for k (e.g., returns
a path based on this k. However, if this path is infeasible the algorithm in [24] cannot proceed. As
shown in Figure 4, the likelihood of finding a feasible path is much higher if one tries different values
of k (e.g., example results in a feasible path). The advantage of our algorithm over the
one in [24] is that ours searches systematically for a good value for k instead of fixing it in advance.
If the returned path p is not feasible, then the algorithm decides to increase or decrease the value of
k based on whether min w 2 (p) - c 2 or not.
The systematic adjustment of k is illustrated in the examples in Figures 5 and 6 for two different
phases. Figure 5 illustrates Phase 1 where the returned path with . The
(a)
(b)
Figure
5: Searching for a feasible path in Phase 1.
algorithm executes the binary search with returns a feasible path when
as shown in Figure 5(b). Figure 6 illustrates Phase 2 where the returned path with
but not c 2 . In this case, the algorithm executes the binary search with finally
(a)
(b)
Figure
Searching for a feasible path in Phase 2.
returns a feasible path when 4. If the binary search fails, then the basic algorithm stops even
though there might be a feasible path in the network. In Section 4, we illustrate such a case and
provide possible remedies to it based on a scaling extension.
3.2 Binary Search
Lemma 1 Suppose that each link e \in E is assigned a weight l(e) = k\,w_i(e) + w_j(e), where k \ge 1 is an integer, and the pair (i, j) is either (1, 2) or (2, 1), depending on the phase. During the execution of the binary search, if the algorithm cannot find a path p for which l(p) is minimum and w_j(p) \le c_j, then such a path p cannot be found with larger values of k.
Lemma 1 implies that using a binary search, the algorithm can determine an appropriate value for
k. Although in the worst-case this search requires log(n executions of hierarchical
Dijkstra's algorithm, we observed that its average complexity is significantly lower than that.
Proof of Lemma 1: The binary search is applied to finding the largest k such that there exists
a shortest path p w.r.t. . Assume that
integer r. Let P be the set of all paths from s to t w.r.t. l(e) and let p be a path that the algorithm
selects during the binary search. When since every edge e is assigned the weight
f
In order to prove the lemma, it suffices to show that if
then the algorithm should never search for a path p 0 that satisfies the constraint c j by assigning
weights
By explicitly checking min w j [t] in line 10 of Figure 3, the algorithm guarantees that
c j for all shortest paths q 2 P , where it suffices to show that if the algorithm
assigns weights and fails to find a feasible path w.r.t. constraint c j , then no
path p 0 for which X
will satisfy both
and
when the value of k is increased to r + fl. In other words, it is useless to weight with the rule
in order to search for a path p 0 whose
minimum but satisfies the c j constraint.
Since path p violates the c j constraint, in order for path p 0 to satisfy this constraint, we must
Observe that (1) can be rewritten as
From (3) and (2), we have X
Based on (2) and (4), we know that the right-hand side and the left-hand side of (3) are positive.
Thus, it can be implied that
from which we conclude that
This, in turn, implies that p 0 will not be selected by the algorithm.
3.3 Performance Bounds
Lemma 2 If the binary search fails to return a feasible path w.r.t. both constraints, then it returns a path p that satisfies the constraint c_j and whose w_i() cost is upper bounded as follows:
w_i(p) \le w_i(f) + \frac{c_j - w_j(p)}{k},   (6)
where f is a feasible path, k is the maximum value that the binary search determines at termination, and the pair (i, j) is either (1, 2) or (2, 1), depending on the phase.
Note that the worst-case value for the bound in Lemma 2 is obtained when k = 1 and w_j(p) = 0, in
which case w_i(p) ≤ c_i + c_j.
For this worst-case scenario to take place, the feasible path f must lie on the upper
right corner of the feasibility region, with all other paths having negligible w_j() costs. This
is a rare scenario; most often, feasible paths are scattered throughout the feasibility region, allowing
the algorithm to terminate with k > 1, which in turn results in a tighter bound than c_i + c_j. Furthermore,
w_j(p) is often greater than zero, further tightening the bound on the cost of the returned
path.
Proof of Lemma 2: Let f be any feasible path. Assume that the returned path p is infeasible.
Since p is the shortest path w.r.t. l(e) = k · w_i(e) + w_j(e), we have
k · w_i(p) + w_j(p) ≤ k · w_i(f) + w_j(f).
In addition, w_j(p) ≤ c_j. From the above inequality, we can write a bound on w_i(p) as follows:
w_i(p) ≤ w_i(f) + (w_j(f) − w_j(p))/k ≤ c_i + (c_j − w_j(p))/k.
These approximation bounds provide some justification for the appropriateness of the basic algorithm.
They can also be used to obtain heuristic solutions for the MCP problem, as described next.
4 Extensions of the Basic Algorithm
4.1 Finding a Path with the Closest Cost to a Constraint
From Lemma 2, it is clear that one way to improve the performance of the basic algorithm is to
minimize the difference c_j − w_j(p) by obtaining a path p for which w_j(p) is as close as possible
to c_j. This can be done via the following modification to the basic algorithm of Section 3. Without
loss of generality, we assume that j = 2. Note that this extension is to be used when the
returned path from the basic algorithm is infeasible but min w_2[t] ≤ c_2.
For the given k, a DAG (directed acyclic graph) that contains all possible shortest paths w.r.t. l(e)
is constructed. In fact, this can be done during the execution of the hierarchical Dijkstra's algorithm
at no extra cost. A path p from this DAG is selected in such a way that w_2(p) is maximized but
is still less than or equal to c_2. Although a path p with the maximum or minimum w_2() cost can
be found in the DAG, it is not easy to find a path p for which w_2(p) is as close as possible to c_2 in
polynomial time. However, very efficient heuristics can be developed based on the fact that we can
compute the maximum and the minimum w_2() from the source to every node and from every node
to the destination. Let the following labels be maintained for each node u: M[u], m[u], M~[u], and
m~[u]. Labels M[u] and m[u] indicate, respectively, the maximum and minimum w_2() from the source
node s to node u. Labels M~[u] and m~[u] indicate, respectively, the maximum and minimum
w_2() from node u to the destination. These labels are determined by
using a simple forward and backward topological traversal algorithm [1]. Considering the pairwise
sums of these labels, we can assign a non-additive weight σ(u, v) to every link
(u, v) in the DAG, which indicates how close the w_2() cost of the paths passing through link (u, v) can get to c_2:
σ(u, v) is the minimum nonnegative value of c_2 − (X[u] + w_2(u, v) + Y[v]) over X ∈ {M, m} and Y ∈ {M~, m~},
and σ(u, v) = ∞ when no such nonnegative value exists. Then, the closest path to c_2 can be found
via a simple graph traversal algorithm as follows. Starting from the source node s, the algorithm
selects the link (s, u) with the minimum σ. It then goes to node u and again selects the link (u, v)
with the minimum σ. The algorithm keeps selecting links with minimum σ until it hits t. Although
this extension does not guarantee finding a feasible path, the following lemma shows that it always
returns a path, i.e., s and t are not disconnected by assigning ∞ to some links.
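The sketch below illustrates this heuristic. It assumes the caller supplies the shortest-path DAG together with a topological order of its nodes, and the form of σ(u, v) coded here (the smallest nonnegative slack c_2 − (prefix label + w_2(u, v) + suffix label) over the four label combinations, with ∞ when none is nonnegative) is our reading of the pairwise-sum rule described above and should be treated as an assumption.

```python
from collections import defaultdict

def closest_path_to_c2(dag, topo_order, s, t, c2):
    """dag[u] = [(v, w2), ...] restricted to the shortest-path DAG from s to t.
    topo_order is a topological order of the DAG's nodes.
    Greedily builds a path whose w_2() cost is pushed toward c2 without exceeding it."""
    # forward traversal: max/min w_2 from s to every node (M, m)
    M = defaultdict(lambda: float('-inf'))
    m = defaultdict(lambda: float('inf'))
    M[s] = m[s] = 0
    for u in topo_order:
        for v, w2 in dag.get(u, []):
            M[v] = max(M[v], M[u] + w2)
            m[v] = min(m[v], m[u] + w2)
    # backward traversal: max/min w_2 from every node to t (M~, m~)
    Mt = defaultdict(lambda: float('-inf'))
    mt = defaultdict(lambda: float('inf'))
    Mt[t] = mt[t] = 0
    for u in reversed(topo_order):
        for v, w2 in dag.get(u, []):
            Mt[u] = max(Mt[u], w2 + Mt[v])
            mt[u] = min(mt[u], w2 + mt[v])

    def sigma(u, v, w2):
        slacks = [c2 - (a + w2 + b)
                  for a in (M[u], m[u]) for b in (Mt[v], mt[v])]
        nonneg = [x for x in slacks if x >= 0]
        return min(nonneg) if nonneg else float('inf')

    # greedy traversal: repeatedly follow the outgoing link with minimum sigma
    path, u = [s], s
    while u != t:
        v, _ = min(dag[u], key=lambda e: sigma(u, e[0], e[1]))
        path.append(v)
        u = v
    return path
```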
Lemma 3 When the above extension is used, it always returns a path, i.e., s and t are not disconnected
by assigning ∞ to some links.
Proof of Lemma 3: First, note that the basic algorithm always returns a path. If this path is not feasible but
both min w_1[t] ≤ c_1 and min w_2[t] ≤ c_2, then the above extension can be used. Since min w_2[t] ≤ c_2,
there is at least one path, p, with w_2(p) ≤ c_2. Assume that p consists of l nodes (v_1, v_2, ..., v_l), where v_1 = s and v_l = t.
Note that the extension first computes the labels M[u], m[u], M~[u], and
m~[u] for each node u. Since w_2(p) ≤ c_2, we have
m[v_i] + w_2(v_i, v_{i+1}) + m~[v_{i+1}] ≤ w_2(p) ≤ c_2 for every link (v_i, v_{i+1}) along p,
from which we conclude that c_2 − (m[v_i] + w_2(v_i, v_{i+1}) + m~[v_{i+1}]) ≥ 0.
Thus, we have σ(v_i, v_{i+1}) < ∞ for every link (v_i, v_{i+1}) along the path p. This
ensures that there is at least one path from s to t, i.e., s and t are not disconnected. Of course, if
no feasible path is found under the extension, the algorithm can trivially return the path p itself,
ensuring the connectedness of s and t.
Figure 7 depicts an example of how a DAG of shortest paths is constructed. The original network
is shown in Figure 7(a). Suppose a path p is to be found from s to t such that w_1(p) ≤ c_1 and w_2(p) ≤ c_2.
There are three shortest paths from s to t w.r.t. l(e): p_1, p_2, and p_3. For each
of these paths, both min w_1[t] and min w_2[t] are less than the respective constraints, so we can apply this
extension. The corresponding DAG that contains all shortest paths w.r.t. l(e) is shown in

Figure 7: An example of a network and the DAG containing the three shortest paths from s to t.
Figure 7(b). By traversing forward and backward on this DAG, we compute the labels M[u], m[u],
M~[u], and m~[u] (see Figure 8(a)). After calculating σ for each link as shown in Figure 8(b), the
algorithm first selects link (s, 1), followed by link (1, 2), and finally link (2, t). Thus, the closest path
to c_2 is found. Since this heuristic step tends to minimize the additive difference in the approximation
bound presented in Lemma 2, the returned path p is more likely to satisfy both c_1 and c_2.

Figure 8: Finding the closest path to c_2.
4.2 Scaling
In some pathological cases, no linear combination of weights can result in returning a feasible path,
despite the existence of such a path. An example of such a case is shown in Figure 9(a). Suppose
that a path p is to be found from s to t such that w_1(p) ≤ c_1 and w_2(p) ≤ c_2. As shown
in Figure 9(b), there are three paths from s to t: p_1, p_2, and p_3. Only p_2 is feasible. The approximation
algorithm, say in Phase 1, returns a path based on the minimization of the composed weight l(e). To return
the feasible path p_2, the algorithm needs to find an appropriate value for k such that l(p_2) is less than both
l(p_1) and l(p_3). Hence, the value of k needs to be greater than 7/6 to satisfy l(p_2) < l(p_1), and
also less than 8/7 to satisfy l(p_2) < l(p_3). But this is impossible. In other
words, the approximation algorithm cannot find the feasible path p_2, irrespective of the value of k.
This situation is illustrated in part (b) of the figure.

Figure 9: A scenario in which the basic algorithm fails to find a feasible path from s to t.
To circumvent such pathological cases, we provide an extension to our basic algorithm based on
the scaling in [7]. A new weight w'_2(e) = ⌈w_2(e) · x / c_2⌉ is assigned to every link in the original graph,
where x is an adjustable positive integer in the range [1, c_2]. The problem reduces to finding a path
in the scaled graph such that w_1(p) ≤ c_1 and w'_2(p) ≤ x. It has been shown that a solution in
the scaled graph is also a solution in the original one [7]. If we scale the network in Figure 9(a),
the resulting graph is shown in Figure 10(a). If the approximation algorithm uses the composed cost
function in the scaled graph, it can return the feasible path p_2
(see Figure 10(b)), since l(p_2) can now be made smaller than the composed costs of the other paths.

Figure 10: Scaling the network in Figure 9 allows the algorithm to find a feasible path.
Using the above scaling function, one may increase the number of shortest paths in the scaled
graph. If we apply our basic approximation algorithm to the scaled graph, the algorithm will consider
more shortest paths (in the scaled graph) in each iteration of the binary search. It is intuitively true
that the algorithm will terminate with a better (i.e., larger) value of k.
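A minimal sketch of this scaling extension is given below. The ceiling form w'_2(e) = ⌈w_2(e) · x / c_2⌉ is the scaling rule assumed here (following the scaling in [7]), and run_basic is a caller-supplied routine (for example, a wrapper around the basic-search sketch earlier) that is assumed to return a feasible path in the scaled graph or None.

```python
import math

def rescale_graph(graph, c2, x):
    """Return a copy of the graph whose second link weight is replaced by
    w2'(e) = ceil(w2(e) * x / c2); graph[u] = [(v, (w1, w2)), ...]."""
    return {u: [(v, (w1, math.ceil(w2 * x / c2))) for v, (w1, w2) in edges]
            for u, edges in graph.items()}

def search_with_scaling(graph, s, t, c1, c2, run_basic):
    """Binary search over the scaling factor x in [1, c2] (cf. Lemma 4):
    if no feasible path exists for a given x, smaller x cannot help, so the
    search moves to larger x; on success it tries smaller x, which tends to
    increase the number of shortest paths in the scaled graph."""
    lo, hi, best = 1, c2, None
    while lo <= hi:
        x = (lo + hi) // 2
        p = run_basic(rescale_graph(graph, c2, x), s, t, (c1, x))
        if p is None:
            lo = x + 1          # infeasible at this x: try a larger x
        else:
            best = p
            hi = x - 1          # feasible: a smaller x may still work
    return best
```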
It is important to note that in contrast to the algorithm in [7], the value of x does not affect
the complexity of our algorithm. Choosing x as small as possible may increase the number of
shortest paths as desired. However, this also decreases the number of paths for which w 0
i.e., the algorithm may not return a feasible path. The tradeoff between the value of x and the
associated performance improvement after scaling by x is shown in Figure 11. Here, we measure the
performance of the path selection algorithm by the success ratio (SR), which shows how often the
algorithm returns a feasible path [7]:

    SR = (number of routed connection requests) / (total number of connection requests),

where a routed connection request is one for which the algorithm returns a feasible path.

Figure 11: Performance of the path selection algorithm for different values of the scaling factor x (success ratio versus x, for our algorithm with scaling by x and for the optimal algorithm).
When the basic algorithm fails to return a feasible path, we scale the graph using different values of
x and run the algorithm again. The following lemma shows that a binary search argument can be used
to determine an appropriate x in the range [1, c_2]. Since the basic algorithm is executed for each value
of x, the overall computational complexity of the scaling extension is O(log c_2 · log B · (n log n + m)).
Note, however, that this extension is used only after the basic step with no scaling fails.
Lemma 4 If the algorithm cannot find a path p for which w'_2(p) ≤ x in the graph scaled by x, then
such a path cannot be found in a graph that is scaled by x/2.
Proof of Lemma 4: Let the graph G be scaled by x = 2r for some integer r, and let P be
the set of all possible paths in the scaled graph. Suppose the algorithm fails to return a path p for which
the sum of ⌈w_2(e) · 2r / c_2⌉ over the links e of p is at most 2r.
In order to prove the lemma, it suffices to show that the algorithm should never
search for a path p' for which the sum of ⌈w_2(e) · r / c_2⌉ over the links of p' is at most r when the links of the graph are scaled
down by x = r. Since ⌈w_2(e) · 2r / c_2⌉ ≤ 2 · ⌈w_2(e) · r / c_2⌉ for every link e,
we can rewrite the cost of any such p' as
the sum of ⌈w_2(e) · 2r / c_2⌉ over its links being at most 2 · r,
from which we conclude that p' would also have been a valid path in the graph scaled by 2r, contradicting the assumption above.
This, in turn, implies that no such path will be selected by the algorithm, and the claim is true.
5 Simulation Results and Discussion
In this section, we contrast the performance of our basic algorithm with Jaffe's second approximation
algorithm [24], Chen's heuristic algorithm in [7], and the first ε-optimal algorithm in [21]. In [24], Jaffe
presents two approximation algorithms for the MCP problem, each based on the minimization of a linear
combination of w_1 and w_2 (with different coefficient choices in the two algorithms). Of the two approximations,
the latter provides better performance, and hence it will be used in our comparisons. As a point
of reference, we also report the results of the exact (exponential-time) algorithm, which considers all
possible paths in the graph to determine whether there is a feasible path or not. The performance
has been measured for various network topologies. For brevity, we report the results for one of these
topologies under both homogeneous and heterogeneous links.
5.1 Simulation Model and Performance Measures
In our simulation model, a network is given as a directed graph. Link weights, the source and
destination of a connection request, and the constraints c 1 and c 2 are all randomly generated. We
use the success ratio (SR) to contrast the performance of various path selection algorithms. Another
important performance aspect is the computational complexity. Here, we measure the complexity
of path selection algorithms by the number of performed Dijkstra's iterations. While the algorithm
in [24] requires only one iteration, the algorithm in [7] always requires x 2 iterations, where x is
an adjustable positive integer. The number of iterations in our basic algorithm varies in the range
[1, log B], where B is the upper bound on the longest path according to one of the link weights. For our
algorithm, the average number of Dijkstra's iterations (ANDI) per connection request is measured
and compared with the deterministic number of Dijkstra's iterations in the other algorithms. It
should be noted that our algorithm runs at its worst-case complexity only when it ultimately fails; i.e.,
if the algorithm succeeds in finding a path, it will do so with far fewer Dijkstra's iterations than
log B. This is confirmed in the simulation results.
5.2 Results Under Homogeneous Link Weights
We consider the network topology in Figure 12, which has been modified from ANSNET [11] by
inserting additional links. Link weights are randomly selected with w_1(u, v) ~ uniform[0, 50] and
w_2(u, v) drawn uniformly at random with a maximum value of 200. The same network topology, link weights, and constraints were used
in [7].

Figure 12: An irregular network topology.

Table 1: SR performance of several path selection algorithms (homogeneous case). Columns: range of c_1 and c_2, Exact, Our Alg., Jaffe's, Chen's, ε-optimal.

For different ranges of c_1 and c_2, Table 1 shows the SR of various algorithms based on twenty
runs; each run is based on 2000 randomly generated connection requests. For our algorithm, the
ANDI for each range in Table 1 is given by 2.49, 2.63, 2.23, 1.61, and 1.21, respectively. The number of
feasible paths, and thus the SR, increases as the constraints get looser in the table. As this happens,
the ANDI in our algorithm decreases. The overall average complexity per connection request is about
two iterations of Dijkstra's algorithm. In terms of SR, our algorithm performs almost as well as
the exact one. The results show that our algorithm provides significantly superior performance to
Jaffe's approximation algorithm. To compare our algorithm with Chen's heuristic algorithm [7] and
the ε-optimal algorithm [21], we need to properly set the values of x and ε. In theory, as x goes
to infinity and as ε goes to 0, the performances of the corresponding algorithms approach that of
the exact one. However, since the complexities of these algorithms depend on x and ε, large values
for x and small values for ε clearly make the corresponding algorithms impractical. To get as close
as possible to the average computational complexity of our algorithm, we set x and ε accordingly;
with these settings, the performance of Chen's algorithm lags significantly behind
ours. Even if we increase x to ten, making the computational requirement of Chen's algorithm many
times that of our algorithm, its performance still lags behind ours. With the chosen ε, the ε-optimal
algorithm has roughly the same average complexity as ours, but ours achieves a 50% higher
SR. The ε-optimal algorithm uses a dynamic-programming approach that maintains a scaled cost
array of size (n/ε) at each node, and it can determine paths whose scaled cost is less than
(n/ε). When the values of the constraints are increased, more long paths become feasible, but the
ε-optimal algorithm cannot determine them unless ε gets very small. For example, the performance
of the ε-optimal algorithm becomes close to that of ours if ε is set to 1. But in this case, the
complexity of the ε-optimal algorithm is about ten times that of ours. (The complexity of the ε-optimal
algorithm is O(log log B (mn/ε + mn log log B)), compared to an average of about two iterations
of Dijkstra's algorithm for our algorithm; the complexity of Dijkstra's algorithm is O(n log n + m).
For the underlying network, 2(n log n + m) amounts to roughly 10% of log log B (mn/ε + mn log log B).)
In the above simulations, the two constraints are almost equally tight (i.e., E[w_1] and E[w_2],
relative to their respective constraints, have comparable values). We now examine the case when one constraint is much
tighter than the other. We use the same network and parameter ranges as before, except for c 1 whose
upper and lower limits are now set to 1/5 of their original values. The SRs of various algorithms
are shown in Table 2. Since the first constraint is now tighter than before, the SR values for all
algorithms, including the exact one, are smaller. Nonetheless, the same previously observed relative
performance trends among different algorithms in Table 1 are also observed here. Note that by
making one constraint much tighter than the other, the problem almost reduces to that of finding
the shortest path w.r.t. the tighter constraint. By dynamically changing the value of k, our algorithm
can adapt to the tightness of this constraint by giving it more weight (through k).
The above discussion simply says that relative to the exact algorithm, the performance of our
approximation algorithm does not change significantly by making one constraint tighter than the
other provided that the links are homogeneous. However, this is not the case when the links are
heterogeneous, as demonstrated in the next section.
Table 2: SR performance of several path selection algorithms when the first constraint is much tighter than the second (homogeneous case). Columns: range of c_1 and c_2, Exact, Our Alg., Jaffe's, Chen's, ε-optimal.
5.3 Performance Under Heterogeneous Links
The heterogeneity of links in a network may severely impact the performance of a path selection
algorithm. Hence, before drawing any general conclusions on the merits of our algorithm, we need
to examine its performance in a network with heterogeneous links. For this purpose, we start with
the same network topology in Figure 12. We then divide the network into three parts, as shown in
Figure 13.

Figure 13: Network topology with heterogeneous link weights.

The link weights w_1 and w_2 are determined as follows: if u is a node that belongs to
the upper part of the network, then w_1(u, v) ~ uniform[70, 85] and w_2(u, v) ~ uniform[1, 5]; if it
belongs to the middle part, then w_1(u, v) ~ uniform[45, 55] and w_2(u, v) ~ uniform[45, 55]; and
if it belongs to the lower part, then w_1(u, v) ~ uniform[1, 5] and w_2(u, v) ~ uniform[70, 85]. The
source node is randomly chosen from nodes 1 to 5. The destination node is randomly chosen from
nodes 22 to 30.
For different ranges of c_1 and c_2, Table 3 shows the SR of various algorithms based on twenty runs;
each run is based on 2000 randomly generated connection requests.

Table 3: SR performance of several path selection algorithms (heterogeneous case). Columns: range of c_1 and c_2, Exact, Ours, Jaffe's, Chen's.

For our algorithm, the ANDI values
for the five ranges of c_1 and c_2 in Table 3 are given by 4.03, 4.59, 4.55, 4.52, and 2.75, respectively.
It can be observed that in this case, as the constraints become looser, the difference between the
SR of any of the tested algorithms and the SR of the exact algorithm increases significantly (see
the fifth row in the table). One can attribute this performance degradation to the linearity of the
cost functions used in these algorithms, which favors links with homogeneous characteristics. Our
algorithm still provides better performance than Jaffe's approximation algorithm. To achieve about
the same average computational complexity as our algorithm, we set x = 3 in Chen's algorithm and
chose ε accordingly in the ε-optimal algorithm. With these values, the SRs of these algorithms are
observed to lag behind ours.
6 Conclusions and Future Work
QoS-based routing subject to multiple additive constraints is an NP-complete problem that cannot be
exactly solved in polynomial time. To address this problem, we presented an efficient approximation
algorithm using a binary search strategy. Our algorithm is supported by performance bounds that
reflect the effectiveness of the algorithm in finding a feasible path. We studied the performance of
the algorithm via simulations under both homogeneous and heterogeneous link weights. Our results
show that at the same level of computational complexity, the proposed algorithm outperforms existing
ones. We also presented two extensions to our basic algorithm that can be used
to further improve its performance at little extra computational cost. The first extension, which
is motivated by the presented theoretical bounds, attempts to find the closest feasible path to a
constraint. The other extension (i.e., scaling) improves the likelihood of finding a feasible path by
perturbing the linearity of the search process (or equivalently, changing the relative locations of the
paths in the parameter space). Our basic approximation algorithm runs a hierarchical version of
Dijkstra's algorithm up to log B times, where B is an upper bound on the longest path w.r.t. one
of the link weights; the specific weight used to define B depends on whether the algorithm is in Phase 1 or
Phase 2. When scaling is used, the algorithm runs Dijkstra's algorithm up to log c_2 · log B times.
These worst-case complexities are rarely reached in practice. In fact, simulation results indicate much
lower average complexities. The space complexity of our algorithm is O(n).
The path selection problem has been investigated in this paper assuming a flat network topology
and complete knowledge of the network state. In practice, the true state of the network is not
available to every source node at all times due to network dynamics, aggregation of state information
(in hierarchical networks), and latencies in the dissemination of state information. Our future work
will focus on investigating the MCP problem in the presence of inaccurate state information and
evaluating the tradeoffs among accurate path selection, topology aggregation (for spatial scalability),
and the frequency of advertisements (for temporal scalability). Another aspect that we plan to
investigate is that of renegotiation. When our algorithm fails to return a feasible path, it always
returns a path which is close to satisfying the given constraints. Hence, we plan to investigate how
such a path can be advantageously used in the renegotiation process to achieve further performance
improvements.
--R
Network Flows: Theory
ATM internetworking.
Shortest chain subject to side constraints.
QoS routing mechanisms and OSPF extensions.
Quality of service based routing: A performance perspective.
An approximation algorithm for combinatorial optimization problems with two parameters.
On finding multi-constrained paths
An overview of quality-of-service routing for the next generation high-speed networks: Problems and solutions
Strategic directions in networks and telecommunications.
Internetworking with TCP/IP
Introduction to Algorithms.
A framework for QoS-based routing in the Internet
A multiple quality of service routing algorithm for PNNI.
Finding the k shortest paths.
the ATM Forum.
Computers and Intractability
Search space reduction in QoS routing.
A dual algorithm for the constrained shortest path problem.
Approximation schemes for the restricted shortest path problem.
A delay-constrained least-cost path routing protocol and the synthesis method
ATM routing algorithms with multiple QOS requirements for multimedia inter- networking
Algorithms for finding paths with multiple constraints.
QoS based routing for integrated multimedia services.
Routing subject to quality of service constraints in integrated communication networks.
On path selection for traffic with bandwidth guarantees.
Routing traffic with quality-of-service guarantees in integrated services networks
Routing with end-to-end QoS guarantees in broadband networks
QoS based routing algorithm in integrated services packet networks.
A distributed algorithm for delay-constrained unicast routing
Solving k-shortest and constrained shortest path problems efficiently
On the complexity of quality of service routing.
the design and evaluation of routing algorithms for real-time channels
Internet QoS: a big picture.
A new distributed routing algorithm for supporting delay-sensitive applications
--TR
Solving <italic>k</>-shortest and constrained shortest path problems efficiently
Introduction to algorithms
Internetworking with TCP/IP (2nd ed.), vol. I
Network flows
Approximation schemes for the restricted shortest path problem
Strategic directions in networks and telecommunications
Quality of service based routing
On the complexity of quality of service routing
QoS routing in networks with inaccurate information
Routing with end-to-end QoS guarantees in broadband networks
Computers and Intractability
A Delay-Constrained Least-Cost Path Routing Protocol and the Synthesis Method
A Distributed Algorithm for Delay-Constrained Unicast Routing
QoS based routing algorithm in integrated services packet networks
On path selection for traffic with bandwidth guarantees
An Approximation Algorithm for Combinatorial Optimization Problems with Two Parameters
Search Space Reduction in QoS Routing
--CTR
Anthony Stentz, CD*: a real-time resolution optimal re-planner for globally constrained problems, Eighteenth national conference on Artificial intelligence, p.605-611, July 28-August 01, 2002, Edmonton, Alberta, Canada
Gang Cheng , Nirwan Ansari, Rate-distortion based link state update, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.17, p.3300-3314, 5 December 2006
Xin Yuan, Heuristic algorithms for multiconstrained quality-of-service routing, IEEE/ACM Transactions on Networking (TON), v.10 n.2, April 2002
Zhenjiang Li , J. J. Garcia-Luna-Aceves, A distributed approach for multi-constrained path selection and routing optimization, Proceedings of the 3rd international conference on Quality of service in heterogeneous wired/wireless networks, August 07-09, 2006, Waterloo, Ontario, Canada
Zhenjiang Li , J. J. Garcia-Luna-Aceves, Finding multi-constrained feasible paths by using depth-first search, Wireless Networks, v.13 n.3, p.323-334, June 2007
Gargi Banerjee , Deepinder Sidhu, Comparative analysis of path computation techniques for MPLS traffic engineering, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.40 n.1, p.149-165, September 2002
Andrea Fumagalli , Marco Tacca, Differentiated reliability (DiR) in wavelength division multiplexing rings, IEEE/ACM Transactions on Networking (TON), v.14 n.1, p.159-168, February 2006
Wei Liu , Wenjing Lou , Yuguang Fang, An efficient quality of service routing algorithm for delay-sensitive applications, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.47 n.1, p.87-104, 14 January 2005
Zhenjiang Li , J. J. Garcia-Luna-Aceves, Loop-free constrained path computation for hop-by-hop QoS routing, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.11, p.3278-3293, August, 2007
Turgay Korkmaz , Marwan Krunz, Bandwidth-delay constrained path selection under inaccurate state information, IEEE/ACM Transactions on Networking (TON), v.11 n.3, p.384-398, June | scalable routing;multiple constrained path selection;QoS routing |
339659 | Energy-driven integrated hardware-software optimizations using SimplePower. | With the emergence of a plethora of embedded and portable applications, energy dissipation has joined throughput, area, and accuracy/precision as a major design constraint. Thus, designers must be concerned with both optimizing and estimating the energy consumption of circuits, architectures, and software. Most of the research in energy optimization and/or estimation has focused on single components of the system and has not looked across the interacting spectrum of the hardware and software. The novelty of our new energy estimation framework, SimplePower, is that it evaluates the energy considering the system as a whole rather than just as a sum of parts, and that it concurrently supports both compiler and architectural experimentation. We present the design and use of the SimplePower framework that includes a transition-sensitive, cycle-accurate datapath energy model that interfaces with analytical and transition sensitive energy models for the memory and bus subsystems, respectively. We analyzed the energy consumption of ten codes from the multidimensional array domain, a domain that is important for embedded video and signal processing systems, after applying different compiler and architectural optimizations. Our experiments demonstrate that early estimates from the SimplePower energy estimation framework can help identify the system energy hotspots and enable architects and compiler designers to focus their efforts on these areas. | INTRODUCTION
With more than 95% of current microprocessors going into
embedded systems, the need for low power design has become
vital. Even in environments not limited by battery
life, power has become a major constraint due to concerns
about circuit reliability and packaging costs. The increasing
need for low power systems has motivated a large body
of research on low power processors. Most of this research,
however, focuses on reducing the energy 1 in isolated subsystems
(e.g. the processor core, the on-chip memory, etc.)
rather than the system as a whole [7]. The focus of our research
is to provide insight into the energy hotspots in the
system and to evaluate the implications of applying a combination
of architectural and software optimizations on the
overall energy consumption.
In order to perform this research, architectural-level power
estimation tools that provide a fast evaluation of the energy
impact of various optimizations early in the design cycle are
essential [2]. However, only prototype research tools and
methodologies exist to support such high-level estimation.
In this paper, we present the design of an architectural-level
energy estimation framework, SimplePower. To our knowledge, this is the first framework with a capability to evaluate
the integrated impact of hardware and software optimizations
on the overall system energy. In contrast to coarse
grain current measurement-based techniques [26; 17], our
new tool is cycle-accurate, and provides a fine-grained energy
consumption estimate of the processor core (currently a
five-stage pipelined instruction set architecture (ISA)) while
also accounting for the energy consumed by the memory and
bus subsystems. SimplePower also leverages from the SimpleScalar
toolset [3] as it is executes the integer subset of
SimpleScalar ISA.
The memory subsystem is the dominant source of power
dissipation in various video and signal processing embedded
systems [6]. Existing low power work has focused on addressing
this problem through the design of energy efficient
memory architectures and power-aware software [12; 25; 23].
However, most of these efforts do not study the influence on
the energy consumption of the other system components and
even fewer consider the integrated impact of the hardware
and software optimizations. It is important to evaluate the
influence of optimizations on the overall system energy savings
and the power distribution across different components
of the system. Such a study can help identify the changes
1 The dynamic energy consumed by CMOS circuits is given by E = α · C · V², where α is the switching activity on the lines, C is the capacitive load, and V is the supply voltage. We do not consider the impact of leakage power.
in system energy hotspots and enable the architects and
compiler designers to focus their efforts on addressing these
areas.
This study embarks on this ambitious goal, specifically trying
to answer the following questions:
• What is the energy consumed across the different parts of
the system? Is it possible to evaluate this energy distribution
in a fast and accurate fashion for different applications?
• What is the effect of the state-of-the-art performance-oriented
compiler optimizations on the overall system energy
consumption and on each individual system compo-
nent? Does the application of these optimizations cause a
change in the energy hotspot of the system?
• What is the impact of power and performance-oriented
memory system modifications on the energy consumption?
How do compiler optimizations influence the effectiveness of
these modifications?
• What is the impact of advances in process technology on
the energy breakdown of the system? Can emerging new
technologies (e.g., embedded DRAMs [22]) result in major
paradigm shifts in the focus of architects and compiler writers?
To our knowledge, there has been no prior effort that has
extensively studied all these issues in a unified framework
for the entire system. This paper sets out to answer some
of the above questions using codes drawn from the multi-dimensional
array domain, a domain that is important for
signal and video processing embedded systems.
The rest of this paper is organized as follows. The next
section presents the design of our energy estimation frame-
work, SimplePower. Section 3 presents the distribution of
energy across the different system components using a set
of benchmark codes. The influence of performance-oriented
compiler optimizations on system energy is examined in Section
4. Section 5 investigates the influence of energy-efficient
cache architectures on system power. Section 6 studies the
implications of emerging memory technologies on system en-
ergy. Finally, Section 7 summarizes the contributions of this
work and outlines directions for future research.
2. SIMPLEPOWER:ANENERGYESTIMA-
TION FRAMEWORK
Answering the questions posed in Section 1 requires tools
that allow the architect and compiler writer to estimate the
energy consumed by the system. The energy estimation
framework that we have developed for this purpose, Sim-
plePower, is depicted in Figure 1. For the purposes of this
work, we are using a system consisting of the processor core,
on-chip instruction and data caches, off-chip memory, and
the interconnect buses between the core and the caches and
between the caches and the off-chip memory. What we need
in our framework are tools that allow us to estimate the
energy consumed by each of the modules in the system.
Figure 1: SimplePower energy estimation framework. It consists of the compilation framework (SimpleScalar GCC, GAS, and GLD, with high-level, low-level, and RT-level compiler optimizations) and the energy simulator (the SimplePower core, the cache/bus simulator, the power estimation interface, and technology-dependent switch capacitance tables), which together capture the energy consumed by a five-stage pipelined instruction set architecture, the memory system, and the buses.

Analytical models for memory components have been used
successfully by several researchers [13; 25] to study the power
tradeoffs of different cache/memory configurations. These
models attempt to capture analytically the energy consumed
by the memory address decoder(s), the memory core, the
read/write circuitry, sense amplifiers, and cache tag match
logic. Some of these models can also accommodate low
power cache and memory optimizations such as cache block
buffering [13], cache subbanking [25; 13], bit-line segmentation
[12], etc. These analytical models estimate the energy
consumed per access, but do not accommodate the energy
differences found in sequences of accesses. For example,
since energy consumption is impacted by switching activ-
ity, two sequential memory accesses may exhibit different
address decoder energy consumption. However, simple analytical
energy models for memories have proved to be quite
reliable [13]. This is the approach used in SimplePower to
estimate the energy consumed in the memories.
The energy consumption of the buses depends on the switching
activity on the bus lines and the interconnect capacitance
of the bus lines (with off-chip buses having much larger
capacitive loads than on-chip buses). When the switching
activity is captured by the energy model, we refer to the
technique as a transition-sensitive approach (in contrast to,
for example, the analytical model used for the memory sub-
system). The energy model used by SimplePower for system
buses is transition-sensitive. A wide variety of techniques
have been proposed to reduce system level interconnect energy
ranging from circuit level optimization such as using
low-swing or charge recovery buses, to architectural level
optimizations such as using segmented buses, to algorithmic
level optimizations such as using signal encoding (en-
coding the data in such a way as to reduce the switching
activity on the buses) [11]. As technology scales into the
deep sub-micron, chip sizes grow, and multiprocessor chip
architectures become the norm, system level interconnect
structures will account for a larger and larger portion of the
chip energy and delay. In this paper, we include the energy
consumed in the buses in the memory system energy, unless
specified otherwise.
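In essence, the transition-sensitive bus model counts, cycle by cycle, the lines that toggle and charges each toggle with the line's switched capacitance. A minimal sketch is shown below; the 0.5 pF per on-chip line and the 3.3 V supply are the illustrative values used elsewhere in this paper, not measured parameters of a particular design.

```python
def bus_transition_energy(values, width=32, c_line=0.5e-12, vdd=3.3):
    """Estimate the energy (in Joules) dissipated on a bus carrying the given
    sequence of integer values, charging C * V^2 per toggled line.
    Illustrative parameters: 32-bit bus, 0.5 pF per line, 3.3 V supply."""
    energy = 0.0
    for prev, curr in zip(values, values[1:]):
        toggles = bin((prev ^ curr) & ((1 << width) - 1)).count('1')
        energy += toggles * c_line * vdd * vdd
    return energy

# Example: a short address-bus trace.
print(bus_transition_energy([0x1000, 0x1004, 0x1008, 0x2000]))
```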
The final system module to be considered is the processor
core. To support the architecture and compiler optimization
research posed in Section 1, the energy estimation of the
core must be transition-sensitive. At this point in the design
process, in order to support "what-if" experimentation,
the processor core is specified only at the architectural-level
(RTL level). However, without the structural capacitance
information that is part of a gate-level design description
(obtained via time consuming logic synthesis) and the inter-connect
capacitance information that is part of a physical-level
design description (obtained via very time consuming
VLSI design), it is difficult to obtain the capacitance values
needed to estimate energy consumption. SimplePower solves
this dilemma by using predefined, transition-sensitive models
for each functional unit to estimate the energy consumption
of the datapath. This approach was first proposed by
Mehta, Irwin and Owens [20]. These transition-sensitive
models contain switch capacitances for a functional unit for
each input transition obtained from VLSI layouts and extensive
simulation. Once the functional unit models
have been built, they can be reused for many different
architectural configurations. SimplePower is, at this time,
only capturing the energy consumed by the core's datapath.
Developing transition-sensitive models for the control path
would be extremely difficult. One way to model control path
power would be analytically. In any case, for the SimplePower
processor core, the energy consumed by the datapath
is much larger than the energy consumed by the control logic
due to the relatively simple control logic. The architecture
simulated by SimplePower in this paper is the integer ISA
of SimpleScalar, a five stage RISC pipeline. Functional unit
energy models (for 2.0μ, 0.8μ, and 0.35μ technology) have
been developed for various units including flip-flops, adders,
register files, multipliers, ALUs, barrel shifters, multiplex-
ors, and decoders.
SimplePower outputs the energy consumed from one execution
cycle to the next. It mines the transition sensitive
energy models provided for each functional unit and sums
them to estimate the energy consumed by each instruction
cycle. The size of these energy tables could, however, become
very large as the number of inputs to the bit-dependent
functional units increase (for units like registers, each bit positions
switching activity is independent and thus one small
table characterizing one flip-flop is sufficient). To solve the
table size problem, we partition the functional units into
smaller sub-modules. For example, a register file is partitioned
into five major sub-modules: five 5:32 decoders,
word-line drivers, write data drivers, read sense-amplifiers,
and a 32 × 32 cell array. Energy tables were constructed for
each submodule. For example, a 1,024 entry table indexed
by the pair of five current and five previous register select
address bits was developed for the register file decoder com-
ponent. This table is then shared by all the five decoders
in the register file. Since the write data drivers, read sense
amplifiers, word-line drivers and all array cells are all bit-
independent submodules, their energy tables are quite small.
For the 32 × 32 5-port register file, our power estimation
approach took much less than 0.1 seconds for each input
transition as opposed to the 556.42 seconds required for circuit
level simulation using HSPICE. The machine running
the HSPICE simulation and our simulator is a Sun Ultra-10
with 640 MBytes memory. Our transition-sensitive modeling
approach has been validated to be accurate (average
error rate of 8.98%) using actual current measurements of a
commercial DSP architecture [9].
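Conceptually, each functional-unit model is a lookup table of switch capacitances indexed by the (previous input, current input) pair of the sub-module. The sketch below shows this table-driven accounting in a deliberately simplified form; the table contents and the toy decoder example are placeholders, not the actual characterization data extracted from the layouts.

```python
class TransitionSensitiveModule:
    """Energy model for one sub-module: a table mapping
    (previous_inputs, current_inputs) -> switched capacitance (F)."""
    def __init__(self, cap_table, vdd=3.3):
        self.cap_table = cap_table
        self.vdd = vdd
        self.prev = None

    def step(self, inputs):
        """Apply a new input vector and return the energy of this transition."""
        if self.prev is None:
            self.prev = inputs          # first cycle: no transition yet
            return 0.0
        cap = self.cap_table.get((self.prev, inputs), 0.0)
        self.prev = inputs
        return cap * self.vdd ** 2

# Toy example: a 2-bit sub-module with made-up capacitance entries.
decoder = TransitionSensitiveModule({
    ((0, 0), (0, 1)): 1.2e-13,
    ((0, 1), (1, 1)): 1.5e-13,
})
total = 0.0
for vec in [(0, 0), (0, 1), (1, 1)]:
    total += decoder.step(vec)
print(total)
```

The per-cycle datapath estimate is then simply the sum of such per-module transition energies over all sub-modules that are active in that cycle.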
As mentioned earlier, SimplePower currently uses a combination
of analytical and transition-sensitive energy models
for the memory system. The overall energy of the the memory
system is given by
The energy consumed by the instruction cache (Icache),
and by the data cache (Dcache), EDcache , is evaluated
using an analytical model that has been validated to
be accurate (within 2.4% error) for conventional cache systems
[13; 24]. We extended this model to consider the energy
consumed during writes as well and have also parameterized
the cache models to capture different architectural optimiza-
tions. EBuses includes the energy consumed in the address
and data buses between the Icache/Dcache and the datap-
ath. It is evaluated by monitoring the switching activity on
each of the bus lines assuming a capacitive load of 0.5pF
per line. The energy consumed by the I/O pads and the
external buses to the main memory from the caches, EPads ,
is evaluated similarly for a capacitive load of 20pF per line.
The main memory energy, EMM , is based on the model in
[24] and assumes a per main memory access energy (referred to as Em in the rest of the paper) of 4.95 × 10^-9 J based on the
data for the Cypress CY7C1326-133 SRAM chip.
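Putting these pieces together, the memory-system estimate accumulates per-access cache energies (from the analytical model), per-toggle bus and pad energies, and a fixed per-access main-memory energy, i.e., E_memory = E_Icache + E_Dcache + E_Buses + E_Pads + E_MM. The sketch below shows the bookkeeping only; the per-access cache energies and the event counts are illustrative parameters, while the 0.5 pF, 20 pF, and 4.95e-9 J figures are the values quoted above.

```python
def memory_system_energy(stats,
                         e_icache_access, e_dcache_access,
                         em=4.95e-9, c_onchip=0.5e-12, c_pad=20e-12, vdd=3.3):
    """E_memory = E_Icache + E_Dcache + E_Buses + E_Pads + E_MM.
    stats carries event and toggle counts collected during simulation;
    e_icache_access / e_dcache_access come from the analytical cache model."""
    e_icache = stats['icache_accesses'] * e_icache_access
    e_dcache = stats['dcache_accesses'] * e_dcache_access
    e_buses = stats['onchip_bus_toggles'] * c_onchip * vdd ** 2
    e_pads = stats['pad_toggles'] * c_pad * vdd ** 2
    e_mm = stats['main_memory_accesses'] * em
    return e_icache + e_dcache + e_buses + e_pads + e_mm

# Illustrative use with made-up counts:
print(memory_system_energy(
    {'icache_accesses': 1_000_000, 'dcache_accesses': 300_000,
     'onchip_bus_toggles': 5_000_000, 'pad_toggles': 400_000,
     'main_memory_accesses': 20_000},
    e_icache_access=1.0e-10, e_dcache_access=1.2e-10))
```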
While the SimplePower framework models the influence of
the clock on all components of the architecture (i.e., it assumes
clock gating is implemented), it does not capture the
energy consumed by the clock generation and clock distribution
network. Existing clock energy estimation models
[19; 8] require clock loading and physical dimensions of the
design that can be obtained only after physical design and
are difficult to estimate in the absence of structural infor-
mation. However, we realize that this is an important additional
component of the system energy consumption and
we plan to address this in future research.
3. ENERGY DISTRIBUTION
With the emergence of energy consumption as a critical constraint
in system design, it is essential to identify the energy
hotspots of the system early in the design cycle. There has
been significant work on estimating and optimizing the system
power [7]. However, many have focused on estimat-
ing/optimizing only specific components of the system and
most do not capture the integrated impact of circuit, architectural
and software optimizations. Further, most existing
high-level RTL energy estimation techniques provide a
coarse grain of measurement resulting in 20-40% error relative
to that of a transistor level estimator [2]. By contrast,
SimplePower provides an integrated, cycle-accurate energy
estimation mechanism that captures the energy consumed
Program Source # of Arrays Input Size (KB) Instruction Count Dcache Miss Rates
dtdtz (aps) Perfect Club 17 1,605 42,119,337 0.135
bmcm (wss) Perfect Club 11 126 89,539,244 0.105
psmoo (tfs) Perfect
eflux (tfs) Perfect Club 5 297 12,856,306 0.114
amhmtm (wss) Perfect
Table 1: Programs used in the experiments. Dcache miss rates are for 1K direct-mapped caches with 32-byte line sizes. Instruction count is the dynamic instruction count.
Figure 2: Energy distribution (%) of (a) the major energy consuming datapath components (register file, pipeline registers, arithmetic units) and (b) the pipeline stages (fetch, decode, execute, memory, writeback), across the benchmarks. The memory pipeline stage energy consumption does not include that of the ICache and DCache.

Figure 3: Energy distribution (%) in the memory system (buses, Icache, Dcache, I/O pads, Imemory, Dmemory) across the benchmarks: (a) 1K 4way Dcache and (b) 8K 4way Dcache. Imemory and Dmemory are the energies consumed in accessing the main memory for instructions and data, respectively.
Program | Original (Unoptimized): Datapath Energy (mJ); Dcache size; Memory System Energy (mJ) for 1-way, 2-way, 4-way, 8-way | Optimized: Datapath Energy (mJ); Dcache size; Memory System Energy (mJ) for 1-way, 2-way, 4-way, 8-way
tomcatv 3.9 4K 75.8 171.1 172.7 175.4 4.2 4K 52.3 34.6 34.9 35.4
btrix 30.2 4K 1,023.2 513.6 432.6 371.5 28.8 4K 1,093.0 718.8 641.1 593.2
mxm 34.3 4K 1,123.9 522.7 405.3 267.5 83.7 4K 342.0 173.7 159.5 174.6
8K 1,059.3 377.8 240.6 245.0 8K 300.5 192.6 196.4 199.6
vpenta 1.6 4K 109.6 112.9 113.6 113.8 1.9 4K 77.2 60.6 58.9 59.0
8K 78.4 80.9 82.7 87.4 8K 66.2 57.8 57.4 57.6
adi 4.3 4K 166.9 136.7 136.4 136.9 5.2 4K 90.5 77.3 83.1 77.0
1K 2,149.3 1,402.8 1,146.1 1,064.8 1K 1,927.4 823.1 516.2 431.6
dtdtz 27.5 4K 880.5 857.7 815.1 861.2 31.2 4K 428.1 180.2 180.1 136.8
bmcm 59.2 4K 1,007.6 654.2 536.1 385.9 90.0 4K 724.4 416.8 289.8 227.5
psmoo 11.6 4K 341.9 340.7 328.9 343.9 16.1 4K 125.7 102.4 88.2 89.6
341.3 267.5 267.7 269.2 8K 91.8 81.1 81.5 84.1
eflux 8.6 4K 383.0 364.9 368.6 379.6 10.1 4K 226.1 192.9 192.7 193.5
amhmtm 59.8 4K 623.7 271.1 259.9 265.1 66.5 4K 748.9 303.3 287.0 300.0
8K 578.3 308.4 301.9 309.8 8K 551.3 217.5 232.7 265.3
447.2 403.7 403.2 411.8 16K 368.2 290.5 287.6 297.6
Table 2: Energy consumption for various Dcache configurations. For all the cases, an 8K direct-mapped Icache, 32-byte line sizes, a writeback policy, and a core based on 0.8μ, 3.3V technology are used.
in the different components of the system.
In this section, we present the energy characteristics of ten
benchmark codes written in the C language 2 (shown in Table
1) from the multidimensional array domain. An important
characteristic of these codes is that they access large
arrays using nested loops. The applications run on energy-constrained
signal and video embedded processing systems
exhibit similar characteristics. Since SimplePower currently
works only with integer data types, floating point data accessed
by these codes were converted to operate on integer
data. In particular, memory access patterns (in terms of
temporal and spatial locality) do not change. In order to
limit the simulation times we scaled down the input sizes;
however, all the benchmarks were run to completion. The
experimental cache sizes (1K-16K) used in our study are
relatively small as our focus is on resource-constrained embedded
systems.
The energy consumed by the system is divided into two
parts: datapath energy and memory system energy. The ma-
2 Original codes are in Fortran and were converted into C
by paying particular attention to the original data access
patterns.
jor energy consuming components of the datapath are the
register file, pipelined registers, the functional units (e.g.
ALU, multiplier, divider), and datapath multiplexers. The
memory system energy includes the energy consumed by the
Icache and Dcache, the address and data buses, the address
and data pads and the off-chip main memory. Table 2 provides
the energy consumption (in mJ) of our benchmarks for
the datapath and memory system for various Dcache con-
figurations. For all the cases in this paper, an 8K direct
mapped Icache, line sizes of 32 bytes (for both Dcache and
Icache), writeback cache policy, and a core based on 0.8μ,
3.3V technology were used. We also present only a single
datapath energy value for the different configurations due
to the efficient stall power reduction techniques (e.g., clock
gating on the pipeline registers) employed in the datapath.
With the aggressive clock gating assumed by SimplePower,
the energy consumed during stall cycles was observed to be
insignificant for our simulations. For example, tomcatv expends
a maximum of 1% of the total datapath energy on
stalls for all cache configurations studied.
Figure 4: Energy distribution (%) of (a) the major energy consuming datapath components (register file, pipeline registers, arithmetic units) and (b) the pipeline stages (fetch, decode, execute, memory, writeback) after applying code transformations, across the benchmarks. The memory pipeline stage energy consumption does not include that of the ICache and DCache.

We observe that the datapath energy consumption ranges
from 1.577mJ to 59.776mJ for the various codes, determined
by the dynamic instruction count and the switching activity in the datapath. Compared to the memory system
energy, the datapath energy is an order or two smaller in
magnitude. This result corroborates the need for extensive
research on optimizing the memory system power [25; 6; 13;
7]. Next, we zoom-in on the major energy consuming components
of the datapath. It is observed from Figure 2(a)
that the pipeline registers and register file form the energy
hotspots in the datapath contributing 58-70% of the overall
datapath energy. The extensive use of pipelining in DSP
data paths to improve performance [1] and facilitate other
circuit optimization such as voltage scaling will exacerbate
the pipeline register energy consumption. Also, larger and
multiple-port register files required to support multiple issue
machines will increase the register file energy consumption
further. The core energy distribution is also found to be
relatively independent of the codes being analyzed. This is
undoubtedly impacted by only simulating integer data op-
erations. The energy consumed by each stage of the pipeline
is calculated by SimplePower and is shown in Figure 2(b).
The decode stage energy does not include control logic energy
consumption since it is not modeled by SimplePower.
The pipeline register is the main contributer to the energy
consumed in the memory stage, since ICache and DCache
energy consumption is not included. The execution stage
of the pipeline that contains the arithmetic units is the major
energy consumer in the entire datapath, since the register
file energy consumption is split between the decode and
writeback stages.
The memory system energy consumption generally decreases
as capacity and conflict misses are reduced by increasing the DCache
size or associativity (see Table 2). Yet, in thirty-
seven out of the fifty cases, when we move from a 4way to
8way DCache, the memory system energy consumption in-
creases. A similar trend is observed in fifteen out of forty
cases when we move from an 8K to 16K Dcache. Moving
to a larger cache size or higher associtivities increases the
energy consumption per access. However, for many cases,
this per access cost is amortized by the energy reduction
due to a fewer number of accesses to the main memory.
Of course, if the numbers of misses/hits are equal, using
a less sophisticated cache leads to lower energy consumption.
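This per-access versus miss-count tradeoff can be expressed with a back-of-the-envelope model: the cache contributes accesses × E_access(config), while each miss adds the (much larger) main-memory access energy, so a bigger or more associative cache pays off only when the miss reduction outweighs its higher per-access energy. The sketch below makes that comparison explicit; the per-access energies and miss rates are illustrative placeholders, not outputs of the analytical model.

```python
def cache_config_energy(accesses, miss_rate, e_access, e_miss=4.95e-9):
    """Toy estimate: cache access energy plus main-memory energy for misses."""
    return accesses * e_access + accesses * miss_rate * e_miss

accesses = 10_000_000
small = cache_config_energy(accesses, miss_rate=0.10, e_access=0.8e-10)  # e.g. a small direct-mapped cache
large = cache_config_energy(accesses, miss_rate=0.02, e_access=1.5e-10)  # e.g. a larger set-associative cache
print(small, large)   # here the larger cache wins despite its higher per-access cost
```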
Figure
3(a) shows the energy distribution in the memory
system components for a 1K 4way Dcache configuration
where the main memory energy consumption dominates
due to the large number of Dcache misses. For btrix and
amhmtm, the data accesses per instruction are the smallest.
In amhmtm, the majority of instruction accesses are satisfied
from the Icache resulting in a more significant Icache energy
consumption, whereas btrix exhibits a relatively poor instruction
cache locality (the number of Icache misses is 100
times more than the next significant benchmark) resulting
in increased energy consumption in main memory. When we
increase the data cache size, the majority of data accesses
are satisfied from the data cache. Hence, the overall contribution
of the Icache and Dcache becomes more significant
as observed from Figure 3(b).
SimplePower provides a comprehensive framework for identifying
the energy hotspots in the system and helps the hardware
and software designers focus on addressing these bot-
tlenecks. The rest of this paper evaluates software and architectural
optimizations targeted at addressing the energy
hotspot of the system, namely, the energy consumed in data
accesses.
4. IMPACTOFCOMPILEROPTIMIZATIONS
To evaluate the impact of compiler optimizations on the
overall energy consumption, we used a high-level compilation
framework based on loop (iteration space) and data (ar-
ray layout) transformations. For this study, the framework
proposed in [14] was enhanced with iteration space tiling,
loop fusion, loop distribution, loop unrolling, and scalar re-
placement. Thus, our compiler is able to apply a suitable
combination of loop and data transformations for a given
input code, with an optimization selection criteria similar
to that presented in [14]. Our enhanced framework takes as
input a code written in C and applies these optimizations
(primarily) to improve temporal and spatial data locality.
The tiling technique employed is similar to one explained in
[27] and selects a suitable tile size for a given code, input
size, and cache configuration. The loop unrolling algorithm
carefully weighs the advantages of increasing register reuse
and the disadvantages of larger loop nests in selecting an
optimal degree of unrolling and is similar in spirit to the
technique discussed in [5].
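To make the flavor of these transformations concrete, the snippet below contrasts a naive loop nest with a tiled and lightly unrolled version of the same computation. This is an illustrative example in the spirit of the optimizations described here, not output produced by the framework; the tile size T and the unroll factor are placeholders rather than the compiler-selected values.

```python
def mm_naive(A, B, C, n):
    """C += A * B with a straightforward i-j-k loop nest (poor cache reuse)."""
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]

def mm_tiled(A, B, C, n, T=32):
    """Same computation with iteration-space tiling (tile size T is a placeholder),
    scalar replacement of C[i][j], and a small unroll of the innermost loop."""
    for ii in range(0, n, T):
        for jj in range(0, n, T):
            for kk in range(0, n, T):
                k_end = min(kk + T, n)
                for i in range(ii, min(ii + T, n)):
                    for j in range(jj, min(jj + T, n)):
                        s = C[i][j]
                        k = kk
                        while k + 1 < k_end:      # unrolled by 2
                            s += A[i][k] * B[k][j] + A[i][k + 1] * B[k + 1][j]
                            k += 2
                        if k < k_end:             # remainder iteration
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
```

Tiling keeps the working set of each block within the data cache, which is precisely the effect that reduces the main-memory energy component in the results that follow.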
Figure 5: Energy distribution in the memory system (buses, Icache, Dcache, I/O pads, Imemory, Dmemory) after applying code transformations, across the benchmarks: (a) 1K 4way Dcache and (b) 8K 4way Dcache. Imemory and Dmemory are the energies consumed in accessing the main memory for instructions and data, respectively.
for instructions and data, respectively.
There have been numerous studies showing the effectiveness
of these optimizations on performance (e.g., [21; 28]); their
impact on energy consumption of different parts of a computing
system, however, remains largely unstudied. This
study is important because these optimizations are becoming
popular in embedded systems, keeping pace with the
increased use of high-level languages and compilation techniques
on these systems [18]. Through a detailed analysis of
the energy variations brought about by these techniques, architects
can see which components are energy hotspots and
develop suitable architectural solutions to account for the
influence of these optimizations.
Our expectation is that most compiler optimizations (in particular
when they are targeted at improving data locality)
will reduce the overall energy consumed in the memory subsystem.
This is a side effect of reducing the number of off-chip
data accesses and satisfying the majority of the references
from the cache. Their impact on the energy consumed in the
datapath, on the other hand, is not as clear. As observed in
Section 3, the energy consumed in the memory subsystem
is much higher than that consumed in the datapath. While
this might be true for unoptimized codes (due to the large
number of off-chip accesses), it would be interesting to see
whether this still holds after the locality-enhancing compiler
optimizations.
Table 2 also shows the resulting datapath and memory system
energy consumption as a result of applying our compiler
transformations. The most interesting observation is
that the optimizations increase the datapath power for all
codes except btrix. This increase is due to more complex
loop structures and array subscript expressions as a result of
the optimizations. Since, in optimizing btrix, the compiler
used only linear loop transformations (i.e., the transformations
that contain only loop permutation, loop reversal, and
loop skewing [28]), the datapath energy did not increase.
Next, we observe that the reduction in the memory system
energy makes the datapath energy more significant. For example, after the optimizations, in the mxm benchmark, the
datapath energy constitutes 29% of the overall system energy
for an 8K 8way cache configuration (as compared to 12.3%
before the optimizations). In fact, the datapath energy becomes
larger than that consumed in the memory system if
we do not consider the energy expended in instruction ac-
cesses. This is significant as our optimizations were targeted
only at improving the data cache performance. Thus, it is
important for architects to continue to look at optimizing
the datapath energy consumption rather than focus only on
memory system optimizations.
The compiler optimizations had little effect on the energy
distribution on the datapath components and pipeline stages
as shown in Figures 4(a) and (b). However, the energy distribution
(shown in Figure 5) in the memory system shows
distinct differences from the unoptimized (original) versions
(see
Figure
3). In the optimized case, the relative contribution
of the main memory is significantly reduced due to
more data cache hits. Hence, we observe that the contribution
of the Icache and Dcache energy consumption becomes
more significant for all optimized codes that we used.
Thus, energy-efficient Icache and Dcache architectures become
more important when executing the compiler optimized
codes. The effectiveness of architectural and circuit
techniques to design energy-efficient caches is discussed in
Section 5.
As mentioned earlier, normally, our compiler automatically
selects a suitable set of optimizations for a given code and
cache topology. Since, in doing so, it uses heuristics, there is
no guarantee that it will arrive at an optimal solution. In addition
to this automatic optimization selection, we have also
implemented a directive-based optimization scheme which
relies on user-provided directives and, depending on them,
applies the necessary loop and data transformations. Next,
we forced the compiler using these compiler directives to
apply all eight combinations of three mainstream loop optimizations
[28], namely, loop unrolling, tiling, and linear
loop transformations to the mxm benchmark. The results
presented in Figure 6 reveal that the best compiler transformation
from the energy perspective varies based on the
cache configuration. This observation presents a new challenge
for the compiler writers of embedded systems, as the
most aggressive optimizations (although they may lead to
minimum execution times) do not necessarily result in the
best code from the energy point of view.
Figure 6: Energy distribution in the memory system (with different Dcache configurations) as a result of different code transformations. original is the un-optimized program, loop opt denotes the code optimized using linear loop transformations, unrolled denotes the version where loop unrolling is used, and tiled is the version when tiling is applied.
5. ENERGY EFFICIENT CACHE ARCHITECTURE
The study of cache energy consumption is relatively new
and the optimization techniques can be broadly classified
as circuit and architectural. The main circuit optimizations
include activating only a portion of the cells on the
bit (DBL) and word lines, reducing the bit line swings using
pulsed word lines (PWL) and isolated sense amplifiers
(IBL), and charge recycling in the I/O buffer [12]. The application
of these optimizations is independent of the code
sequences themselves. Many architectural techniques have
been proposed as optimizations for the memory system [25;
13; 16]. Many of these techniques introduce a new level of
memory hierarchy between the cache and the processor dat-
apath. For instance, the work by Kin et al. [16] proposed
accessing a small filter cache before accessing the first level
cache. The idea is to reduce the energy consumption by
avoiding access to a larger cache. While such a technique
can have a negative impact on performance, it can result in
significant energy savings. The block-buffering (BB) mechanism
[13] uses a similar idea by accessing the last accessed,
buffered cache line before accessing the cache. Unlike circuit
optimizations, the effectiveness of these architectural
techniques is influenced by the application characteristics
and the compiler optimizations used. For instance, software
techniques can be used to improve the locality in a cache
line by grouping successively accessed data. Then, a cache
buffering scheme can exploit this improved locality. Thus,
increasing spatial locality within a cache line through software
techniques can save more energy. A detailed study
of such interactions between software optimizations and the
effectiveness of energy-efficient cache architectures will be
useful to both compiler writers and hardware designers.
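The following is a minimal sketch, in C, of the block-buffering idea described above: the most recently accessed block is kept in a small buffer (one entry per way) that is checked before the more energy-expensive cache arrays are probed. The structure, the way-mapping, and the per-access energy constants are simplifying assumptions for illustration only and do not reproduce the analytical energy model used in our framework.

#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS   4
#define BLOCK_BITS 5                 /* 32-byte blocks (assumed) */

/* Illustrative per-access energy costs (arbitrary units). */
#define E_BUFFER   1.0
#define E_CACHE    10.0

typedef struct {
    uint32_t last_block[NUM_WAYS];   /* block address last read out of each way */
    int      valid[NUM_WAYS];
    double   energy;                 /* accumulated access energy */
} block_buffer_t;

/* Returns 1 on a buffer hit (cheap), 0 when the full cache must be probed. */
static int bb_access(block_buffer_t *bb, uint32_t addr)
{
    uint32_t block = addr >> BLOCK_BITS;

    for (int w = 0; w < NUM_WAYS; w++) {
        if (bb->valid[w] && bb->last_block[w] == block) {
            bb->energy += E_BUFFER;          /* served from the block buffer */
            return 1;
        }
    }
    bb->energy += E_BUFFER + E_CACHE;        /* buffer miss: probe the cache too */
    {
        int w_fill = (int)(block % NUM_WAYS);/* stand-in for the way the block resides in */
        bb->last_block[w_fill] = block;
        bb->valid[w_fill] = 1;
    }
    return 0;
}

int main(void)
{
    block_buffer_t bb = {0};
    uint32_t trace[] = {0x1000, 0x1004, 0x1008, 0x2000, 0x2004, 0x1000};
    int hits = 0;

    for (unsigned i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
        hits += bb_access(&bb, trace[i]);

    printf("buffer hits = %d, energy = %.1f\n", hits, bb.energy);
    return 0;
}

A code with good spatial and temporal locality produces longer runs of accesses to the same block, and therefore more of the cheap buffer hits modeled here.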
To capture the impact of circuit optimization in the energy
estimation framework, we measured the influence of applying
different combinations of circuit optimizations using four
different layouts of a 0.5Kbits SRAM using HSPICE simu-
lations. It was observed that the energy consumed can be
reduced on average by 29% and 52% as compared to an
unoptimized SRAM when applying the (PWL+IBL) and
(PWL+IBL+DBL) optimizations, respectively. We conservatively utilize
the 29% reduction achieved by the (PWL+IBL) scheme
to capture the efficiency of the circuit optimizations in our
analytical model for memory system energy. We refer to
the (PWL+IBL) scheme as IBL in the rest of this paper for
convenience.
First, we studied the interaction between the compiler optimizations
and the effectiveness of the BB mechanism. In
order to study this interaction, the Dcache was enhanced
to include a buffer for the last accessed set of cache blocks
(one block buffer for each way). A code that exhibits increased
spatial and temporal locality can effectively exploit
the buffer. We define the relative energy savings ratio of an
optimized code (opt) over an unoptimized code (orig) for a
given hardware optimization hopt as:
Relative energy savings ratio = (E_optcode - E_optcode,hopt)/E_optcode - (E_origcode - E_origcode,hopt)/E_origcode,
where E_optcode and E_origcode are the energies consumed due to
the execution of the optimized and unoptimized code, respectively,
without hopt, and E_optcode,hopt and E_origcode,hopt are the
corresponding values with hopt. This measure enables us
to evaluate the effectiveness of compiler optimizations in
exploiting the hardware optimization technique.
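As a small worked example, the following C fragment evaluates the ratio as defined above (the fractional savings that hopt provides for the optimized code minus the fractional savings it provides for the original code); the energy values used here are made up purely for illustration.

#include <stdio.h>

/* Relative energy savings ratio of an optimized code over an unoptimized one
   for a hardware optimization hopt: fractional savings hopt gives the
   optimized code minus the fractional savings it gives the original code. */
static double relative_savings_ratio(double e_opt, double e_opt_hopt,
                                     double e_orig, double e_orig_hopt)
{
    double savings_opt  = (e_opt  - e_opt_hopt)  / e_opt;
    double savings_orig = (e_orig - e_orig_hopt) / e_orig;
    return savings_opt - savings_orig;
}

int main(void)
{
    /* Made-up energies in mJ, purely for illustration. */
    double ratio = relative_savings_ratio(50.0, 35.0,    /* optimized: without, with hopt */
                                          400.0, 340.0); /* original:  without, with hopt */
    printf("relative energy savings ratio = %.2f\n", ratio); /* 0.30 - 0.15 = 0.15 */
    return 0;
}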
Figure 7 shows the relative energy savings ratio for BB. It can be
observed that the block buffer mechanism was more effective
in reducing energy for the optimized codes (except for
eflux). This is due to the better spatial and temporal locality
exhibited by the compiler optimized codes. This improved
locality results in more hits in the block buffer. On
average, the optimized codes achieve 19% (18%) more
energy savings relative to the original codes using a direct-mapped
(4way) cache with BB. The reason that optimized
eflux code does not take better advantage of BB than un-optimized
code is that the accesses with temporal locality
in the unoptimized code were better clustered, leading to
increased data reuse in the block buffer. Next, we applied a
combination of the IBL and BB and executed the optimized
codes to find the combined effect of circuit, architectural
and software optimizations on the overall memory system
energy. It can be observed from Figure 8 that the Dcache
energy consumption can be reduced by 58.8% (58.7%) for
the direct-mapped (4way) cache configuration. Thus, architectural
and circuit techniques working together can reduce
the energy consumption of even highly optimized codes sig-
nificantly. While the BB and IBL optimizations are very
effective for reducing the energy consumed in the Dcache, it
is important to investigate their impact on the overall memory
energy reduction. It was found that the memory system
energy reduces by 6.7% (11%) using the direct-mapped
(4way) cache configuration (see Figure 9).
We also investigated the influence of the BB+IBL optimization
for the Dcache due to the reduction in the energy per
main memory access (Em) as a result of emerging technologies
such as the embedded DRAM (eDRAM) [22]. Figure 10
shows that the combined BB and IBL technique reduces
memory system energy by 27.7% with new (future)

Figure 7: Relative energy savings ratio of Dcache for optimized code over unoptimized code using BB on (a) 1way Dcaches and (b) 4way Dcaches.

technologies that have a potential to reduce the per access energy
by an order of magnitude (we use Em=4.95e-10J)
as compared to the 16.3% reduction in current technology
(Em=4.95e-9J). SimplePower can similarly be used to evaluate
the influence of other new technologies and energy-efficient
techniques such as BB on the energy consumed by
the system as a whole and an individual component in particular.
Next, we evaluated the combination of a most recently used
way-prediction cache and BB mechanism. The way-prediction
caches have been used to address the longer cycle time in
associative caches as compared to direct mapped caches
[4]. While most prior effort has focussed on way-prediction
caches for addressing the performance problem, the energy
efficiency of these cache architectures was evaluated recently
by Inoue et al. [10]. In their work, an MRU (Most Recently
Used) algorithm that predicts and probes only a single way
first was used. If the prediction turns out to fail, all remaining
ways are accessed at the same time in the next
cycle. We refer to this technique as the MRU scheme and
the caches that use them as MRU caches. It must be noted
that MRU caches could increase the cache access cycle time
[4; 15]. However, the focus of our work is on energy estimation and
optimization rather than investigating energy-performance
tradeoffs. Here, we study the effectiveness of combining two
different architectural techniques to optimize system energy
and also evaluate the impact of software optimizations enabled
by the SimplePower optimizing compiler on the MRU
prediction.
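A minimal sketch of the MRU probing policy described above is given below in C: only the predicted way is probed first, and on a misprediction the remaining ways are probed together in the next cycle. The per-way probe energy and the set structure are illustrative assumptions, not the parameters used in our simulations.

#include <stdio.h>

#define NUM_WAYS 4
#define E_WAY    1.0      /* illustrative cost of probing one way */

typedef struct {
    int    mru_way;       /* most recently used way of this set */
    double energy;
    int    cycles;
} mru_set_t;

/* Access a set whose matching way is 'hit_way' (-1 denotes a cache miss).
   The MRU way is probed first; on a wrong prediction the remaining
   ways are probed together in the next cycle. */
static void mru_access(mru_set_t *s, int hit_way)
{
    s->energy += E_WAY;                      /* first probe: predicted way only */
    s->cycles += 1;
    if (hit_way == s->mru_way)
        return;                              /* correct prediction */

    s->energy += E_WAY * (NUM_WAYS - 1);     /* probe the remaining ways */
    s->cycles += 1;
    if (hit_way >= 0)
        s->mru_way = hit_way;                /* update prediction on a hit */
}

int main(void)
{
    mru_set_t set = { .mru_way = 0 };
    int accesses[] = { 0, 0, 2, 2, 1, 2 };   /* ways that actually hit */

    for (unsigned i = 0; i < sizeof(accesses) / sizeof(accesses[0]); i++)
        mru_access(&set, accesses[i]);

    printf("energy = %.1f, cycles = %d\n", set.energy, set.cycles);
    return 0;
}

The better the locality of the code, the more often consecutive accesses fall in the MRU way, which is why optimized codes benefit more from this scheme.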
We studied the energy savings that can be obtained using
MRU caches for 4way associative cache configurations. It
can be observed from Figure 11 that the optimized codes
benefit more from the MRU scheme and can obtain 21%
more savings than the original code on an average. The
increased locality in the optimized codes increases the number
of successful probes in the predicted way of the MRU
cache. We also find that using the MRU scheme reduces
Dcache energy by 70.2% on an average for optimized codes
as compared to using a conventional 8K, 4way associative
caches (see Figure 12(a)). The incremental addition of BB
and IBL provided an additional 10.5% and 5.5% energy reduction,
respectively. Figure 12(b) shows that the energy savings in
the entire memory system are 23%, 24.2% and 26.4% when
MRU, BB and IBL are applied incrementally in that order.
Figure 11: Relative energy savings ratio of Dcache for optimized code over unoptimized code using MRU for 4way Dcaches.
From the study in this section, we find that the optimized
codes are not only efficient in reducing the number of costly
(in terms of energy) accesses to main memory but they are
also more effective in exploiting the energy efficient architectural
mechanisms such as MRU caches and BB. We also
find that the incremental benefits of applying the BB scheme
over a MRU cache is significantly smaller as compared to using
these techniques individually. A designer can use similar
early energy estimates provided by SimplePower to perform
energy-cost-performance tradeoffs for new energy efficient
techniques.
6. IMPLICATIONS OF ENERGY-EFFICIENT MEMORY OPTIMIZATIONS
Emerging new technologies combined with the energy-efficient
circuit, architectural and compiler techniques for reducing
memory system energy can potentially create a paradigm
shift in the importance of energy optimizations from the
memory system to the datapath and other units. Here, we
consider the influence of changes in the energy consumed per
main memory access, Em. Such changes are imminent due to
new process technologies [22] and reduction in physical distance
between the main memory and the datapath. Table 3
shows the memory system energy for different values of Em
for four different cache organizations using two optimized
codes. Note that Em=4.95e-9 J is the value that we
have used so far in this paper. The lowest Em value that we
experiment with in this section (4.95e-11 J) corresponds to the magnitude of energy per first-level on-chip cache access with current technology.

Figure 8: Dcache energy consumption of optimized codes using (BB + IBL) for (a) 1way 8K Dcaches and (b) 4way 8K Dcaches.

Figure 9: Memory system energy for optimized codes using (BB + IBL) using (a) 1way 8K Dcaches and (b) 4way 8K Dcaches.

Figure 10: Memory system energy for optimized codes using (BB + IBL) using 4way 16K Dcaches with (a) current and (b) future Em values.

Figure 12: Energy consumption when a combination of MRU, BB and IBL techniques are applied to an 8K, 4way associative cache configuration (Base) in (a) Dcache (b) Memory System.
Recall that the datapath energy consumption for the optimized
mxm and psmoo codes were 83.7mJ and 16.1mJ, respectively
(see Table 2). Considering the fact that large
amounts of main memory storage capacity are coming closer
to the CPU [22], we expect to see Em values lower than
in the future. Such a change could make
the energy consumed in the datapath larger than the energy
consumed in memory. For example, with the lowest Em value that we
experimented with and a 1K, 4-way cache, the datapath energy becomes
larger than that of the memory system for mxm.
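The crossover argument can be illustrated with a toy model in which the memory system energy is a fixed on-chip component plus the number of main memory accesses times Em. The on-chip constant and the access count below are made-up placeholders (only the 83.7 mJ datapath energy comes from Table 2), so the numbers merely illustrate how decreasing Em eventually pushes the memory system energy below the datapath energy.

#include <stdio.h>

/* Toy model: memory system energy = fixed on-chip (cache + bus) energy
   plus (main memory accesses) * Em.  The constants below are made-up
   placeholders chosen only to illustrate the crossover argument. */
#define E_ONCHIP_MJ        30.0      /* assumed cache/bus energy, mJ        */
#define MAINMEM_ACCESSES   1.0e9     /* assumed number of off-chip accesses */
#define E_DATAPATH_MJ      83.7      /* optimized mxm datapath energy (Table 2) */

int main(void)
{
    double em_values[] = { 4.95e-9, 4.95e-10, 4.95e-11 };  /* J per access */

    for (unsigned i = 0; i < sizeof(em_values) / sizeof(em_values[0]); i++) {
        double e_mem_mj = E_ONCHIP_MJ + MAINMEM_ACCESSES * em_values[i] * 1e3;
        printf("Em = %.2e J : memory = %.1f mJ, datapath %s memory\n",
               em_values[i], e_mem_mj,
               E_DATAPATH_MJ > e_mem_mj ? ">" : "<=");
    }
    return 0;
}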
7. CONCLUSIONS
The need for energy efficient architectures has become more
critical than ever with the proliferation of embedded devices.
Also, the increasing complexity of the emerging systems on
a chip paradigm makes it essential to make good energy-conscious
decisions early in the design cycle to help define
design parameters and eliminate incorrect design paths.
This study has introduced a comprehensive framework that
can provide such early energy estimates at the architectural
level. The uniqueness of this framework is that it captures
the integrated impact of both hardware and software optimizations
and provides the ability to study the system as
a whole and each individual component in isolation. This
work has tried to answer some of the questions raised in
Section 1 using this framework. The major findings of our
research are the following:
• A transition-sensitive, cycle-accurate, architectural-level
approach can be used to provide a fast (as compared to
circuit-level simulators) and relatively accurate estimate of
the energy consumption of the datapath. For example, the
register file energy estimates from our simulator are within
2% of circuit level simulation.
• The energy hotspots in the datapath were identified to be
the pipeline registers and the register file. They consume 58-
70% of the overall datapath energy for executing (original)
unoptimized codes. However, the datapath energy is found
to be an order of magnitude or two less than the memory system
energy for these multidimensional array codes.
• The main memory energy consumption accounts for almost
all the system energy for small cache configurations
when executing unoptimized codes. The application of high-level
compiler optimizations significantly reduces the main
memory energy causing the Dcache, Icache and datapath
energy contributions to become more significant. For exam-
ple, the contribution of datapath energy to overall system
energy, with an 8K, 8way Dcache, increases from 12.3% to
29.5% when benchmark mxm is optimized.
• The improved spatial and temporal locality of the optimized
codes is useful in not only reducing the accesses to the
main memory but also in exploiting energy-efficient cache architectures
better than with unoptimized codes. Optimized
codes saved 21% more energy using the most recently
used way-predicting cache scheme as compared to executing
unoptimized codes. They also save 19% more energy when
using block buffering.
• Emerging technologies coupled with a combination of
energy-efficient circuit, architectural and compiler optimizations
can shift the energy hotspot. We found that with an
order of magnitude reduction in main memory access energy
made possible with eDRAM technology, the datapath
energy consumption becomes larger than the memory system
energy when executing an optimized mxm code with a
4way Dcache.
In this work, we observed that the compiler optimizations
provided the most significant energy savings over the entire
system. The SimplePower framework can also be used
for evaluating the effect of high-level algorithmic, architec-
tural, and compilation trade-offs on energy. Also, we observed
that energy-efficient architectures can reduce the energy
consumed by even highly optimized code significantly
and, in fact, much better than with unoptimized codes. An
understanding of the interaction of hardware and software
optimizations on system energy gained from this work can
help both architects and compiler writers to develop more
energy-efficient systems.
This paper has looked at only a small subset of issues with
respect to studying the integrated impact of hardware-software
optimizations on energy. There are a lot of issues that are
ripe for future research. The interaction of algorithmic selection, low-level compiler optimizations and other low-power memory structures will be addressed in our future work.

Table 3: Impact of different Em values on total memory system energy consumption for optimized mxm and psmoo.

mxm
Configuration   Memory Energy (mJ)
1K, 1way   41.3   81.9   132.6   538.2   1,045.2   5,101.1   10,171.2   50,731.1
1K, 4way   38.1   45.9    55.8   134.3     232.6   1,018.4    2,000.9    9,860.0
4K, 1way   80.8   91.3   104.5   210.1     342.0   1,397.9    2,717.6   13,275.3
4K, 4way   86.2   89.2    92.9   122.5     159.6     455.9      826.2    3,788.9

psmoo
Configuration   Memory Energy (mJ)
1K, 1way   13.1   35.0    62.5   282.2     556.8   2,753.4    5,499.3   27,466.1
1K, 4way    9.4   15.6    23.4    85.8     163.7     787.3    1,566.8    7,802.6
4K, 1way   17.3   21.7    27.2    70.9     125.8     563.9    1,111.7    5,493.6
4K, 4way   18.4   21.3    24.8    52.9      88.2     370.2      723.0    3,542.1
8.
ACKNOWLEDGEMENTS
The authors would like to thank the anonymous reviewers
whose comments helped to improve this paper. This work
was sponsored in part by grants from NSF (MIP-9705128),
Sun Microsystems, and Intel.
9.
--R
High performance DSPs - what's hot and what's not? In Proceedings of the International Symposium on Low Power Electronics and Design.
Emerging power management tools for processor design.
The simplescalar tool set
Predictive sequential associative cache.
Custom memory management methodology - exploration of memory organization for embedded multimedia system design
Low Power Digital CMOS Design.
Clock power issues in system-on-chip designs
Validation of an architectural level power analysis technique.
Energy issues in multimedia systems.
Trends in low-power ram circuit technologies
Analytical energy dissipation models for low power caches.
Improving locality using loop and data transformations in an integrated framework.
Inexpensive implementations of self-associativity
The filter cache
A framework for estimating and minimizing energy dissipation of embedded hw/sw systems.
Code Generation and Optimization for Embedded Digital Signal Processors.
Power consumption estimation in cmos vlsi chips.
Energy characterization based on clustering.
Advanced Compiler Design Implementation.
Software design for low power.
Memory exploration for low power
Cache designs for energy efficiency.
Instruction level power analysis and optimization of software.
Combining loop transformations considering caches and scheduling.
High Performance Compilers for Parallel Computing.
--TR
Inexpensive implementations of set-associativity
Instruction level power analysis and optimization of software
Energy characterization based on clustering
Combining loop transformations considering caches and scheduling
Analytical energy dissipation models for low-power caches
Software design for low power
The filter cache
Unroll-and-jam using uniformly generated sets
A framework for estimation and minimizing energy dissipation of embedded HW/SW systems
Validation of an architectural level power analysis technique
High performance DSPs - what's hot and what's not?
Emerging power management tools for processor design
Advanced compiler design and implementation
Improving locality using loop and data transformations in an integrated framework
Way-predicting set-associative cache for high performance and low energy consumption
Low Power Digital CMOS Design
M32R/D-Integrating DRAM and Microprocessor
Cache designs for energy efficiency
Predictive sequential associative cache
Clock Power Issues in System-on-a-Chip Designs
Code generation and optimization for embedded digital signal processors
--CTR
G. Palermo , C. Silvano , S. Valsecchi , V. Zaccaria, A system-level methodology for fast multi-objective design space exploration, Proceedings of the 13th ACM Great Lakes symposium on VLSI, April 28-29, 2003, Washington, D. C., USA
L. Salvemini , M. Sami , D. Sciuto , C. Silvano , V. Zaccaria , R. Zafalon, A methodology for the efficient architectural exploration of energy-delay trade-offs for embedded systems, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
N. Vijaykrishnan , Mahmut Kandemir , Mary Jane Irwin , Hyun Suk Kim , Wu Ye , David Duarte, Evaluating Integrated Hardware-Software Optimizations Using a Unified Energy Estimation Framework, IEEE Transactions on Computers, v.52 n.1, p.59-76, January
Eui-Young Chung , Luca Benini , Giovanni De Micheli, Automatic source code specialization for energy reduction, Proceedings of the 2001 international symposium on Low power electronics and design, p.80-83, August 2001, Huntington Beach, California, United States
Gianluca Palermo , Cristina Silvano , Vittorio Zaccaria, Power-Performance System-Level Exploration of a MicroSPARC2-Based Embedded Architecture, Proceedings of the conference on Design, Automation and Test in Europe: Designers' Forum, p.20182, March 03-07,
Nam Sung Kim , Taeho Kgil , Valeria Bertacco , Todd Austin , Trevor Mudge, Microarchitectural power modeling techniques for deep sub-micron microprocessors, Proceedings of the 2004 international symposium on Low power electronics and design, August 09-11, 2004, Newport Beach, California, USA
Nam Sung Kim , Todd Austin , Trevor Mudge , Dirk Grunwald, Challenges for architectural level power modeling, Power aware computing, Kluwer Academic Publishers, Norwell, MA, 2002
Jun Yang , Rajiv Gupta, Energy-efficient load and store reuse, Proceedings of the 2001 international symposium on Low power electronics and design, p.72-75, August 2001, Huntington Beach, California, United States
David M. Brooks , Pradip Bose , Stanley E. Schuster , Hans Jacobson , Prabhakar N. Kudva , Alper Buyuktosunoglu , John-David Wellman , Victor Zyuban , Manish Gupta , Peter W. Cook, Power-Aware Microarchitecture: Design and Modeling Challenges for Next-Generation Microprocessors, IEEE Micro, v.20 n.6, p.26-44, November 2000
Peter Petrov , Alex Orailoglu, Data cache energy minimizations through programmable tag size matching to the applications, Proceedings of the 14th international symposium on Systems synthesis, September 30-October 03, 2001, Montral, P.Q., Canada
David Brooks , Pradip Bose , Margaret Martonosi, Power-performance simulation: design and validation strategies, ACM SIGMETRICS Performance Evaluation Review, v.31 n.4, p.13-18, March 2004
G. Esakkimuthu , N. Vijaykrishnan , M. Kandemir , M. J. Irwin, Memory system energy (poster session): influence of hardware-software optimizations, Proceedings of the 2000 international symposium on Low power electronics and design, p.244-246, July 25-27, 2000, Rapallo, Italy
Todd Austin , Eric Larson , Dan Ernst, SimpleScalar: An Infrastructure for Computer System Modeling, Computer, v.35 n.2, p.59-67, February 2002
Diana Marculescu , Anoop Iyer, Application-driven processor design exploration for power-performance trade-off analysis, Proceedings of the 2001 IEEE/ACM international conference on Computer-aided design, November 04-08, 2001, San Jose, California
Kang , Mahmut Kandemir , Narayanan Vijaykrishnan , Mary Jane Irwin , Rajarathnam Chandramouli, Studying Energy Trade Offs in Offloading Computation/Compilation in Java-Enabled Mobile Devices, IEEE Transactions on Parallel and Distributed Systems, v.15 n.9, p.795-809, September 2004
Lee , Shidhartha Das , Valeria Bertacco , Todd Austin , David Blaauw , Trevor Mudge, Circuit-aware architectural simulation, Proceedings of the 41st annual conference on Design automation, June 07-11, 2004, San Diego, CA, USA
Weiping Liao , Lei He, Power modeling and reduction of VLIW processors, Compilers and operating systems for low power, Kluwer Academic Publishers, Norwell, MA,
Victor Zyuban, Unified architecture level energy-efficiency metric, Proceedings of the 12th ACM Great Lakes symposium on VLSI, April 18-19, 2002, New York, New York, USA
Luis Villa , Michael Zhang , Krste Asanovi, Dynamic zero compression for cache energy reduction, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.214-220, December 2000, Monterey, California, United States
V. Delaluz , M. Kandemir , N. Vijaykrishnan , M. J. Irwin , A. Sivasubramaniam , I. Kolcu, Compiler-Directed Array Interleaving for Reducing Energy in Multi-Bank Memories, Proceedings of the 2002 conference on Asia South Pacific design automation/VLSI Design, p.288, January 07-11, 2002
Gianluca Palermo , Cristina Silvano , Vittorio Zaccaria, Multi-objective design space exploration of embedded systems, Journal of Embedded Computing, v.1 n.3, p.305-316, August 2005
Ozgur Celebican , Tajana Simunic Rosing , Vincent J. Mooney, III, Energy estimation of peripheral devices in embedded systems, Proceedings of the 14th ACM Great Lakes symposium on VLSI, April 26-28, 2004, Boston, MA, USA
Trevor Mudge, Power: A First-Class Architectural Design Constraint, Computer, v.34 n.4, p.52-58, April 2001
K. Ananda Vardhan , Y. N. Srikant, Transition aware scheduling: increasing continuous idle-periods in resource units, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Yongxin Zhu , Weng-Fai Wong , tefan Andrei, An integrated performance and power model for superscalar processor designs, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
G. Chen , M. Kandemir , N. Vijaykrishnan , M. J. Irwin , W. Wolf, Energy savings through compression in embedded Java environments, Proceedings of the tenth international symposium on Hardware/software codesign, May 06-08, 2002, Estes Park, Colorado
Jung-Hi Min , Hojung Cha , Vason P. Srini, Dynamic power management of DRAM using accessed physical addresses, Microprocessors & Microsystems, v.31 n.1, p.15-24, February, 2007
Todd L. Cignetti , Kirill Komarov , Carla Schlatter Ellis, Energy estimation tools for the Palm
G. Chen , Mahmut T. Kandemir , Narayanan Vijaykrishnan , Mary Jane Irwin , Mario Wolczko, Adaptive Garbage Collection for Battery-Operated Environments, Proceedings of the 2nd Java Virtual Machine Research and Technology Symposium, p.1-12, August 01-02, 2002
I. Kadayif , M. Kandemir , M. Karakoy, An energy saving strategy based on adaptive loop parallelization, Proceedings of the 39th conference on Design automation, June 10-14, 2002, New Orleans, Louisiana, USA
S. Kim , N. Vijaykrishnan , M. Kandemir , M. J. Irwin, Energy-efficient instruction cache using page-based placement, Proceedings of the 2001 international conference on Compilers, architecture, and synthesis for embedded systems, November 16-17, 2001, Atlanta, Georgia, USA
Mahmut Kandemir , N. Vijaykrishnan , Mary Jane Irwin, Compiler optimizations for low power systems, Power aware computing, Kluwer Academic Publishers, Norwell, MA, 2002
Giovanni Agosta , Gianluca Palermo , Cristina Silvano, Multi-objective co-exploration of source code transformations and design space architectures for low-power embedded systems, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
I. Kadayif , M. Kandemir , U. Sezer, An integer linear programming based approach for parallelizing applications in On-chip multiprocessors, Proceedings of the 39th conference on Design automation, June 10-14, 2002, New Orleans, Louisiana, USA
V. Delaluz , A. Sivasubramaniam , M. Kandemir , N. Vijaykrishnan , M. J. Irwin, Scheduler-based DRAM energy management, Proceedings of the 39th conference on Design automation, June 10-14, 2002, New Orleans, Louisiana, USA
Ramon Canal , Antonio Gonzlez , James E. Smith, Very low power pipelines using significance compression, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.181-190, December 2000, Monterey, California, United States
I. Kadayif , M. Kandemir, Tuning In-Sensor Data Filtering to Reduce Energy Consumption in Wireless Sensor Networks, Proceedings of the conference on Design, automation and test in Europe, p.20852, February 16-20, 2004
Russ Joseph , Margaret Martonosi, Run-time power estimation in high performance microprocessors, Proceedings of the 2001 international symposium on Low power electronics and design, p.135-140, August 2001, Huntington Beach, California, United States
O. Ozturk , G. Chen , M. Kandemir , M. Karakoy, Cache miss clustering for banked memory systems, Proceedings of the 2006 IEEE/ACM international conference on Computer-aided design, November 05-09, 2006, San Jose, California
Gilberto Contreras , Margaret Martonosi , Jinzhan Peng , Roy Ju , Guei-Yuan Lueh, XTREM: a power simulator for the Intel XScale core, ACM SIGPLAN Notices, v.39 n.7, July 2004
Mahmut Kandemir , J. Ramanujam , A. Choudhary, Exploiting shared scratch pad memory space in embedded multiprocessor systems, Proceedings of the 39th conference on Design automation, June 10-14, 2002, New Orleans, Louisiana, USA
Min Zhao , Bruce Childers , Mary Lou Soffa, Predicting the impact of optimizations for embedded systems, ACM SIGPLAN Notices, v.38 n.7, July
Mats Brorsson , Mikael Collin, Adaptive and flexible dictionary code compression for embedded applications, Proceedings of the 2006 international conference on Compilers, architecture and synthesis for embedded systems, October 22-25, 2006, Seoul, Korea
Peter Grun , Nikil Dutt , Alex Nicolau, APEX: access pattern based memory architecture exploration, Proceedings of the 14th international symposium on Systems synthesis, September 30-October 03, 2001, Montral, P.Q., Canada
John S. Seng , Eric S. Tune , Dean M. Tullsen, Reducing power with dynamic critical path information, Proceedings of the 34th annual ACM/IEEE international symposium on Microarchitecture, December 01-05, 2001, Austin, Texas
Huiyang Zhou , Mark C. Toburen , Eric Rotenberg , Thomas M. Conte, Adaptive mode control: A static-power-efficient cache design, ACM Transactions on Embedded Computing Systems (TECS), v.2 n.3, p.347-372, August
D. Brooks , P. Bose , V. Srinivasan , M. K. Gschwind , P. G. Emma , M. G. Rosenfield, New methodology for early-stage, microarchitecture-level power-performance analysis of microprocessors, IBM Journal of Research and Development, v.47 n.5-6, p.653-670, September
Viji Srinivasan , David Brooks , Michael Gschwind , Pradip Bose , Victor Zyuban , Philip N. Strenski , Philip G. Emma, Optimizing pipelines for power and performance, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
Gilberto Contreras , Margaret Martonosi , Jinzhang Peng , Guei-Yuan Lueh , Roy Ju, The XTREM power and performance simulator for the Intel XScale core: Design and experiences, ACM Transactions on Embedded Computing Systems (TECS), v.6 n.1, February 2007
Lode Nachtergaele , Vivek Tiwari , Nikil Dutt, System and architecture-level power reduction of microprocessor-based communication and multi-media applications, Proceedings of the 2000 IEEE/ACM international conference on Computer-aided design, November 05-09, 2000, San Jose, California
I. Kadayif , M. Kandemir , N. Vijaykrishnan , M. J. Irwin , J. Ramanujam, Morphable Cache Architectures: Potential Benefits, ACM SIGPLAN Notices, v.36 n.8, p.128-137, Aug. 2001
Yang , Wayne Wolf , N. Vijaykrishnan , D. N. Serpanos , Yuan Xie, Power Attack Resistant Cryptosystem Design: A Dynamic Voltage and Frequency Switching Approach, Proceedings of the conference on Design, Automation and Test in Europe, p.64-69, March 07-11, 2005
Gilles Pokam , Olivier Rochecouste , Andr Seznec , Franois Bodin, Speculative software management of datapath-width for energy optimization, ACM SIGPLAN Notices, v.39 n.7, July 2004
I. Kadayif , A. Sivasubramaniam , M. Kandemir , G. Kandiraju , G. Chen, Generating physical addresses directly for saving instruction TLB energy, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
G. Chen , R. Shetty , M. Kandemir , N. Vijaykrishnan , M. J. Irwin , M. Wolczko, Tuning garbage collection for reducing memory system energy in an embedded java environment, ACM Transactions on Embedded Computing Systems (TECS), v.1 n.1, p.27-55, November 2002
Eduardo Pinheiro , Ricardo Bianchini , Enrique V. Carrera , Taliver Heath, Dynamic cluster reconfiguration for power and performance, Compilers and operating systems for low power, Kluwer Academic Publishers, Norwell, MA,
Daniele Folegnani , Antonio Gonzlez, Energy-effective issue logic, ACM SIGARCH Computer Architecture News, v.29 n.2, p.230-239, May 2001
Michael Huang , Jose Renau , Seung-Moon Yoo , Josep Torrellas, A framework for dynamic energy efficiency and temperature management, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.202-213, December 2000, Monterey, California, United States
I. Kadayif , A. Sivasubramaniam , M. Kandemir , G. Kandiraju , G. Chen, Optimizing instruction TLB energy using software and hardware techniques, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.10 n.2, p.229-257, April 2005
J. Adam Butts , Gurindar S. Sohi, A static power model for architects, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.191-201, December 2000, Monterey, California, United States
Peter Grun , Nikil Dutt , Alex Nicolau, Access pattern-based memory and connectivity architecture exploration, ACM Transactions on Embedded Computing Systems (TECS), v.2 n.1, p.33-73, February
Daniele Folegnani , Antonio Gonzlez, Energy-effective issue logic, ACM SIGARCH Computer Architecture News, v.29 n.2, p.230-239, May 2001
Kathleen Baynes , Chris Collins , Eric Fiterman , Brinda Ganesh , Paul Kohout , Christine Smit , Tiebing Zhang , Bruce Jacob, The performance and energy consumption of three embedded real-time operating systems, Proceedings of the 2001 international conference on Compilers, architecture, and synthesis for embedded systems, November 16-17, 2001, Atlanta, Georgia, USA
Jason Flinn , M. Satyanarayanan, Managing battery lifetime with energy-aware adaptation, ACM Transactions on Computer Systems (TOCS), v.22 n.2, p.137-179, May 2004
N. Vijaykrishnan , M. Kandemir , S. Kim , S. Tomar , A. Sivasubramaniam , M. J. Irwin, Energy behavior of java applications from the memory perspective, Proceedings of the JavaTM Virtual Machine Research and Technology Symposium on JavaTM Virtual Machine Research and Technology Symposium, p.23-23, April 23-24, 2001, Monterey, California
Soontae Kim , N. Vijaykrishnan , Mahmut Kandemir , Anand Sivasubramaniam , Mary Jane Irwin, Partitioned instruction cache architecture for energy efficiency, ACM Transactions on Embedded Computing Systems (TECS), v.2 n.2, p.163-185, May
H. Saputra , M. Kandemir , N. Vijaykrishnan , M. J. Irwin , J. S. Hu , C-H. Hsu , U. Kremer, Energy-conscious compilation based on voltage scaling, ACM SIGPLAN Notices, v.37 n.7, July 2002
Nikil Dutt , Alex Nicolau , Hiroyuki Tomiyama , Ashok Halambi, New directions in compiler technology for embedded systems (embedded tutorial), Proceedings of the 2001 conference on Asia South Pacific design automation, p.409-414, January 2001, Yokohama, Japan
Kathleen Baynes , Chris Collins , Eric Fiterman , Brinda Ganesh , Paul Kohout , Christine Smit , Tiebing Zhang , Bruce Jacob, The Performance and Energy Consumption of Embedded Real-Time Operating Systems, IEEE Transactions on Computers, v.52 n.11, p.1454-1469, November
Victor Delaluz , Mahmut Kandemir , N. Vijaykrishnan , Anand Sivasubramaniam , Mary Jane Irwin, Hardware and Software Techniques for Controlling DRAM Power Modes, IEEE Transactions on Computers, v.50 n.11, p.1154-1173, November 2001
Victor De La Luz , Ismail Kadayif , Mahmut Kandemir , Uger Sezer, Access Pattern Restructuring for Memory Energy, IEEE Transactions on Parallel and Distributed Systems, v.15 n.4, p.289-303, April 2004
A. Parikh , Soontae Kim , M. Kandemir , N. Vijaykrishnan , M. J. Irwin, Instruction Scheduling for Low Power, Journal of VLSI Signal Processing Systems, v.37 n.1, p.129-149, May 2004
Victor De La Luz , Mahmut Kandemir, Array Regrouping and Its Use in Compiling Data-Intensive Embedded Applications, IEEE Transactions on Computers, v.53 n.1, p.1-19, January 2004
Ismail Kadayif , Mahmut Kandemir , Guilin Chen , Ozcan Ozturk , Mustafa Karakoy , Ugur Sezer, Optimizing Array-Intensive Applications for On-Chip Multiprocessors, IEEE Transactions on Parallel and Distributed Systems, v.16 n.5, p.396-411, May 2005
Pin Zhou , Vivek Pandey , Jagadeesan Sundaresan , Anand Raghuraman , Yuanyuan Zhou , Sanjeev Kumar, Dynamic tracking of page miss ratio curve for memory management, ACM SIGOPS Operating Systems Review, v.38 n.5, December 2004
I. Kadayif , M. Kandemir , G. Chen , N. Vijaykrishnan , M. J. Irwin , A. Sivasubramaniam, Compiler-directed high-level energy estimation and optimization, ACM Transactions on Embedded Computing Systems (TECS), v.4 n.4, p.819-850, November 2005
Ning An , Sudhanva Gurumurthi , Anand Sivasubramaniam , Narayanan Vijaykrishnan , Mahmut Kandemir , Mary Jane Irwin, Energy-performance trade-offs for spatial access methods on memory-resident data, The VLDB Journal The International Journal on Very Large Data Bases, v.11 n.3, p.179-197, November 2002 | compiler optimizations;energy optimization and estimation;hardware-software interaction;system energy;energy simulator;low-power architectures |
339891 | Integral Equation Preconditioning for the Solution of Poisson''s Equation on Geometrically Complex Regions. | This paper is concerned with the implementation and investigation of integral equation based solvers as preconditioners for finite difference discretizations of Poisson equations in geometrically complex domains. The target discretizations are those associated with "cut-out" grids. We discuss such grids and also describe a software structure which enables their rapid construction. Computational results are presented. | Introduction
. This paper deals with the creation of effective solvers for the solution
of linear systems of equations arising from the discretization of Poisson's equation
in multiply connected, geometrically complex, domains. The focus is on discretizations
associated with "cut-out" grids (grids which result from excluding select points from a
uniform grid).
The solvers we describe are iterative procedures which use integral equation solutions
(such as those described in [9, 14, 15, 16, 17, 21]) as preconditioners. One may
question the need for such an approach; "If one is going to the trouble to implement an
integral equation solver, why bother with solving the discrete equations?". The need
for such an approach arises in applications in which the solution of the linear system
is of primary importance and obtaining the solution of the partial differential equation
is a secondary matter. An application where this occurs (and the one which inspired
this work) is the implementation of the discrete projection operator associated with the
numerical solution of the incompressible Navier-Stokes equations [12]. Our selection
of the integral equation procedure as a preconditioner was motivated by its ability to
generate solutions for multiply connected domains possessing complex geometry.
With regard to the use of "cut-out" grids to discretize Poisson's equation we are
re-visiting an old technique - the particular discretization procedure used is credited
to Collatz (1933)[8]. For Poisson's equations, the concept behind the discretization
procedure is not complicated; but the actual construction of discrete equations for
general geometric configurations using this concept can be. As we will discuss, by
combining computer drawing tools with an intermediate software layer which exploits
polymorphism (a feature of object oriented languages) the construction of equations
can be simplified greatly.
Department of Mathematics, UCLA, Los Angeles, California 90024. This research was supported
in part by Office of Naval Research grant ONR-N00014-92-J-1890 and Air Force Office of Scientific
Research Grant AFOSR-F49620-96-1-0327
y Advanced Development Group, Viewlogic Systems Inc., Camarillo, CA 93010 (ali@qdt.com).
In the first section we briefly discuss the constituent components of the complete
procedure - the discretization associated with a "cut-out" grid, the iterative method
chosen to solve the discrete equations, and the integral equation based preconditioner.
In the second section we present numerical results, and in the appendix we discuss
details of the integral equation method.
While our solution procedure is developed for discretizations based on "cut-out"
grids, the results should be applicable to discretizations associated with other grids
(e.g. triangulations or mapped grids). Additionally, there has been active research
on discretization procedures for other equations using cut-out grids; e.g. equations for
compressible and incompressible flow [2, 4, 5, 7, 18, 25, 26, 27]. The method we describe
for constructing equations could be extended to those discretizations as well.
2. Preliminaries.
2.1. The Mathematical Problem. The target problem is the solution of Pois-
son's equation on a multiply connected, geometrically complex domain.
Let\Omega be a
bounded domain in the plane with a C 2 boundary consisting of M inner contours
@\Omega M , and one bounding contour
@\Omega M+1
Fig. 1). Given
dW dW
dW
Fig. 1. A bounded domain.
boundary data g and forcing function f , we seek the solution to the following equation:
2\Omega (1)
lim
x!xo
@\Omega
In the unbounded
case,\Omega is the unbounded domain that lies exterior to M contours
Fig. 2), and we seek a solution to
2\Omega (2)
lim
x!xo
@\Omega
dW dW
Fig. 2. An unbounded domain.
2.2. The Spatial Discretization. Approximate solutions to (1) or (2) are obtained
as solutions of a linear system of equations arising from finite difference dis-
cretizations. The discretization procedure we used was a "cut-out" grid approach
[2, 4, 5, 7, 18, 25, 26, 27]. We selected this discretization procedure because the formulation
of the linear system of equations requires little information about the geometry; one
need only know if grid points are inside, outside, or on the boundary of the domain and
(for points nearest the boundary) the distance of grid points to the boundary along a
coordinate axis. Thus, a program can easily be created which automatically constructs
a discretization based on information available from minimal geometric descriptions
(e.g. descriptions output from a drawing or CAD package).
To form our "cut-out" grid we consider a rectangular region R that contains the
domain\Omega (for unbounded problems, R contains the portion
of\Omega that we are interested
in). We discretize R with a uniform Cartesian grid, and separate the grid points into
three groups: regular, irregular, and boundary points. A regular point is a point whose
distance along a coordinate axis to any portion of the domain boundary
@\Omega is greater
than one mesh width. An irregular point is one whose distance to a portion of the
boundary is less than or equal to one mesh width but greater than zero, and boundary
grid points lie on the boundary (see Fig. 3 ). Regular and irregular points are further
identified as being interior or exterior to the domain. We compute an approximate
Poisson solution by discretizing (1) or (2) using the regular and irregular interior grid
points. These discrete equations are derived using centered differences and linear interpolation
(described as "Procedure B" in [26], and based on ideas presented as far back
as [8]): If we introduce the standard five point discrete Laplacian (here, h denotes the
mesh width of the Cartesian grid)
then at each regular interior point an equation is given by
regular interior point.
R
Fig. 3. A "cut-out" grid: the regular points are marked by circles, irregular points by crosses, and
boundary points by squares.
At each irregular interior point an equation is obtained by enforcing an interpolation
condition. Specifically, at an irregular point we specify that the solution value is a
linear combination of boundary values and solution values at other nearby points. For
example, for an irregular interior point x i;j with a regular interior point x i\Gamma1;j one mesh
width to its left, and a point on the boundary x R at a distance d R to its right (see Fig.
4), a second order Lagrange interpolating polynomial (linear interpolation) can be used
to specify an equation at x i;j :
x
d
x
R
R
dW
Fig. 4. Linear interpolation at an irregular interior point.
an irregular interior point.
If this linear interpolation procedure is used, and if the boundary and forcing functions
are sufficiently smooth, then the solution of the discrete equations yields values of second
order accuracy [26].
This discretization procedure produces a linear system of equations
where ~x consists of the solution values at all interior grid points, and ~ b involves both
the inhomogeneous forcing terms and the boundary values. Due to the interpolation
used, the matrix A is usually nonsymmetric.
2.3. Automated Construction of Discrete Equations. As previously remarked,
one benefit of using discretizations based upon cut-out grids is that their construction
requires a minimal amount of information from the geometry; thus one can create programs
which take geometric information output from rather modest drawing tools and
automatically construct the required discretizations.
The process we employed going from geometric information to the discretization is
described by the functional diagram in Fig. 5.
Drawing tool Text description
of geometry
Create software
object
representation
"cut-out" grid
Grid parameters
Fig. 5. Functional depiction of discretization process.
Key to this process is the introduction of an extra software layer between the
drawing tool and the program to create the discretization. In particular, we take a text
representation of the drawing and map this to a software representation in which each of
the entities that makes up the geometric description is represented as a distinct software
object. The program which creates the discretization uses only the functional interface
associated with these software objects. Hence, the discretization can be constructed
independently from any particular drawing tool output. (To accommodate output from
different drawing tools we are just required to construct code which maps the geometric
information to the software objects which represent it.)
The class description, using OMT notation [22], associated with the geometric
software objects is presented in Fig. 6. (While these classes were implemented as C++
classes, other languages which support class construction could be used).
As indicated in Fig. 6, there is a base class GeometricEntity which is used to
define a standard interface for all geometric entities. From this base class we derive
classes which implement the base class functionality for each particular type of geometric
entity. The types created were those which enabled a one-to-one mapping from typical
drawing tool output to software objects. Since a "drawing", as output from a drawing
tool, is typically a collection of geometric entities; a class CombinedGeometricEntity
was created to manage collections of their software counterparts.
double
double
double
double
double
getXcoordinate(double double
getYcoordinate(double double
getParametricCoordinate(double& s, double x, double
getUnitNormal(double s, double& n_x, double& n_y)
getUnitTangent(double s double& t_x, double& t_y)
interiorExteriorTest(double x , double
getSegmentIntersection(double& s, double x_a, double y_a, double x_b, double
GeometricEntity
CircleEntity PolygonEntity RectangleEntity EllipseEntity
Fig. 6. Description of classes used to store and access geometric information.
In the program which creates the discretization, only functionality associated with
the base class GeometricEntity is used. Thus, this program doesn't require modification
if the set of derived classes (i.e. classes implementing particular geometric entity
types) is changed or added to. The program will function with any new or changed
entity as long as that entity is derived from the base class and implements the base
class functionality. This class structure also enables the discretization program to use
procedures optimized for particular types of geometric entities. (For example, the in-
terior/exterior test for a circle is much more efficient than that for a general polygon.)
This occurs because polymorphism is supported; when a base class method is invoked
for a derived class, the derived class' implementation is used.
The success of the intermediate software layer depends upon the functionality associated
with the base classes. Ideally, the required functionality should be obtainable
with a small number of methods which are easy to implement. (The restriction on
the number of methods is desirable because each method must be implemented for all
derived classes.) As indicated from the class description, the functionality required to
construct "cut-out" grid discretizations and integral equation pre-conditioners can be
implemented with a very modest set of methods. It is this latter fact that makes the use
of "cut-out" grids attractive; complicated procedures are not required to incorporate
geometric information into the construction of a grid and discretizations associated with
such a grid.
2.4. Solution Procedure. The discrete equations (4-5) are solved using preconditioned
simple iteration. As discussed in the next section, with appropriate preconditioner
implementation, more sophisticated iterative procedures are not required. If P
is used to denote the preconditioner, and ~r n j ~ b \Gamma A~x n represents the residual error of
the nth iterate, then preconditioned simple iteration can be written as follows:
The general form of the preconditioner (or approximate inverse) is the solution
procedure (and its variants) described in [9, 14, 15, 17, 16, 21], coupled with a relaxation
step to improve its efficiency.
The procedure (without the relaxation step) begins by using a Fast Poisson solver
to obtain function values that approximately satisfy the Poisson equation at regular
interior points. These values do not satisfy the discrete equations at irregular interior
points nor do they satisfy the boundary condition; therefore, we correct them by adding
function values obtained from the solution of an integral equation. One challenge is to
determine the appropriate integral equation problem to supply this correction. This
task presents a challenge because we are mixing two types of discretization procedures,
finite difference and integral equation discretizations. Additionally, since we are using
the solution procedure as a preconditioner we wish to achieve reasonable results without
using a highly accurate (and thus more costly) integral equation solution.
The solution component which is obtained with the Fast Poisson solver is constructed
to satisfy
f(x i;j ); x i;j is a regular interior point
The correction to u FPS that satisfies the correct boundary conditions is a solution
of Laplace's equation with boundary conditions g IE
\Deltau IE
@\Omega
lim
x!xo
x2\Omega u IE
@\Omega
(Note: the correction for the unbounded case is similar, see [17] for details.)
This problem can be solved and evaluated at the regular interior points by using the
integral equation approach of Appendix A. The approximate solution to the discrete
Poisson problem is formed by combining the Fast Poisson solver solution with the
integral correction terms.
~
Standard truncation error analysis reveals that if x i;j is a regular interior point:
while if x i;j is an irregular interior point (for convenience we assume that the irregular
point is like the one shown in Fig. 4. Alternate cases will have analogous error terms):
~
The solution procedure leads to a truncation error that is formally second
fore, we expect that it will make a good preconditioner. However, since the accuracy
at the irregular points depends on the magnitude of ~ u xx (x) (or ~
there is a dependence
on the smoothness of u FPS (x). The smoothness of u FPS (x) depends on the
discrete forcing values used in (8), and these forcing terms may not be smooth because
the specified terms f(x i;j ) will be the residual errors of the iterative method (which
can be highly oscillatory) and because the zero extension used may result in forcing
values that are discontinuous across the boundary. To remedy this, we incorporate a
relaxation scheme as part of the preconditioning step. A common feature of relaxation
schemes is that they result in approximate solutions with smooth errors, even after only
a few iterations. Therefore, we apply our approximate Poisson solver to the smooth
error equation resulting from the relaxation step, and then combine these terms to form
the approximate solution.
That is, we first apply a few iterations of a relaxation scheme (point Jacobi).
an interior point
if (x i;j a regular interior point )
if (x i;j an irregular interior point )
After the relaxation step, we compute the residual error: If x i;j is a regular interior
point
and if x i;j is an irregular interior point
Next, the Fast Poisson solver is applied where the forcing consists of the residual error
of the relaxation iterate with zero extension.
e(x i;j ); x i;j an interior point
Then, the integral equation approach is used to solve the correcting Laplace problem.
\Deltau IE
lim
x!xo
x2\Omega u IE
@\Omega
Finally, the three terms are combined to form the approximate solution.
~ ~
A truncation error analysis shows that at regular interior points:
~ ~
while at irregular interior points (after making use of (14))
~ ~
hd R+ (v 3
The combined solution procedure (13-19) comprises the preconditioner for the iterative
solver. We expect that due to the smoother forcing values used in (17), u FPS will have
smaller second derivatives; therefore, ~ ~
should satisfy the discrete equations better than
~
u, hence the addition of smoothing to the solution procedure should result in a better
preconditioner.
3. Computational Results. The iterative procedure described above has been
implemented, and in this section we evaluate its effectiveness on two bounded domains.
For all domains and discretizations considered, we apply forcing values f(x;
6y 2 and boundary values g(x;
Example 1: We first consider the domain (with smooth, C 2 boundary) depicted
in Fig. 7. For an 80x80 grid, we use simple iteration to solve the discrete equations
within a relative residual error of 10 \Gamma10 . We apply the integral equation preconditioner
both with and without the relaxation step, and vary the number of boundary points
used to solve the integral equation. The resulting iteration counts are given in Table
1. We observe that the addition of the relaxation step increases the effectiveness of
the preconditioner (as expected), and that the number of iterations needed to achieve
our tolerance is quite low (5-7 iterations to bring ||A\tilde{x} - \tilde{b}|| below the 10^{-10}
tolerance). Furthermore, we see that
the number of boundary points used in the integral equation step can be significantly
reduced while maintaining the effectiveness of the preconditioner. This illustrates that
integral equation preconditioning can be efficient since relatively few points are needed
to solve the integral equation.
Fig. 7. A domain with a smooth boundary.
Example 2: Our second example compares the effectiveness of the preconditioner
for two different iterative solvers (simple iteration and FGMRES [23]) and for different
grid refinements. Starting with the same smooth domain (Fig. 7), we formulate the
discrete equations for four grid refinements. In each case we solve the discrete equations
up to a tolerance of 10 \Gamma10 . Both iterative solvers are preconditioned using the integral
equation procedure with relaxation, and the results are listed in Table 2. We see that
with this preconditioner, simple iteration is just as effective a solver as FGMRES, and
this allows us to solve the discrete equations using less memory and fewer computations.
This example also demonstrates that the convergence of the preconditioned iterative
methods is independent of the grid refinement. This is expected since the preconditioner
is based on a solution procedure for the underlying equation.
Example 3: In this example, we test our method on a domain with corners (Fig.
8 ). This geometry represents the cross section of three traces in an integrated circuit
chip with deposited layers and undercutting. In this situation, the Poisson solver can be
used to extract electrical parameters such as the capacitance and inductance matrices.
We formulate the discrete equations for a 40x40 grid, and apply the integral equation
preconditioner with and without relaxation. Since we no longer have a C 2 boundary,
we do not meet the smoothness assumptions that our preconditioner requires. In fact,
for this problem in which sharp corners are present, the effectiveness of the integral
equation solver as a preconditioner deteriorates. One finds an increase in the required
number of iterations, an increase which is not reduced by improving the accuracy of the
integral equation solution component. This problem occurs because of the large discrepancy
which exists between integral equation solutions and finite difference solutions
for domains with corners. (The integral equation technique more rapidly captures the
singularities of the solution.) To remedy this, we fitted a periodic cubic spline to the
boundary and passed this smoother boundary to the integral equation component. The
results are presented in Table 3. With these adjustments, we see essentially the same
behavior (few iterations and boundary points required) as for the smooth domain, and
we conclude that integral equation preconditioning can be effective for domains with
corners as well.
Table 1. Iteration counts for different numbers of boundary points used to solve the integral
equation; 80x80 grid (smooth boundary), stopping criterion ||A\tilde{x} - \tilde{b}|| <= 10^{-10}.
Columns: boundary points per object; preconditioner with relaxation; without relaxation.
Table 2. Iteration counts with 80 boundary points (per object) used to solve the integral
equation; different grids (smooth boundary), stopping criterion ||A\tilde{x} - \tilde{b}|| <= 10^{-10}.
Columns: grid; simple iteration; FGMRES.
Fig. 8. The cross section of three traces on an IC chip with depositing and undercutting.
Table 3. Iteration counts for different numbers of boundary points used to solve the integral
equation; 40x40 grid (boundary with several corners), stopping criterion ||A\tilde{x} - \tilde{b}|| <= 10^{-10}.
Columns: boundary points per object; preconditioner with relaxation; without relaxation.
4. Conclusion. In this paper we've shown that integral equation solvers can be
used as effective preconditioners for equations arising from spatial discretizations of
Poisson's equation. In fact, they are so effective as preconditioners that simple iteration
can be used; more sophisticated iterative procedures like GMRES [24] are not required.
However, the difference in discretization procedures leads to large residuals near the
boundaries; and we found that the addition of a relaxation step is an effective mechanism
for alleviating this problem. Additionally, the use of a relaxation step allows one to
coarsen the discretization of the integral equation without significantly increasing the
number of iterations.
Another aspect of this paper is the use of a "cut-out" grid discretization. We've
found that with the addition of an intermediate software layer which exploits polymor-
phism, the task of constructing the equations can be greatly simplified. Our construction
method works particularly well with "cut-out" grid discretizations because only modest
functionality of the intermediate software layer is required. The key primitive functions
are a test of whether a point is inside or outside a given domain and the determination of
the intersection point of a segment with an object boundary.
Both aspects of this paper have applications to other equations; in particular their
use in the context of solving the incompressible Navier-Stokes equation is discussed in
[12]. While we have concentrated on two dimensional problems, in principle, the ideas
apply to three dimensional problems as well.
Acknowledgment
. The authors would like to thank Dr. Anita Mayo for her
generous assistance with the rapid integral equation evaluation techniques used in this
paper.
A. Integral equation details. The first step in constructing the solution of (18)
is the formulation of an appropriate integral equation, and for this we use the results of
[9, 19]. Given one bounding contour and M inner contours (where M >= 0), a solution is
sought in the following form (here \eta is the outward pointing normal):
    u_{IE}(x) = \int_{\partial\Omega} \phi(y)
                \frac{\partial}{\partial\eta_y}\Big(\frac{1}{2\pi}\log|x-y|\Big)\, ds_y
              + \sum_{k=1}^{M} A_k \log|x - z_k|,    (22)
where z_k is a point inside the kth inner contour. We add M constraints to specify the
M log coefficients:
    A_k = \int_{\Gamma_k} \phi(y)\, ds_y,  k = 1, ..., M.
Applying the boundary conditions leads to a uniquely solvable second-kind integral
equation for \phi and the log coefficients A_k [19]:
    g_{IE}(x) = -\frac{1}{2}\phi(x)
              + \int_{\partial\Omega} \phi(y)
                \frac{\partial}{\partial\eta_y}\Big(\frac{1}{2\pi}\log|x-y|\Big)\, ds_y
              + \sum_{k=1}^{M} A_k \log|x - z_k|,  x \in \partial\Omega.
The equations for the unbounded case (Fig. 2) are similar, see [9, 19] for details.
The integral equation is solved numerically using the Nystr-om method [11, 20] (in
engineering terms, this amounts to a collocation approach where delta functions are
used to represent the unknown charge density OE). We discretize this integral equation
using the Trapezoidal rule (because of its simplicity and spectral accuracy when used
with closed smooth contours). If we sample n k boundary points on the kth contour
then the discretized integral can be written as a simple sum.
(Here h_i represents the average arclength of the two boundary intervals that have x_i^k
as an endpoint.)
    \int_{\Gamma_k} \phi(y)
        \frac{\partial}{\partial\eta_y}\Big(\frac{1}{2\pi}\log|x-y|\Big)\, ds_y
    \approx \sum_{i=1}^{n_k} h_i\, \phi(x_i^k)\,
        \frac{\partial}{\partial\eta}\Big(\frac{1}{2\pi}\log|x - x_i^k|\Big).
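In code, this discretized double layer sum is only a few lines. The sketch below is our own
illustration, where pts holds the sampled boundary points x_i^k, normals the outward unit
normals, h the averaged arclength weights, and phi the sampled charge density.

    import numpy as np

    def double_layer_sum(x, pts, normals, h, phi):
        # Trapezoidal (Nystrom) approximation of
        #   int phi(y) d/d_eta_y [ (1/2pi) log|x - y| ] ds_y
        d = pts - x                                  # y_i - x, shape (n, 2)
        r2 = np.sum(d * d, axis=1)                   # |x - y_i|^2
        kernel = np.sum(d * normals, axis=1) / (2.0 * np.pi * r2)
        return np.sum(h * phi * kernel)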
Next, we enforce the integral equation at each of the sampled boundary points and
apply the quadrature rule. When the integration point coincides with the evaluation
point
the kernel has a well defined limit. (Here -(x is the curvature of the
contour at x
lim
x!xo
@
(2-
log
These approximations reduce the integral equation (and constraints) to a finite
dimensional matrix equation which can be solved for the log coefficients and the charge
densities at the sampled boundary points.
    \begin{pmatrix} \tilde{D} - \tfrac{1}{2}I & L_{cntr} \\ D_{con} & L_{con} \end{pmatrix}
    \begin{pmatrix} \phi \\ A \end{pmatrix}
    = \begin{pmatrix} g_{IE} \\ 0 \end{pmatrix},    (27)
where \tilde{D} represents the discrete contribution of the double layer potentials, L_{cntr} the
effects of the log terms, D_{con} holds the discrete density constraints, L_{con} has the
constraints on the log terms (a zero matrix for the case of a bounded domain), and I
is the identity matrix.
The linear systems (27) associated with the integral equation correction are solved
using Gaussian elimination. This direct matrix solver was employed for simplicity of
development and because, for the test problems, the total time of the Gaussian elimination
procedure was a small fraction of the total computing time. (Hence, increasing its
efficiency would have little impact). For problems with a large number of sub-domains
the operation count of direct Gaussian elimination is highly unfavorable and procedures
such as the Fast Multipole Method (FMM) [6, 9, 10, 21] should be used.
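Since the boundary systems used here are small, the direct solve can simply be delegated to a
standard dense routine. The sketch below is our own, with a hypothetical assemble_system that
is assumed to return the block matrix and right-hand side of (27).

    import numpy as np

    def solve_integral_equation(assemble_system, boundary_pts):
        A, rhs = assemble_system(boundary_pts)     # dense block system of (27)
        unknowns = np.linalg.solve(A, rhs)         # LAPACK dense Gaussian elimination
        n = len(boundary_pts)
        return unknowns[:n], unknowns[n:]          # charge densities, log coefficients

For many sub-domains this O(N^3) solve would become the bottleneck, which is exactly the
situation in which an FMM-accelerated iterative solve becomes preferable, as noted above.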
A.1. Evaluation of integral representation. After solving for the charge densities
and log coefficients, the function given by (22) must be evaluated at the nodes of
a Cartesian "cut-out" grid. The simplest approach is to apply a quadrature method to
(22) and evaluate the resulting finite sum; however, this procedure is computationally
expensive since this sum must be evaluated for each interior grid point. One way of
accelerating the evaluation process is to apply the FMM, which can be used to evaluate
our integral representation at a collection of points in an asymptotically optimal way.
However, because of the large asymptotic constant involved, using the FMM can still
be fairly expensive. Therefore we choose to use a method [3, 14, 15, 16] that relies on
a standard fast Poisson solver to do the bulk of the computations. As reported in [17],
this approach is (in practice) faster than using the FMM.
The key idea in this method is to construct a discrete forcing function and discrete
boundary conditions so that the solution of
provides the desired function values at the nodes of the rectangular Cartesian grid. (In
our procedure we take R to be the rectangular domain used in the construction of the
Cartesian "cut-out" grid.) Efficiency is obtained through the use of a Fast Poisson solver
(e.g. we used HWSCRT from FISHPAK [1]) and the use of computationally inexpensive
procedures to construct the requisite discrete boundary and forcing functions.
The boundary values, g d
ij , are obtained by applying the trapezoid rule to (22). This
is computationally acceptable because it is only done for those points that lie on @R.
(Multipole expansions can be used to make this computation more efficient.)
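Concretely, with the quadrature routine sketched earlier in this appendix, the discrete
boundary data amounts to a loop over the nodes of \partial R (again with our own names; the
log-source contributions would be added in the same way).

    def boundary_values(rect_boundary_nodes, pts, normals, h, phi):
        # Evaluate g^d at each node of the enclosing rectangle by direct summation.
        return [double_layer_sum(x, pts, normals, h, phi) for x in rect_boundary_nodes]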
For the construction of the forcing terms, f d
, one notes that the Laplacian of
the function (22) is identically zero (both log sources and double layer potentials are
harmonic) away from the boundary, so the discrete Laplacian at points away from the
boundary will be approximately zero. In particular, at the regular points a standard
truncation error analysis yields the following result:
    \Delta_h u(x_{i,j}) = \Delta u(x_{i,j})
        + \frac{h^2}{12}\big( u_{xxxx}(\xi_1) + u_{yyyy}(\xi_2) \big)
        = \frac{h^2}{12}\big( u_{xxxx}(\xi_1) + u_{yyyy}(\xi_2) \big).    (30)
If the fourth derivatives of the function are bounded, zero is a second order approximation
to the discrete Laplacian. The function (22) is the sum of a double layer potential
and isolated log sources. Under rather mild assumptions concerning the contours and
charge densities, double layer potentials have bounded fourth derivatives and so one is
justified in approximating the contribution to the discrete Laplacian from that component
by zero. Therefore, the double layer potential contributions to the discrete
Laplacian only have to be calculated at the irregular grid points. The log terms do
not have globally bounded fourth derivatives, and the calculation of their contribution
requires separate treatment (which will be discussed below).
As discussed in [14], at irregular points the task of creating an accurate discrete
Laplacian of a double layer potential requires accounting for jumps in the solution values
which occur across the boundary of the domain. If the east, west, south, north, and
center stencil points are denoted by x_e, x_w, x_s, x_n, and x_c respectively, we decompose
the discrete Laplacian into four components, one per stencil arm:
    \Delta_h u(x_c) = \frac{1}{h^2}\big[ (u(x_e)-u(x_c)) + (u(x_w)-u(x_c))
        + (u(x_s)-u(x_c)) + (u(x_n)-u(x_c)) \big].    (31)
If none of the stencil arms intersect the boundary, then the standard error series analysis
produces (30). At an irregular point, x i;j , the discrete Laplacian stencil will intersect
the boundary along one or more of its stencil arms, and thus the exact Laplacian will
not be an accurate approximation to the discrete Laplacian. In order to improve the
approximation, a careful Taylor series analysis (one which accounts for the jump in
solution values across the interface) is constructed to determine how to compensate for
errors introduced by these jumps.
Specifically, when considering a stencil arm that intersects the boundary, we will
refer to the two grid points that comprise the stencil arm as x c and x nbr (where x c
still refers to the center point, while x_nbr will represent any of the four remaining stencil
points); d will denote the axis direction along x_c and x_nbr (i.e.
x for horizontal stencil arms, or y for vertical arms), and x* represents the boundary
intersection point (see Fig. 9).
Fig. 9. Generic description of a stencil arm intersecting the boundary.
We introduce the bracket notation [.] to represent the
jump across the boundary from the side containing x_nbr to the side containing x_c.
With this notation, the contributions of the boundary intersection to the discrete
Laplacian at x_c can be given as follows (the derivation follows from the procedure
presented in [14]): each crossing arm contributes a correction of the form
    \frac{1}{h^2}\Big( [u] + \delta\,[u_d] + \frac{\delta^2}{2}\,[u_{dd}] \Big),    (32)
where \delta is the distance from x* to x_nbr and the subscript d denotes differentiation in
the direction of the arm. This formula can be applied to all four stencil arms (with the
derivatives taken in x for horizontal arms and in y for vertical arms). When we
substitute (32) into (31), we obtain a first
order approximation to the discrete Laplacian given in terms of the jump values of the
solution and the jumps in its first and second partial derivatives. For a double layer
potential, these jump terms can be accurately computed directly from the charge density
and its derivatives. Following the analysis presented in [14], we collect the needed jump
equations. If we assume that the boundary is parameterized by a parameter s, then
the boundary intersection point can be written as x*(s) and the
charge density at that point as \phi(s) = \phi(x*(s)). Furthermore, we introduce a second
jump, taken across the boundary from a point just outside the domain to a point just
inside the domain (i.e. along the normal \eta(s), which points out of the domain). In this
notation, the parameterized jump terms [u], [u_x], [u_y], [u_{xx}], [u_{xy}], and [u_{yy}] are
given by explicit formulas in terms of \phi(s), its derivatives, and the derivatives of the
parameterization x*(s) (see [14]).
In order to relate the exterior-to-interior jump to the jump from the neighbor side to the
center side used in (32), we check to see whether the center point is
interior or exterior to the domain (the two jumps differ at most by a sign).
By using (32-34), we can approximately compute the discrete forcing terms at the
irregular points without having to do any solution evaluations at all. This increases
the speed of this approach since only local information is used (we avoid summing over
all boundary points), and furthermore, this approach does not lose accuracy for grid
points near the boundary (as direct summation approaches tend to).
In the computation of the discrete Laplacian of the function component associated
with log terms, there are no boundary intersections to interfere with the Taylor series
analysis, and no jump terms are needed. However, the derivatives of log sources are
unbounded as you approach the source point, so zero is not an accurate approximation
to the discrete Laplacian for points near the log source. The discrete Laplacian is
therefore explicitly computed for points which are within a radius of d \approx h^{1/4} about the
log source, and set to be zero outside of this radius. (For a point outside this radius,
zero is a first order approximation to the discrete Laplacian.)
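The treatment of a single log source can be sketched as follows (our illustration; z is the
source location, A its coefficient, and the source is assumed not to coincide with a grid node).

    import numpy as np

    def log_source_forcing(X, Y, z, A, h):
        # Discrete 5-point Laplacian of A*log|x - z|, computed explicitly only within
        # a radius of roughly h**(1/4) of the source and set to zero elsewhere.
        u = 0.5 * A * np.log((X - z[0])**2 + (Y - z[1])**2)
        lap = np.zeros_like(u)
        lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] +
                           u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / (h * h)
        dist2 = (X - z[0])**2 + (Y - z[1])**2
        return np.where(dist2 <= np.sqrt(h), lap, 0.0)   # radius h**(1/4), squared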
Therefore, for both the log terms and the double layer potential, we can approximate
the discrete Laplacian at all grid points by only doing some local calculations near the
boundary and the log sources. Once the discrete forcing terms f d
and the boundary
values g d
i;j are known, a standard fast Poisson solver will rapidly produce the solution
values at all of the Cartesian grid points. This approach produces a second order
approximation to u IE (x i;j ).
--R
A cartesian grid projection method for the incompressible Euler equations in complex geometries.
A method of local corrections for computing the velocity field due to a distribution of vortex blobs.
An algorithm for the simulation of 2-D unsteady inviscid flows around arbitrarily moving and deforming bodies of arbitrary geometry
An adaptive Cartesian mesh algorithm for the Euler equations in arbitrary geometries.
A fast adaptive multipole algorithm for particle simulations.
Bemerkungen zur Fehlerabschätzung für das Differenzenverfahren bei partiellen Differentialgleichungen
Laplace's equation and the Dirichlet- Neumann map in multiply connected domains
A fast algorithm for particle simulations.
Linear Integral Equations.
Incompressible navier-stokes flow about multiple moving bodies
Personal communication.
The fast solution of Poisson's and the biharmonic equations on irregular regions.
Fast high order accurate solution of Laplace's equation on irregular regions.
The rapid evaluation of Volume
A fast Poisson solver for complex geometries.
3D applications of a Cartesian grid Euler method.
Integral Equations.
Rapid solution of integral equations of classical potential theory.
A flexible inner-outer preconditioned GMRES algorithm
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems.
A second-order projection method for the incompressible Navier-Stokes equations in arbitrary domains
A Survey of Numerical Mathematics
An adaptively refined Cartesian mesh solver for the Euler equations.
--TR | poisson;integral equation;multiply connected;iterative;preconditioning |
339893 | On-the-Fly Model Checking Under Fairness that Exploits Symmetry. | An on-the-fly algorithm for model checking under fairness is presented. The algorithm utilizes symmetry in the program to reduce the state space, and employs novel techniques that make the on-the-fly model checking feasible. The algorithm uses state symmetry and eliminates parallel edges in the reachability graph. Experimental results demonstrating dramatic reductions in both the running time and memory usage are presented. | Introduction
The state explosion problem is one of the major bottlenecks in temporal logic model
checking. Many techniques have been proposed in the literature [6, 5, 9, 8, 13, 11,
12, 16, 17] for combating this problem. Among these, symmetry based techniques
have been proposed in [5, 9, 13]. In these methods the state space of a program
is collapsed by identifying states that are equivalent under symmetry and model
checking is performed on the reduced graph. Although the initial methods of [5, 9]
could only handle a limited set of liveness properties, a more generalized approach
for checking liveness properties under various notions of fairness has been proposed
in [10]. This method, however, does not facilitate early termination, it supplies an
answer only after the construction of all the required data structures is complete.
Many traditional model checking algorithms ([3, 11, 12, 17]) use on-the-fly techniques
to avoid storing the complete state space in the main memory. However,
none of these techniques employ symmetry. [13] uses on-the-fly techniques together
with symmetry for model checking. There the focus is on reasoning about a simple
but basic type of correctness, i.e., safety properties expressible in the temporal logic
CTL by an assertion of the form AG:error.
In this paper, we present an on-the-fly model checking algorithm that checks for
correctness under weak fairness and that exploits symmetry. (A computation is said
to be weakly fair if every process is either infinitely often disabled or is executed
infinitely often.)*
* A preliminary version of this paper appeared in the Proceedings of the 9th International
Conference on Computer Aided Verification held in Haifa, Israel in June 1997. The work presented
in this paper is partially supported by the NSF grants CCR-9623229 and CCR-9633536.
This work is an extension of the work presented in [10]. Here
we develop additional theory leading to novel techniques that make the on-the-
fly model-checking feasible. We not only exploit the symmetry between different
states, but also take advantage of the symmetric structure of each individual state;
this allows us to further reduce the size of the explored state space.
The other major improvement is gained by breaking the sequential line of the
algorithm. The original algorithm constructed three data structures (the reduced
state space, the product graph and the threaded graph - details are given below)
one after the other and performed a test on the last one. We eliminated the construction
of the third data structure by maintaining some new dynamic information;
the algorithm constructs parts of the first structure only when it is needed in the
construction of the second; finally we store only the nodes in the second structure.
This on-the-fly construction technique and up-to-date dynamic information maintenance
facilitates early termination if the program does not satisfy the correctness
specification and allows us to construct only the minimal necessary portion of the
state space when the program satisfies the correctness specification. The on-the-fly
model checking algorithm has been implemented and experimental results indicate
substantial improvement in performance compared to the original method.
The algorithm, given in [10], works as follows. It assumes that the system consists
of a set I of processes that communicate through shared variables. Each variable is
associated with a subset of I , called the index set, that denotes the set of processes
that share the variable. Clearly, the index set of a local variable consists of a single
process only. A state of the system is a mapping that associates appropriate values
to the variables. A permutation - over the set I of processes, extends naturally
to a permutation over the set of variables and to the states of the system. A
permutation - is an automorphism of the system if the reachability graph of the
system is invariant under - (more specifically, if s ! t is an edge in the reachability
graph then -(s) ! -(t) is also an edge and vice versa). Two states are equivalent
if there is an automorphism of the system that maps one to the other. Factoring
with this equivalence relation compresses the reachable state space.
The original method consists of three phases. First it constructs the reduced
state space. Then it computes the product of the reduced state space and the finite
state automaton that represents the set of incorrect computations. It explores
the product graph checking for existence of "fair" and "final" strongly connected
components; these components correspond to fair incorrect computations. Checking
if a strongly connected component is fair boils down to checking if it is fair with
respect to each individual process. This is done by taking the product of the
component and the index set I . The result is called the threaded graph resolution
of the component. A path in the threaded graph corresponds to a computation of
the system with special attention to one designated process. Fairness of a strongly
connected component in the product graph is checked by verifying that each of the
strongly connected components of its threaded graph are fair with respect to the
designated process of that component.
Our on-the-fly algorithm has two layers: the reduced state space and the product
graph construction. The successors of a node in the reduced state space are constructed
only when the product graph construction requests it. The product graph
construction is engined by a modified algorithm for computing strongly connected
components (scc) using depth first search (see [2]). During the depth first search,
with each vertex on the stack it maintains a partition vector of the process set I .
The partition vector associated to a product state u captures information about
the threaded graph of the strongly connected component of u in the already explored
part of the product graph. Intuitively, if processes i and j are in the same
partition class then it indicates that the nodes (u; i) and (u; are in the same scc
of the threaded graph. This reveals that the infinite run of the system corresponding
to the strongly connected component of u in the already explored part of the
product graph is either fair with respect to both processes i and j or it is not fair
with respect to any of the mentioned processes. The partition vectors are updated
whenever a new node or an edge to an already constructed node is explored. The
correctness of the above algorithm is based on new theory that we develop as part
of this paper; this theory connects the partition vectors of the algorithm with the
strongly connected components of the threaded graph.
A permutation \pi \in AutM on processes is a state symmetry of a state s if \pi(s) = s
(state symmetry was originally introduced in [9]). Suppose that \pi maps process i
to j. In that case, transitions ignited by process i are in one-to-one correspondence
with those caused by process j. Hence, we can save space and computation time
by considering only those that belong to process i. This is one of the forms state
symmetry is exploited in our algorithm. Another way is the initialization of the
partition vector with the state symmetry partition. If - maps process i to j then
the threads corresponding to process i and j are certainly in the same situation:
they are either both fair or none is fair.
Our paper is organized as follows. Section 2 contains notation and preliminaries.
In Sect. 3 we develop the necessary theory and present the on-the-fly algorithm.
We describe various modifications of the algorithm that take state symmetry into
consideration. Section 4 presents experimental results showing the effectiveness of
our algorithm and dramatic improvements in time as well as memory usage. Section
5 contains concluding remarks.
2. Preliminaries
2.1. Programs, Processes, Global State Graph
Let I be a set of process indices. We consider a system
running in parallel. Each process K i is a set of transitions. We assume that
all variables of P are indexed by a subset of I indicating which processes share
the variable. A system P , that meets the above description, is called an indexed
transition system (or briefly program).
A global state of an indexed transition system is an assignment of values to the
variables. We assume that each variable can take only a finite number of values.
This assumption ensures that the number of global states of the system is also
finite. We define an indexed graph M on the set of global states that captures the
behavior of the program. The indexed global state graph is M = <S, s_0, R>, where S
is the set of global states, s_0 is the initial state, and R \subseteq S \times I \times S is the transition
relation, i.e., s --i--> t \in R exactly when there is a transition in process i that is enabled in
state s and its execution leads to state t.
2.2. Strongly Connected Subgraphs and Weak Fairness
Infinite paths in M starting from the initial state denote computations of P . An
infinite path p in M is weakly fair if for each process i, either i is disabled infinitely
often in p or it is executed infinitely often in p. Unless otherwise stated, we only
consider weak fairness throughout the paper. The implementation, however, is
capable of handling strong fairness. (An infinite execution of the program is strongly
fair if every process that is enabled infinitely often is executed infinitely often.)
A strongly connected subgraph of a graph is a set of nodes such that there is a
path between every pair of nodes passing only through the nodes of the subgraph.
A strongly connected component (scc for short) is a maximal strongly connected
subgraph.
The set of states that appear infinitely often in an infinite computation of a finite
state program P forms a strongly connected subgraph of M . Many properties
of infinite computations, such as fairness, are in one-to-one correspondence with
properties of the associated strongly connected components of M . Therefore, all of
our efforts will be directed towards finding an scc of M with certain properties.
We can formulate weak fairness as a condition on scc's.
Definition 1. An scc C of M is weakly fair if every process is either disabled in
some state of C or executed in C. (Process i is disabled in state s if all of its
transitions are disabled in s; process i is executed in C if there are states s, t in C
such that s --i--> t.)
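Definition 1 is easy to check once the states and internal edges of a component are known;
the following sketch (ours, not the paper's) assumes a predicate disabled(s, i) telling whether
process i is disabled in state s.

    def weakly_fair(scc_states, scc_edges, processes, disabled):
        # scc_edges: iterable of (s, i, t) triples that lie inside the component C
        executed = {i for (_, i, _) in scc_edges}
        return all(i in executed or any(disabled(s, i) for s in scc_states)
                   for i in processes)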
For a given program P , we are interested in checking the absence of fair and
incorrect computations. Here we assume that incorrectness is specified by a Buchi
automaton A; the set of computations accepted by A is exactly the set of incorrect
computations. The problem of checking whether the program P has any incorrect
weakly fair computation can be decided by looking at the product graph B 0 of M
and A. If B 0 has an scc whose A projection contains a final automaton state and
whose M projection is weakly fair then P has a weakly fair incorrect computation.
Here it is sufficient if we construct the product of the reachable part of M and A.
2.3. Annotated Quotient Structure
The previously suggested method, i.e., of analyzing the reachable part of M , can
be very expensive since, for many systems, the reachable part of M is huge. For
systems that exhibit a high degree of symmetry, the state space can be reduced
by identifying states that are equivalent under symmetry and by constructing the
quotient structure as given below.
Denote the set of all permutations on I by Sym I , let - 2 Sym I . - induces
an action on the set of variables and on the set of global states in the following
way. For every variable v_{i_1,...,i_k}, its image under \pi is \pi(v_{i_1,...,i_k}) = v_{\pi(i_1),...,\pi(i_k)}; of
course, v_{\pi(i_1),...,\pi(i_k)} may not be a variable of the program. We say that \pi respects the
set of variables if the image of every variable is a variable of the program. Assume
that \pi has that property. The image of a global state s under \pi is defined to be
the global state \pi(s) that satisfies that the value of v_{i_1,...,i_k} in s is the same as
the value of \pi(v_{i_1,...,i_k}) in \pi(s), for every variable v_{i_1,...,i_k} of the program. We say
that \pi is an automorphism of the indexed global state graph M if \pi respects the
set of variables and s --i--> t \in R exactly when \pi(s) --\pi(i)--> \pi(t) \in R. The set
of automorphisms of M is denoted by AutM. Certainly, AutM is a subgroup of
Sym I.
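The action of a permutation on an indexed global state is straightforward to realize. In the
sketch below (our own illustration) a global state is a dictionary from index tuples to values,
and applying \pi simply renames the index tuples.

    def apply_perm(pi, state):
        # pi: dict mapping each process index to its image
        # state: dict mapping index tuples (i1, ..., ik) to variable values;
        # the value of v_{i1..ik} in state equals the value of v_{pi(i1)..pi(ik)} in the image.
        return {tuple(pi[i] for i in idx): val for idx, val in state.items()}

    # Example: swapping processes 0 and 1 in a three-process global state.
    pi01 = {0: 1, 1: 0, 2: 2}
    iri = {(0,): 'i', (1,): 'r', (2,): 'i'}
    assert apply_perm(pi01, iri) == {(0,): 'r', (1,): 'i', (2,): 'i'}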
Given any subgroup G of AutM , we can define an equivalence relation on S.
State s is equivalent to t if there is a - 2 G such that t. Using G, M can
be compressed to a smaller structure called the annotated quotient
structure (AQS) for M , as follows.
ffl S is a set of representative states that contains exactly one state from each
equivalence class of S=G, in particular, it contains s 0 itself from s 0 's class.
ffl R is a set of triples s --\pi,i--> t denoting edges between representative states annotated
with permutations from G and with process indices. To define R
formally, with each state t \in S, we define t_rep to be the unique representative
state of the equivalence class of t; with each such t, we also associate
a canonical permutation \pi_t \in G such that \pi_t(t) = t_rep. Then the reduced relation
consists of the edges s --\pi_t,i--> t_rep for every representative state s and every
edge s --i--> t of M.
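One simple, brute-force way to obtain the representative t_rep and the canonical permutation
\pi_t is to minimize over the group G, as in the sketch below (our own illustration; taking the
lexicographically smallest image is just one possible choice of representatives, not necessarily
the paper's).

    from itertools import permutations

    def canonicalize(state, group, apply_perm):
        # group: iterable of permutations given as dicts; apply_perm as sketched above.
        # Returns (rep, pi) with apply_perm(pi, state) == rep.
        best = None
        for pi in group:
            image = apply_perm(pi, state)
            key = sorted(image.items())
            if best is None or key < best[0]:
                best = (key, image, pi)
        return best[1], best[2]

    # The full symmetric group on three processes:
    S3 = [dict(zip(range(3), p)) for p in permutations(range(3))]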
Remark. In many cases it is useful to allow multiple initial states to capture
nondeterminism of the program. In [10] M is defined to have a set S 0 of initial
states and it is required that for every automorphism \pi and s \in S_0, \pi(s) \in S_0. The
concept of multiple initial states can be simulated in our system by introducing a
new, fully symmetric initial state s_0 and, for every s \in S_0 and i \in I, an edge s_0 --i--> s.
2.4. Checking for Fairness and the Threaded Graph
We briefly outline the approach taken in [10] for checking if a given concurrent
program P exhibits a fair computation that is accepted by an automaton A. Assume
that the automaton A refers to variables whose index set involve processes
specifies a property of the executions of these processes
only. These processes are called global tracked processes. If we traverse in
the compressed M these global tracked processes are represented by different sets
in each state of the path. Formally, the local tracked processes in a state t after
passing through the path p are - is the product of
the permutations found on the edges of p. For example, if the first edge of the
path is s 0
\Gamma! t then, since t is only a representative of the real successor state, the
global tracked processes are represented by the processes -
t. After some steps in the path we may return to state t again but at that time
we encounter a different set of local tracked processes. This property makes M not
feasible for model checking purposes. M is too compact, we need a less compressed
version of M where the set of local tracked processes in a given state t does not
depend on the path that lead to t from the initial state. We need to unwind M
partially; the threaded graph construction captures this unwinding.
Let H = <V, E> be any graph whose edges are labeled with permutations of a set
I, and possibly with other marks. The k-threaded graph H^{k-thr} corresponding to H
is <V \times I^k, E'>, where E' consists of the edges
    (s, i_1, ..., i_k) --\pi,...--> (t, \pi^{-1}(i_1), ..., \pi^{-1}(i_k))
for every edge s --\pi,...--> t of H and all i_1, ..., i_k \in I. Note that if H has
labels on its edges (denoted by the dots in the previous line) other than the permutations
of I then H^{k-thr} inherits them.
denoted by H thr . The second component in a state (s; i) of H thr is called the
designated process. The following simple example depicts these concepts. Here, and
throughout the paper, id denotes the identity permutation, - ij the transposition
that interchanges i and j.
Figure 1. The threaded graph construction: a graph H and the corresponding threaded graph H^thr (edge labels are not indicated).
The original algorithm first constructs the annotated quotient structure M corresponding
to P. In the second step, a product graph B_0 = M^{k-thr} \times A is
constructed. Each state of this product graph is of the form (s, i_1, ..., i_k, a), and
(s, i_1, ..., i_k, a) --\pi,l--> (t, j_1, ..., j_k, a') is an edge of B_0 if
(s, i_1, ..., i_k) --\pi,l--> (t, j_1, ..., j_k) is an edge of M^{k-thr}
and the automaton A has a transition from
state a to a' on the input consisting of the program state obtained by simultaneously
replacing index c_l by i_l for each l \in k.
In the third step the product graph B 0 is checked for existence of fair strongly
connected components (these are called subtly fair sccs in [10]). This checking is
done by constructing the threaded graph resolution of every scc of B 0 . Every scc
of the threaded graph is checked if it is fair with respect to the designated process.
In Sect. 3 below we show that, using techniques based on new and deeper theoretical
results, the above method can be considerably enhanced.
Figure 2. The AQS M for the simplified Resource Controller (representative states iii, rii, rri, rrr, cri, crr and iic; edges annotated with permutations and process indices).
Figure 3. The automaton A (starting state A0; edge labels True and STATE[1] != C; the final state accepts computations in which Client 1 eventually reaches its critical state).
2.5. The Simplified Resource Controller Example
To illuminate our general concepts we present the instructive example of the simplified
Resource Controller.
The program consists of a server and 3 client processes running in parallel. Each
client is either in idle (i), request (r) or critical (c) state. The variable STATE[c]
indicates the status of client c ( 3). Clients can freely move between idle
and request state. The server may grant the resource to a client by moving it to
critical state, provided that that client is in request state and no client is in critical
state yet.
For this simple example, M has 20 states (all combinations of the values i, r and c
except those that contain more than one c). As a contrast, the annotated quotient
structure has only 7 representative states, as illustrated in Fig. 2.
The initial state is the one marked with iii in the lower left side of the figure.
All three process are in idle (i) state. Any of them can move to request state (r),
hence it has three successors in the state space: rii, iri and iir. All of these states
are equivalent, we chose rii to be the representative of them. The three edges
departing from iii correspond to the three enabled transitions. Edge iii -01 ;1
\Gamma! rii
for example indicates that process 1 has an enabled transition and the execution
of that transition leads to state - \Gamma1
Similarly, state rri represents 3
states: itself, rir and irr.
Suppose that we want to check the (obviously false) property that Client 1 never
gets to critical state. The negation of that property can be captured by the automaton
given in Figure 3; this automaton states that Client 1 eventually gets
to a critical state. The global tracked process is 1. The product graph B 0 has
states. A depth first search on B 0 reveals that it has a strongly
connected subgraph that is weakly fair and contains a final automaton state.
3. Utilizing State Symmetry
The original algorithm, briefly described in subsection 2.4, constructed B together
with the threaded graph B thr
0 . This method can be improved by applying the
following three new ideas.
In constructing M k-thr
our goal was to define a less compressed version of M with
the property that if we visit a state t multiple times by an infinite path then we
encounter the same set of local tracked processes. In that sense, we can link the set
of local tracked processes to that state. The k-threaded graph unwinding of M was
not the optimal solution. We can define an equivalence relation on M \Theta I k that
usually results in greater compression: M \Theta I k still has the desired property, and
it is smaller than M k-thr
in cases where the program exhibits some symmetry. (It
is possible for two states (s;
to be equivalent
and be represented by a single state in M \Theta I k .)
The second improvement is the application of an on-the-fly algorithm. Here we
incrementally construct B and simultaneously explore it. By this exploration we
analyze the threaded graphs without constructing them. If the partially explored B
contains a required subgraph then the algorithm immediately exits saving further
computation time. Because of the on-the-fly nature of the algorithm, we do not
need to store the complete B. Specifically, no edges need to be stored.
Finally, the third idea is to use the symmetry of a single global state. Using state
symmetry we can reduce the number of edges by eliminating the redundant parallel
ones. Such redundant parallel edges can be removed from M also. This results in
further reduction in memory usage.
For keeping the presentation simple, we assume that we are tracking only one
process. Doing so, we do not lose generality. All the results that are presented
below, apply (with the obvious modifications) to the case with many tracked pro-
cesses. In the actual implementation of the algorithm given below, we used the
general case.
3.1. Compressing M \Theta I
In Subsect. 2.3 we defined an equivalence relation on S. Now, we extend it to
S \times I as follows. We say that (s, i) and (t, j) are equivalent if there is a
permutation \pi \in G such that \pi(s) = t and \pi(i) = j. Obviously,
this equivalence relation partitions the set S \Theta I into a set of equivalence classes.
Let S aqsi be a set of representative states that contains exactly one state from
each equivalence class. To ensure that S aqsi and S are closely related we adopt the
convention that (s; i) 2 S aqsi implies s 2 S, that is, S aqsi ' S \Theta I . This containment
may be strict as it is possible for two states of the form (s; i) and (s; j) to be
equivalent. The annotated quotient structure of M with a tracked index (AQSI) is
\Gamma! (t;
(i)g.
In a state (s; i), i is the local tracked process. Note that the indicated initial state
s 0 is formaly not an element of the set of states S aqsi , no specific tracked process
is assigned with it. This seemingly unnatural definition was adopted because we
did not want to encorporate information on the automaton A into the definition of
M \Theta I .
\Theta I can be considerably smaller than M thr
. In the best
case, we may achieve a reduction in the number of nodes and number of edges by
a factor of n and n 2 , respectively. (Here, n is the size of I.)
In our simplified Resource Controller example the AQS M has 7 states, and hence M^thr has 21 states. To
calculate the size of S aqsi note first that in state iii all 3 processes are in the same
local state, hence (iii, 0), (iii, 1) and (iii, 2) are equivalent in M \Theta I so only one
of them needs to be included in S aqsi . Similarly, only one of (rrr, 0), (rrr, 1)
and (rrr, 2) is in S aqsi . If s is any of rii, rri, crr or iic then two of (s; 0), (s; 1)
and (s; 2) are equivalent, hence only two need to be stored in S aqsi . Finally, in
the case of s = cri all processes are in different local state, therefore, all (s; 0),
(s; 1) and (s; 2) should be in S aqsi . This counting shows that S aqsi contains only 13
representative states.
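The count is easy to verify mechanically; the following script (our own) takes the seven
representative states, computes each one's stabilizer inside the full symmetric group on the
three clients, and sums the numbers of orbits of the stabilizers on the index set.

    from itertools import permutations

    reps = ['iii', 'rii', 'rri', 'rrr', 'cri', 'crr', 'iic']

    def image(p, s):                 # p[j] is the image of index j
        out = [None] * len(s)
        for j, c in enumerate(s):
            out[p[j]] = c
        return ''.join(out)

    total = 0
    for s in reps:
        stab = [p for p in permutations(range(3)) if image(p, s) == s]
        orbits = {frozenset(p[i] for p in stab) for i in range(3)}
        total += len(orbits)

    print(total)    # prints 13, the size of S_aqsi computed above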
Let B be the product of M \times I and A. Formally, B = <S_pr, s_pr, R_pr>, where the initial
state is s_pr = (s_0, i'_0, a_0) with the property that (s_0, i'_0) is the representative of
(s_0, i_0). Recall that i_0 is the process being tracked by the automaton A.
R_pr consists of edges (s, i, a) --\pi,l--> (t, j, a') such that (s, i) and (t, j) are in S_aqsi,
(s, i) --\pi,l--> (t, j) \in R_aqsi, and the automaton A moves from state a to a' on the input
gained from s after replacing all occurrences of index i with i_0.
Definition 2. Let C be an scc of B. An scc D of C^thr is weakly fair if there is
a state (u, k) in D such that process k is disabled in u, or D has an edge of type
(u, k) --\pi,k--> (v, k'), i.e. an edge along which the designated process k is executed.
contains all the information needed to decide if the program P satisfies the complement
of the property given by A. This is stated in the following theorem which
can be proved exactly on the same lines as theorem 3.3 and lemma 3.8 of [10] by
using our new definition of B.
Theorem 1 P satisfies the complement of the property defined by A if and only if
there is no scc C of B such that C contains a final automaton state and every scc
of C thr is weakly fair.
4. On-the-Fly Model Checking
The main contribution of the present paper lies in showing that we can search for
an scc of B without requiring the complete B to be previously constructed. B can
be explored while we are constructing it in an on-the-fly manner.
As it was mentioned earlier, B is constructed to be the product of M \Theta I and
A. One of the first improvements is that we do not construct M \Theta I but only the
smaller M . By the construction of B we implicitly create M \Theta I and store it as part
B. This is based on the observation that, after a careful choice of representative
states, M \Theta I is a threaded graph resolution of M . By the construction, each of the
nodes of the threaded graph are checked for equivalence against all the nodes stored
already (implicitely, as part of B). The new node is stored only if it is the first
one in its equivalence class. This is done in command 6 of the algorithm presented
below.
In our implementation we can control the way M is constructed. In the default
(and most efficient) case the successors of an M state together with the edges leading
to them are created (stored) when they are first needed in the construction of B. If
we want to avoid storing the edges of M we can use the second option that recreates
them temporarily each time when a B state construction requires it. As a third
possible option, the implementation allows us to construct M in advance. This can
be usefull when the program is tested against multiple correctness properties.
The second component, used by the construction of B, is the automaton A representing
the correctness property. As A is small in size compared to other data
structures involved, its construction in an on-the-fly manner is not motivated.
After these general introductory lines, let us turn to presenting the actual al-
gorithm. As explained earlier, our on-the-fly model-checking algorithm explores
simultaneously as it constructs it. During this process, in order to analyze the
threaded graph without explicitly constructing it, we maintain a partition of I with
each B node on the stack. This partition indicates which processes are known to
be in the same strongly connected component of the threaded graph.
4.1. Partitions
First, we would like to adopt the following conventions concerning partitions. We
identify equivalence relations and the corresponding partitions on a given set. In
that sense, we say that a partition contains another partition if the equivalence
relation corresponding to the first partition is a superset of the equivalence relation
corresponding to the second partition. The join of two partitions is the smallest
partition containing both.
The following two lemmas prove some important properties of sccs in B.
Lemma 1. Let C be an scc in B. Then the following properties hold.
ffl If (r, i) and (r', j) are nodes in C^thr and there is a path from (r, i) to (r', j) in C^thr then
there is a path from (r', j) to (r, i) as well.
ffl The sccs in C thr are disjoint, i.e., no two distinct sccs are connected by a path
in C thr .
Proof: The first part of the lemma is proved as follows. Assume that (r, i) and (r', j) are
nodes in C^thr and there is a path from (r, i) to (r', j) in C^thr. This means that there
exists a path p from r to r' in C such that j = \pi^{-1}(i), where \pi is the product
of all the permutations on the path p. Since C is an scc, there exists a path
p' from r' to r in C. Let \pi' be the product of the permutations on p'. There
exists an n > 0 such that (\pi \pi')^n is the identity permutation. Now, consider
the path (p p')^n. This path creates a cycle in C^thr starting from (r, i) back to (r, i),
passing through (r', j); obviously, this cycle contains a path from (r', j)
to (r, i). The second part of the lemma follows trivially from the first part.
Let r = (s, l, a) be a state in B and C be the scc of B that contains r. We define
the equivalence relation \approx_r on I as follows:
    i \approx_r j iff (r, i) and (r, j) are in the same component of C^thr.
It is easy to see that a class of the partition
- r identifies a unique component of
thr , and every component of C thr is identified by a class of
- r . Thus, we will use
these partitions to represent the sccs of C thr . A class of
- r is called weakly fair if
the corresponding scc of C thr is weakly fair. Note that the tracked process l in r
always forms a class of size 1.
Suppose that r and r' are nodes in the same scc in B. The partitions \approx_r and \approx_{r'}
are not equal in most cases, but fortunately one can be obtained from the other
by a permutation belonging to G. This problem motivates the use of a common
referential base. A possible nominee for this is the initial state s pr of B.
Assume that we have explored B in a depth first manner starting from the initial
state, until all reachable states have been visited. Let T be the resulting depth
first spanning tree. Now, for each state u 2 B, let - u denote the product of the
permutations on the unique s pr ! u path in T . Now, for each state r in B, we
define an equivalence relation \sim_r on I as follows:
    i \sim_r j iff \pi_r^{-1}(i) \approx_r \pi_r^{-1}(j), i.e. iff (r, \pi_r^{-1}(i)) and (r, \pi_r^{-1}(j))
    are in the same scc of C^thr.
Lemma 2. If r and r' are nodes in the same scc C of B then \sim_r = \sim_{r'}.
Proof: To prove the lemma, it is enough if we show that, for every
implies vice versa. We show this by proving that, for every i,
r
and (r
are in the same scc in C thr . This will automatically imply the
following: for every i and j, if i - r j, i.e., there is a path in C thr from
r
to
r
(j)), then there is also a path from (r
r 0 (j)) and hence
To show that
r (i)) and (r
are in the same scc in C thr , we take the
following approach. Let u be the root of the scc C, i.e., u is the first node in C
that was visited during the depth-first search that induced the forest F . Let T be
the tree in F that contains u and r, and let the initial state s pr be the root of T .
It can be shown that the unique path in T from s pr to r passes through u (see
[7]). Hence, there exists a path in C thr from (u; - \Gamma1
(i)) to
r
(i)). Hence, by
Lemma 1, we see that these two nodes are in the same scc in C thr . By a similar
argument, we see that
are in the same scc. This shows
that
r
are in the same scc.
Intuitively, i \sim_r j indicates that the threads of (s_pr, i) and (s_pr, j) enter the same
scc of the threaded graph after they have passed r. To illustrate these concepts consider
the subgraph of the simplified Resource Controller example depicted in Figure 4
below. The tree edges are denoted by boldface arrows.
Figure 4. A strongly connected subgraph of the product graph B (nodes s_pr, u_1, u_2, u_3; tree edges drawn in boldface).
The nodes u_1, u_2 and u_3 form a strongly connected subgraph. In u_1 the threads of
processes 1 and 2 have a common immediate successor in the threaded graph. Hence,
\approx_{u_1} is 0|12. Similarly, \approx_{u_2} is 0|12, but \approx_{u_3} is 1|02.
Using the tree permutations \pi_{u_i}, one checks that the partitions \sim_{u_1}, \sim_{u_2} and
\sim_{u_3} coincide, as Lemma 2 requires.
Returning to the general argument, we show how to compute \sim_r by exploring B
using depth first search. For each edge e = u --\pi,i--> v in B, let \pi_e denote the
permutation \pi_u \pi \pi_v^{-1}. Note that if e is an edge of T then \pi_e is the identity
permutation. The permutation \pi_e satisfies the following property.
Proposition 1. If e is an edge in the scc C containing r, then i \sim_r \pi_e(i) for every i \in I.
Proof: Since T is a depth first search tree, it enters an scc of the graph in a unique
vertex. According to Lemma 2, - in the same scc, therefore, we
may assume that r is the root of the scc that contains e. Hence, there are paths p 1
and p 2 on T from r to both endpoints u; v of e. Let - be the products of all
the permutations on the paths p 1 and p 2 respectively. It should be easy to see that
r
r
Now, to demonstrate that i - r j, it is enough if we show that there is a path
in C thr from
r
(j)) to
r
(i)). By substituting - u
(i) for j and
replacing - \Gamma1
r
r
(i). Let - 00 denote the
permutation
r
Hence - \Gamma1
r
r
(i).
Since r and v are in the same scc, there exists a path p 0 from v to now, the
path cycle and there exists an n ? 0 such that (p 2 is also a cycle
and the product of all the permutations on this cycle is the identity permutation.
This big cycle can be written as
2 is the path
it should be obvious that the product of all the permutations on p 0
.
Finally, consider the cycle
2 in C. The product of all permutations on
this cycle equals - 1
2 , which is - 00 . This cycle creates a path in C thr from
r
(i)) to
r
(i)).
r
r
(i), it follows that there is
a path in C thr from
r
(j)) to
r
(i)).
Let \rho be a permutation on I. We define the orbit relation of \rho to be the reflexive,
transitive closure of the binary relation { (i, \rho(i)) : i \in I }. Obviously, the orbit
relation of ae is an equivalence relation; we define the orbit partition of ae to be the
partition induced by the orbit relation of ae. Now Proposition 1 can be reformulated
as: If e is an edge in the scc of r, then the orbit partition of - e is smaller than or
equal to (i.e. a subset of) - r . The following stronger result characterizes - r .
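Computing the orbit partition of a permutation amounts to extracting its cycles, as in the
small sketch below (ours, not the paper's).

    def orbit_partition(pi, n):
        # pi maps each index 0..n-1 to its image; the orbits are the cycles of pi.
        seen, orbits = set(), []
        for i in range(n):
            if i in seen:
                continue
            orbit, j = [], i
            while j not in seen:
                seen.add(j)
                orbit.append(j)
                j = pi[j]
            orbits.append(orbit)
        return orbits

    assert orbit_partition({0: 1, 1: 0, 2: 2}, 3) == [[0, 1], [2]]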
Theorem 2. \sim_r is the join of all orbit partitions of \pi_e, where e ranges over the
edges of the strongly connected component of r, i.e. it is the smallest equivalence
relation containing the orbit relation of \pi_e for each edge e in the scc containing r.
Proof: We need to show that i - r j implies that there are processes
and edges e in the scc of r such that - ek (i k+1
and This would indicate that (i k ; i k+1 ) is in the orbit relation of - ek ;
hence, (i; j) is in the smallest equivalence relation containing the orbit relations of
all the edges in the scc.
The relationship
r
r
(j). Therefore,
r
(i)) and
r
(j)) are in the same scc of the threaded graph. Let
r
r
(j)) be a path that connects them. Take i k to
be - rk (l k ). Certainly,
be the permutation labeling the edge e k
(note that - 0
is different from - ek ). Now we have - ek (i k+1
(i
(i k+1 ), we get - ek (i k+1
(l
Substituting l
(l (from the definition of the threaded graph), we get
This completes the proof.
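In an implementation, Theorem 2 turns the computation of \sim_r into a union-find exercise:
for every edge e of the component one unites i with \pi_e(i) for all i. A minimal sketch
(our own) follows.

    class Partition:
        def __init__(self, n):
            self.parent = list(range(n))
        def find(self, i):
            while self.parent[i] != i:
                self.parent[i] = self.parent[self.parent[i]]
                i = self.parent[i]
            return i
        def union(self, i, j):
            self.parent[self.find(i)] = self.find(j)
        def join_orbit(self, pi):
            # add the orbit partition of pi, as Theorem 2 prescribes
            for i in range(len(self.parent)):
                self.union(i, pi[i])
        def classes(self):
            groups = {}
            for i in range(len(self.parent)):
                groups.setdefault(self.find(i), []).append(i)
            return list(groups.values())

    p = Partition(3)
    p.join_orbit({0: 1, 1: 0, 2: 2})     # an edge annotated with the transposition (0 1)
    print(p.classes())                   # [[0, 1], [2]]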
In our illustrating example in Figure 4 we can compute - e for the 4 edges in the
component g. e
\Gamma!
\Gamma! u 3 are tree edges so
\Gamma! u 1 we find that -
In a similar way, we compute for e
\Gamma! u 2 that
u2
. The orbit partition of -
while that of - e4 is 1 02 . The join of these partitions is 1 02 that coincides
with
The next theorem follows immediately from Definition 2 and is a necessary and
sufficient condition for checking if a class of - r is weakly fair. Let C be the scc of
r in B.
Theorem 3. A class K of the partition \sim_r is weakly fair if and only if there is an
i \in K and a node u in C such that process \pi_u^{-1}(i) is disabled in u or it is executed (that
is, there is an edge u --\pi,\pi_u^{-1}(i)--> v in C).
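Theorem 3 is what allows the algorithm below to remember only, for each index, whether a
fairness witness has been seen. Checking that every class of the partition is weakly fair then
reduces to the following few lines (our own sketch, using the Partition class above; witnesses
is the set of indices i for which some node u with \pi_u^{-1}(i) disabled, or some edge executing
\pi_u^{-1}(i), has already been encountered).

    def all_classes_weakly_fair(partition, witnesses, n):
        # A class is weakly fair iff it contains at least one witness index (Theorem 3).
        fair_roots = {partition.find(i) for i in witnesses}
        return all(partition.find(i) in fair_roots for i in range(n))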
We have gathered together all the necessary tools to present the on-the-fly algorithm
4.2. The Algorithm
Our algorithm is a modification of the strongly connected component computation
using depth first search presented e.g. in [2].
For each vertex a) of the product graph B, we maintain the following
information.
ffl u:dfnum is a unique id (or depth first number) of the node, used for the strongly
connected component computation.
ffl u:lowlink is the id of a reachable node lower than u itself.
ffl u:onstack is a flag indicating that u is still on stack.
ffl u:perm is the vector - u as defined in the previous subsection.
ffl u:partition is an approximation of - u .
ffl u:status is a vector of flags that indicate which partition classes are known to
be weakly fair.
ffl u:final is a flag that indicates if u is in an scc that contains a final automaton
state. This information is propagated down on the depth first tree.
The variables u:dfnum, u:lowlink and u:onstack are maintained as in the algorithm
given in [2]. u:perm is set when u is created, while u:partition, u:status
and u:final are updated every time an edge to a successor state of u is explored.
On-the-fly Model Checking
M1. Set the depth-first-counter to zero.
M2. Set (the initial state of B).
Set u:perm to be the identity permutation.
Conduct DF-Search(u).
M3. Exit with a No answer.
DF-Search(u) (Note that u = (s, l, a).)
1. Push u on to the stack, set u:onstack.
Set u:dfnum and u:lowlink to the depth-first-counter.
Increase the depth-first-counter.
2. Initialize u:partition to be the identity partition.
3. Initialize u:status with the information on disabled processes stored in the
AQS state s.
Set u:final if a is a final automaton state.
4. (Idle command. Later modification will use it.)
5. For each AQS edge s --\pi,i--> t do
6. Set (l', \rho) := FindEquiv(t, \pi^{-1}(l)).
7. For each automaton transition a -> a' that is enabled in s do
8. Set v := (t, l', a'); let e denote the resulting B edge u -> v.
9. If v is already constructed and v:onstack is set then do
10. Compute the join of u:partition and the orbit partition of - e and
store it in u:partition.
Update u:status using that process i was executed.
Set u:lowlink to the minimum of u:lowlink and v:lowlink.
11. If v is not constructed yet then do
12. Set v:perm to the product of u:perm and the permutation annotating the edge e.
13. Conduct DF-Search(v).
14. If v:onstack is still set then do
15. Compute the join of u:partition and v:partition and store it
in u:partition.
Combine v:status to u:status.
Update u:status using that process i was executed.
Set u:lowlink to the minimum of u:lowlink and v:lowlink.
Set u:final if v:final is set.
16. If all the partition classes are weakly fair (use u:status) and u:final
is set then exit with Yes answer.
17. If u:dfnum = u:lowlink then do
18. Pop all elements above u (inclusive) from the stack and mark the popped
vertices off-stack.
In command M2, we construct the initial state of B and invoke DF-search on this
vertex. In DF-search, the algorithm may exit with a Yes answer if a fair and final
scc is discovered. If none of the recursively called invocations of DF-search exit
with a Yes answer, the algorithm outputs a No answer and exits in command M3.
DF-search works as follows. Commands 1-3 initialize variables appropriately.
The two "for" loops in commands 5 and 7 generate the successors of the B state u.
In command 6, we invoke the routine FindEquiv to find the equivalent representative
of (t, \pi^{-1}(l)) in S_aqsi. The returned pair (l', \rho) has the property that (t, \pi^{-1}(l))
and (t, l') are equivalent under the permutation \rho and the latter belongs to S_aqsi.
This equivalence test becomes very easy if we store the state symmetry partition
of the underlying M state t. For definition and details consult Subsect. 4.4 below.
(It is to be noted that we need this equivalence checking since we are constructing
B to be the product of M \Theta I and the automaton A, and we are doing this using
M and A. However, if we want to construct B to be M \Theta I \Theta A, as in [10], then we
do not need this equivalence checking, and in this case we may have more states in
the resulting B. For this, command 6 needs to be changed to set l 0 to - \Gamma1 (l) and
to set ae to the identity permutation.)
In command 9, we check if u ! v is a non-tree edge and u, v are in the same scc;
this is accomplished by testing that v has already been constructed (i.e. visited)
and v is still on stack. If the test is passed, the orbit partition of - e is joined with
u:partition and the result is stored in u:partition. Commands 12 through 15 are
executed if the edge is a tree edge, i.e., v is constructed (and hence visited) for
the first time. In command 12, v:perm is set; in command 13, DF-search is invoked
on v. If v and u are in the same scc (indicated by the condition in command 14)
then the partitions are joined, u:status is updated and other updates are carried
out. After processing the edge u ! v, in command 16, we check if the partially
explored scc containing u is weakly fair and has a final state; if so the algorithm
exits with a Yes answer indicating a fair computation accepted by the automaton
A is found. In command 18, after detecting the scc, we pop all the states of the scc
from the stack.
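To make the control flow of commands 2-18 concrete, the following is a minimal Python
sketch of the Tarjan-style search (it is not the authors' implementation). The callbacks
successors, is_final and new_partition, and the Partition interface with join and
is_weakly_fair, are assumed; FindEquiv, the AQS edge enumeration and the optimizations
of Subsects. 4.3-4.4 are omitted.

    def df_search(root, successors, is_final, new_partition):
        # successors(u) yields (v, orbit_partition, process) triples for the edges of B.
        counter = [0]
        stack, info = [], {}

        def visit(u):                                          # recursive for clarity
            me = info[u] = {"dfnum": counter[0], "lowlink": counter[0], "onstack": True,
                            "partition": new_partition(), "status": set(),
                            "final": is_final(u)}
            counter[0] += 1
            stack.append(u)
            for v, orbit_partition, proc in successors(u):
                if v in info and info[v]["onstack"]:           # non-tree edge (command 10)
                    me["partition"] = me["partition"].join(orbit_partition)
                    me["status"].add(proc)
                    me["lowlink"] = min(me["lowlink"], info[v]["lowlink"])
                elif v not in info:                            # tree edge (commands 12-15)
                    if visit(v):
                        return True
                    if info[v]["onstack"]:
                        me["partition"] = me["partition"].join(info[v]["partition"])
                        me["status"] |= info[v]["status"]
                        me["status"].add(proc)
                        me["lowlink"] = min(me["lowlink"], info[v]["lowlink"])
                        me["final"] = me["final"] or info[v]["final"]
                if me["final"] and me["partition"].is_weakly_fair(me["status"]):
                    return True                                # command 16: Yes answer
            if me["dfnum"] == me["lowlink"]:                   # commands 17-18: pop the scc
                while True:
                    w = stack.pop()
                    info[w]["onstack"] = False
                    if w == u:
                        break
            return False

        return visit(root)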
Theorem 4 The algorithm described above outputs Yes if and only if the original
program has a weakly fair computation that is accepted by A.
Proof: The proof relies on the theory developed in Subsect. 4.1 and on the correctness
of the strongly connected component algorithm of [2].
Suppose that the algorithm halts with a Yes answer. At termination the stack
contains a strongly connected subgraph of the product graph. That subgraph is
weakly fair with respect to all processes because u:partition's classes are all weakly
fair and u:partition is an approximation of (it is smaller than) ∼_u, yielding that
∼_u is weakly fair itself. This subgraph defines a fair run of the program that is
accepted by the automaton.
If the algorithm terminates with a No answer, then it has explored the entire graph B
and found that none of the strongly connected components is satisfactory (they either
lack a final automaton state or are not fair with respect to one of the processes).
Hence, the original program has no fair run that is accepted by the automaton.
To analyze the complexity of the algorithm, we use the following notation. If K
is a graph or an automaton, then |K| denotes the number of nodes and E(K) the
number of edges or transitions. Execution of commands M1, M2 and M3 together
takes O(|I|) time. Commands 1-4, 12-15, 17 and 18 are executed once for each
node. The number of B nodes is at most |M × I| · |A| and a single execution
of the above listed commands takes O(|I|) time. Thus these commands contribute
O(|M × I| · |A| · |I|) to the overall complexity.
Now consider the execution of commands 8-11 and 16. Every time these
commands are executed, the triple (e, l, a → a') has a different value. Hence these
commands are executed no more than E(M) · |I| · E(A) times. Each execution
of commands 8 and 16 takes O(|I|) time. We have implemented an algorithm for
joining the two partitions mentioned in command 10; this algorithm uses graph
data structures and has complexity O(|I|). Commands 9 and 11 require checking if
the node v has already been constructed. In our implementation, with each M state
s, we maintain a linked list of all B states whose first component is s; obviously, the
length of this list is at most |I| · |A| and searching this list takes O(|I| · |A|). Thus, we
see that execution of commands 8-11 and 16 contributes O(E(M) · |I| · E(A) · |I| · |A|)
to the overall complexity.
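The join of two partitions used in command 10 can also be computed with a standard
union-find structure; the following Python sketch is illustrative only (the routine of the
paper itself is graph-based and runs in O(|I|)).

    def join_partitions(blocks1, blocks2, n):
        """Join (finest common coarsening) of two partitions of {0, ..., n-1},
        each given as an iterable of blocks of process indices."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]      # path halving
                x = parent[x]
            return x

        for blocks in (blocks1, blocks2):
            for block in blocks:
                block = list(block)
                for x in block[1:]:                # merge every element with the block head
                    parent[find(x)] = find(block[0])

        classes = {}
        for x in range(n):
            classes.setdefault(find(x), []).append(x)
        return list(classes.values())

    # Example: the join of {{0,1},{2},{3}} and {{0},{1,2},{3}} is {{0,1,2},{3}}.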
Finally consider command 6. Each time it is executed, the triple (e, l, a) has a
different value. Hence, the number of times it is executed is bounded by
E(M) · |I| · |A|. Thus command 6 contributes O(E(M) · |I| · |A| · x) to the overall
complexity, where x denotes the complexity of a single execution of command 6.
From the above analysis, we see that the overall complexity of the algorithm is
the sum of the three contributions derived above.
Note that, in the most general case, checking for state symmetry in command
6 can have exponential complexity, and hence the value of x can be exponential.
However, in our implementation we only checked for restricted forms of symmetries,
namely for those symmetries that swap two processes, and we also used the state
symmetry partitions generated during the construction of M (see next subsections).
This implementation has complexity O(|I|). Hence, for this implementation, the
contribution of command 6 reduces to O(E(M) · |I|² · |A|), and the overall complexity
of the algorithm remains polynomial in E(M), |I| and |A|.
It is to be noted that if we do not invoke the equivalence check in command 6, as
explained earlier, then we will be constructing B as M × I × A and exploring it.
In this case the overall complexity is of the same order.
4.3. State Symmetry in B, Partition Initialization, Parallel Edges
This subsection is devoted to showing that the equivalence relation ∼_u (defined
in Subsect. 4.1 and computed in u:partition) can be computed more efficiently than
presented in the basic algorithm. Improvements can be achieved by a sophisticated
initialization of ∼_u and by considering only a portion of the edges in command 5.
Let u = (s, l, a) be a vertex of the product graph B. Processes i and j are called
u-equivalent, denoted by i ≈_u j, if there is a permutation ρ ∈ G such that ρ(s) = s,
ρ(l) = l and ρ(i) = j. ≈_u is called the local state symmetry partition at u. Intuitively,
i ≈_u j shows that processes i and j are interchangeable in state u. Let u --(π,k)--> v
be an edge of B. Then u --(ρ∘π, ρ(k))--> v is also an edge, yielding that (v, π^{-1}(i)) is a
successor of both nodes (u, i) and (u, j) in the threaded graph B^thr. From Lemma
4, it follows that (u, i) and (u, j) are in the same scc of B^thr. Hence, i ∼_u j. This
proves the next lemma.
Lemma 3 The partition ≈_u is smaller than ∼_u for every state u of the product
graph B.
This fact allows an improvement to the algorithm. First we need to project
≈_u to the common referential base; denote the projected partition by ≈'_u.
Command 2 in DF-Search can be changed to
2'. Initialize u:partition to be ≈'_u.
Now we describe how state symmetry can be used to remove parallel edges. Let
e = u --(π_e, i_e)--> v and e' = u --(π_{e'}, i_{e'})--> v be edges in B. We say that e and e' are
parallel if there is a permutation ρ ∈ G witnessing a local state symmetry of u such
that π_{e'} = ρ ∘ π_e and i_{e'} = ρ(i_e). Surely, being
parallel is an equivalence relation on the edges. Let R^r_pr be a set of representative
edges that contains at least one edge from each parallel class. When the partitions
are initialized as presented in command 2', the orbit partition of π_{e'} does not give
any new information after the orbit partition of π_e has been considered. This is
reflected in the next lemma.
Lemma 4 If r is in an scc of B then ∼_r is the smallest partition that contains ≈'_v
(the initial value of v:partition) for every v in the scc of r as well as the orbit
partition of π_e for every edge e ∈ R^r_pr.
Proof: Let e = u --(π_e, i_e)--> v be an edge of the scc of r whose representative is
e' = u --(π_{e'}, i_{e'})--> v in R^r_pr. Suppose π_e(i) = j. We show that (i, j) is contained in
the join of the orbit partition of π_{e'} and ≈_u. By the definition of parallel edges,
there is a local state symmetry ρ of u with π_{e'} = ρ ∘ π_e, so π_e = ρ^{-1} ∘ π_{e'}. Let
l = π_{e'}(i). Then (i, l) is in the orbit partition of π_{e'}, and j = π_e(i) = ρ^{-1}(l),
giving l ≈_u j.
Summing up, π_e(i) = j implies that there is an l such that (i, l) is in the orbit
partition of e' while (l, j) is in ≈_u. Therefore, the orbit partition of e is contained
in the join of ≈_u and the orbit partition of e'.
This proves that the smallest partition that contains ≈'_v for every v in the scc of
r as well as the orbit partition of π_e for every edge e ∈ R^r_pr actually contains the
orbit partition of all edges in the scc of r. Using Theorem 2 we conclude that it
contains ∼_r as well. The other direction follows from Proposition 1 and Lemma 3.
These ideas can be applied as follows. From each class of ≈_u pick a representative
process and call it the leader of that class. Put R^r_pr = {u --(π,l)--> v : l is a leader}.
Since every edge is parallel to one that was caused by a leader process, this R^r_pr
is a satisfactory set of representative edges. We introduce the new vector u:leader
of flags. The next improvement in the algorithm is the introduction of command 4
and the modification of command 5 (a sketch of the leader selection is given after
the two commands).
4. Initialize u:leader.
5'. For each AQS edge s --(π,i)--> t such that u:leader[i] is set do
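The Python sketch below illustrates the leader selection and the edge filtering of commands
4 and 5'; the edge representation (tuples carrying the executed process) is an assumption
made only for this example.

    def choose_leaders(partition, n_proc):
        """partition: blocks of the local state symmetry partition of u.
        Marks one leader (here, the smallest index) per block."""
        leader = [False] * n_proc
        for block in partition:
            leader[min(block)] = True
        return leader

    def leader_edges(edges, leader):
        """edges: iterable of (source, perm, process, target) tuples.
        Keeps only the edges caused by a leader process (command 5')."""
        return [e for e in edges if leader[e[2]]]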
4.4. State Symmetry in M
In this subsection, we show how state symmetry can also be used to reduce the
number of edges of M that are generated and stored. Let λ be a state symmetry
of the AQS state s, i.e., λ(s) = s. If s --(π,j)--> t is an edge of M, then s --(π∘λ, λ(j))--> t
is also an edge of M. This simple observation shows that we need not store both
s --(π,j)--> t and s --(π∘λ, λ(j))--> t provided that λ can be efficiently computed for s.
The above idea is employed by first introducing, for each AQS state s, an equivalence
relation ≈_s (called state equivalence) among process indices defined as follows:
i ≈_s j if and only if there is a λ ∈ G with λ(s) = s and λ(i) = j.
Note that in Subsect. 4.3 we introduced the local state symmetry partition for B states;
the local state symmetry partition of a B state u = (s, l, a) is denoted by ≈_u. (Note
that the subscript distinguishes the two notations.) We recall that i ≈_u j if and only
if there is a ρ ∈ G with ρ(s) = s, ρ(l) = l and ρ(i) = j. Observe that i ≈_u j implies
i ≈_s j. Therefore, the local state symmetry partition for a B state is usually smaller
than the state equivalence relation of the underlying M state. This is caused by the fact
that a state symmetry permutation of a B state fixes not only the underlying M
state but the tracked processes as well. Nevertheless, having ≈_s in hand, ≈_u can
be easily computed.
Unfortunately, the problem of computing ≈_s can be a difficult task since it is
equivalent to the graph isomorphism problem. (With any given graph H, we can
associate a program P and a state s of P in a straightforward way. P has as many
processes as H has nodes; for each pair of processes v, w, P has a variable
a[v, w] indexed by v and w; a[v, w] takes value 1 if v → w is an edge of H, otherwise
it is 0. Now v ≈_s w exactly when H has an automorphism that maps node v to
node w. The latter problem is equivalent to the graph isomorphism problem.) In many
important special cases the symmetry detection can be performed efficiently. In
general, however, only approximating solutions are available.
Recall that if s --(π,j)--> t is an edge of M and λ is a state symmetry of s, then
s --(π∘λ, λ(j))--> t is an edge as well, so only one of the two needs to be stored
provided that λ can be efficiently computed for s.
We are ready to present the last improvement to our algorithm. In the construction
of M (not shown in the algorithm) we make the following modifications. When
a new node s is created, we compute ≈_s. A vector s:repr is defined such that, for
every index j, it points to a representative of the ≈_s-class of j. In the construction
of the edges of M, we store only those edges that are caused by a representative
process. (So s --(π,j)--> t is stored only if s:repr[j] = j.)
In our original algorithm commands 5' and 12 should be changed to 5'' and 12' below,
respectively.
5''. For each stored AQS edge s --(π,j)--> t and each process i with i ≈_s j
and u:leader[i] set, compute some λ ∈ G with λ(s) = s and λ(j) = i, and
then proceed as in 5' with the edge s --(π∘λ, i)--> t.
As has been pointed out earlier, in general, computing ≈_s is computationally
hard. However, we have implemented a method where we only look for state sym-
metries, i.e. permutations, which only interchange two process indices; computing
all such symmetries and the corresponding equivalence relation ≈_s can be done ef-
ficiently. The same approach is employed in computing the state symmetries in B.
Also note that in step 5'' of the algorithm, it is enough to find one permutation
λ satisfying the given property; we do not have to compute all such permutations.
Since, for the case of state symmetry, we are restricting the class of permutations to
those that only interchange two process indices, step 5'' can also be implemented
efficiently.
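The following Python sketch illustrates the restricted check for symmetries that swap two
process indices; the state representation and the helper swap_processes are assumptions,
not the authors' data structures.

    from itertools import combinations
    from collections import defaultdict

    def transposition_symmetry_partition(state, n_proc, swap_processes):
        """Approximates the state equivalence of s using only transpositions:
        i and j are merged whenever swapping processes i and j fixes the state."""
        adj = defaultdict(set)
        for i, j in combinations(range(n_proc), 2):
            if swap_processes(state, i, j) == state:      # the swap is a symmetry of s
                adj[i].add(j)
                adj[j].add(i)
        seen, blocks = set(), []
        for start in range(n_proc):                       # components of the swap graph
            if start in seen:
                continue
            stack, block = [start], []
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    block.append(x)
                    stack.extend(adj[x] - seen)
            blocks.append(sorted(block))
        return blocks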
Concluding this section, we illustrate the concept of state symmetry and redundant
edges by showing M of our simplified Resource Controller after deleting all
redundant edges.
Figure 5. The AQS M without redundant edges
Consider Figure 5. The top row in each circle shows the local states of the three
processes while the bottom row lists the representative processes. The compressed
M has 16 edges while the original had 27.
5. Implementation
We have developed a prototype of the on-the-fly model checker implementing the
above presented algorithm. We have used efficient approximation techniques to
check equivalence of two states when generating the AQS and also in the main
algorithm where we had to check for equivalence of B states. For the case of
complete symmetries, as in the Resource Controller and Readers/Writers examples,
this approximation algorithm will indicate two states to be equivalent whenever
they are equivalent. For other types of symmetry, these approximation methods
may sometime indicate two states to be inequivalent although they are equivalent.
In such cases, we may not get maximum possible reduction in the size of the state
space; however, our algorithm will still correctly indicate if the concurrent system
satisfies the correctness specification or not.
We used the implemented system to check for the correctness of the Resource
Controller example, the Readers/Writers example, and the Ethernet Protocol with
various numbers of users.
We contrasted our new system with the old model checker that implements the
results presented in [10] on the Resource Controller example. We checked many
properties including the liveness property that every user process that requests a
resource will eventually access the resource, and the mutual exclusion property.
Dramatic improvement was detected in all performance measures, as indicated in
Tables 1 and 2 below. The product graphs constructed by the old and new model
checkers are referred to as B0 and B, respectively. Each statistic is given as a/b,
where a and b are the numbers corresponding to the old and new model checkers,
respectively.
Table 1. Statistics for checking a liveness property

Eventually Access             10           50             100
AQS states                 38 / 38     198 / 198      398 / 398
AQS transitions           235 / 107   6175 / 1567   24850 / 5642
Explored states
Total memory used (kbyte)   31 / 13    1219 / 216     6878 / 830
Total CPU time used (sec)    0 / 0       37 / 6        481 / 42
Table 2. Statistics for checking a safety property

Mutual Exclusion              10           50             100
AQS states                 38 / 38     198 / 198      398 / 398
AQS transitions           235 / 107   6175 / 1567   24850 / 5642
Explored states
Total memory used (kbyte)
Total CPU time used (sec)    0 / 0
The liveness property we checked is not satisfied by the Resource Controller.
Both model checkers found a fair incorrect computation. From the table, we see
that the number of AQS states are the same, while the number of AQS transitions
is much smaller in the new model checker due to the use of state symmetry. The
number of product states explored in the on-the-fly system is much smaller since
it terminated early. On the other hand, the original model checker constructed the
entire product graph B0 before checking for an incorrect fair computation.
For the mutual exclusion property, both model checkers indicated that the Resource
Controller satisfies that property. In this case, early termination does not
come into effect. Furthermore, since we do not track any process (mutual exclusion
is a global property), the number of states explored in B 0 and B are the same.
However, the number of transitions is much smaller in M as well as in B due to
the effect of state symmetry. The overall CPU time and the memory usage are
substantially smaller for the new model checker.
6. Conclusions
In this paper, we have presented an on-the-fly model checking system that exploits
symmetry (between states as well as inside a state) and checks for correctness under
fairness.
Symmetry based reduction has been shown to be a powerful tool for reducing the
size of the state space in a number of contexts. For example, such techniques have
been employed in the Petri-net community [14, 15] to reduce the size of the state
space explored. Such techniques have also been used in protocol verification [1, 16]
and in hardware verification [13] and in temporal logic model checking [5, 9, 10].
There have been on-the-fly model checking techniques [3, 11, 12, 17] that employ
traditional state enumeration methods. Some of them [11, 12, 17] also use other
types of state reduction techniques. To the best of our knowledge, ours is the first
approach that performs on-the-fly model checking under fairness for the full range
of temporal properties and that exploits symmetry.
As part of future work, we plan to explore techniques to automatically detect
symmetries and integrate these techniques with the model checker. Also, algorithms
for checking equivalence of global states under other types of symmetry need to be
further explored.
--R
"A Calculus for Protocol Specification and Validation"
"The Design and Analysis of Computer Algorithms"
"Efficient On-the-Fly Modelchecking for CTL"
"Automatic Verification of Finite State Concurrent Programs Using Temporal Logic: A Practical Approach"
"Exploiting Symmetry in Temporal Logic Model Check- ing"
"Analyzing Concurrent Systems using the Concurrency Workbench, Functional Programming, Concurrency, Simulation, and Automated Reasoning"
"Introduction to Algorithms"
"Generation of Reduced Models for checking fragments of CTL"
"Symmetry and Model Checking"
"Utilizing Symmetry when Model Checking under Fairness Assumptions: An Automata-theoretic Approach"
"Partial-Order Methods for the Verification of Concurrent Systems"
"The State of SPIN"
"Better Verification through Symmetry"
"Colored Petri Nets: Basic Concepts, Analysis Methods, and Practical Use"
"High-level Petri Nets: Theory and Application"
"Testing Containment of omega-regular Languages"
"Computer Aided Verification of Coordinated Processes: The Automata Theoretic Approach"
--TR
--CTR
A. Prasad Sistla , Patrice Godefroid, Symmetry and reduced symmetry in model checking, ACM Transactions on Programming Languages and Systems (TOPLAS), v.26 n.4, p.702-734, July 2004
Sharon Barner , Orna Grumberg, Combining symmetry reduction and under-approximation for symbolic model checking, Formal Methods in System Design, v.27 n.1/2, p.29-66, September 2005
Alice Miller , Alastair Donaldson , Muffy Calder, Symmetry in temporal logic model checking, ACM Computing Surveys (CSUR), v.38 n.3, p.8-es, 2006 | model checking;automata;symmetry reduction;state explosion;verification |
339896 | Galerkin Projection Methods for Solving Multiple Linear Systems. | In this paper, we consider using conjugate gradient (CG) methods for solving multiple linear systems $A^{(i)} where the coefficient matrices $A^{(i)}$ and the right-hand sides $b^{(i)}$ are different in general.\ In particular, we focus on the seed projection method which generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems onto the generated Krylov subspace to get the approximate solutions.\ The whole process is repeated until all the systems are solved.\ Most papers in the literature [T.\ F.\ Chan and W.\ L.\ Wan, {\it SIAM J.\ Sci.\ Comput.}, Peterson, and R.\ Mittra, {\it IEEE Trans.\ Antennas and Propagation}, 37 (1989), pp. 1490--1493] considered only the case where the coefficient matrices $A^{(i)}$ are the same but the right-hand sides are different.\ We extend and analyze the method to solve multiple linear systems with varying coefficient matrices and right-hand sides. A theoretical error bound is given for the approximation obtained from a projection process onto a Krylov subspace generated from solving a previous linear system. Finally, numerical results for multiple linear systems arising from image restorations and recursive least squares computations are reported to illustrate the effectiveness of the method. | Introduction
We want to solve, iteratively using Krylov subspace methods, the following linear systems:
A^{(i)} x^{(i)} = b^{(i)},   i = 1, 2, ..., s,                                  (1.1)
where the A^{(i)} are real symmetric positive definite matrices of order n, and in general A^{(i)} ≠ A^{(j)}
and b (i) 6= b (j) for i 6= j. Unlike for direct methods, if the coefficient matrices and the right-hand
sides are arbitrary, there is nearly no hope to solve them more efficiently than as s completely
un-related systems. Fortunately, in many practical applications, the coefficient matrices and the
right-hand sides are not arbitrary, and often there is information sharable among the coefficient
matrices and the right-hand sides. Such a situation occurs, for instance, in recursive least squares
computations [20], wave scattering problem [14, 4, 9], numerical methods for integral equations
[14] and image restorations [13]. In this paper, our aim is to propose a methodology to solve
these "related" multiple linear systems efficiently.
In [24], Smith et al. proposed and considered using a seed method for solving linear systems
with the same coefficient matrix but different right-hand sides, i.e., A x^{(i)} = b^{(i)}, i = 1, ..., s.
In the seed method, we select one seed system and solve it by the conjugate gradient method.
Then we perform a Galerkin projection of the residuals onto the Krylov subspace generated
by the seed system to obtain approximate solutions for the unsolved ones. The approximate
solutions are then refined by the conjugate gradient method again. In [24], a very effective
implementation of the Galerkin projection method was developed which uses direction vectors
generated in the conjugate gradient process to perform the projection. In [6], Chan and Wan
observed that the seed method has several nice properties. For instance, the conjugate gradient
method when applied to the successive seed system converges faster than the usual CG process.
Another observation is that if the right-hand sides are closely related, the method automatically
exploits this fact and usually only takes a few restarts to solve all the systems. In [6], a theory
was developed to explain these phenomena. We remark that the seed method can be viewed
as a special implementation of the Galerkin projection method which had been considered and
analyzed earlier for solving linear systems with multiple right-hand sides, see for instance, Parlett
[19], Saad [21], van der Vorst [26], Padrakakis et al. [18], Simoncini and Gallopoulos [22, 23]. A
very different approach based on the Lanczos method with multiple starting vectors have been
recently proposed by Freund and Malhotra [9].
In this paper, we extend the seed method to solve the multiple linear systems (1.1), with
different coefficient matrices (A^{(j)} ≠ A^{(k)}) and different right-hand sides (b^{(j)} ≠ b^{(k)}). We
analyze the seed method and extend the theoretical results given in [6]. We will see that the
theoretical error bounds for the approximation obtained from a projection process depends on
the projection of the eigenvector components of the error onto a Krylov subspace generated from
the previous seed system and how different the system is from the previous one.
Unlike in [6], in the general case here where the coefficient matrices A (i) can be different, it
is not possible to derive very precise error bounds since the A (i) 's have different eigenvectors in
general. Fortunately, in many applications, even though the A (i) 's are indeed different, they may
be related to each other in a structured way which allows a more precise error analysis. Such is
the case in the two applications that we study in this paper, namely, image restorations and recursive
least squares (RLS) computations. More precisely, for the image restoration application,
the eigenvectors of the coefficient matrices are the same, while for the RLS computations, the co-efficient
matrices differ by rank-1 or rank-2 matrices. Numerical examples on these applications
are given to illustrate the effectiveness of the projection method. We will see from the numerical
results that the eigenvector components of the right-hand sides are effectively reduced after
the projection process and the number of iterations required for convergence decreases when
we employ the projected solution as initial guess. Moreover, other examples involving more
general coefficient matrices (for instance, that do not have the same eigenvectors or differ by a
low rank matrix), are also given to test the performance of the projection method. We observe
similar behaviour in the numerical results as in image restoration and RLS computations. These
numerical results demonstrate that the projection method is effective.
The paper is organized as follows. In x2, we first describe and analyze the seed projection
algorithm for general multiple linear systems. In x3, we study multiple linear systems arising
from image restoration and RLS applications. Numerical examples are given in x4 and concluding
remarks are given in x5.
2 Derivation of the Algorithm
Conjugate gradient methods can be seen as iterative solution methods to solve a linear system
of equations by minimizing an associated quadratic functional. For simplicity, we let
f_j(x) = (1/2) x^T A^{(j)} x − x^T b^{(j)}
be the quadratic functional associated with the linear system A^{(j)} x^{(j)} = b^{(j)}. The minimizer of f_j
is the solution of that linear system. The idea of the projection method is that
for each restart, a seed system A^{(k)} x^{(k)} = b^{(k)} is selected from the unsolved ones, which is then
solved by the conjugate gradient method. An approximate solution x̄^{(j)} of the non-seed system
A^{(j)} x^{(j)} = b^{(j)} can be obtained by using the search direction p^k_i generated from the ith iteration
of the seed system. More precisely, given the ith iterate x^j_i of the non-seed system and the
direction vector p^k_i, the approximate solution x̄^{(j)} is found by solving the following minimization
problem:
min_α  f_j(x^j_i + α p^k_i).                                            (2.2)
It is easy to check that the minimizer of (2.2) is attained at x̄^{(j)} = x^j_i + α^j_i p^k_i with
α^j_i = ((p^k_i)^T r^j_i) / ((p^k_i)^T A^{(j)} p^k_i),  where  r^j_i = b^{(j)} − A^{(j)} x^j_i.          (2.3)
After the seed system A^{(k)} x^{(k)} = b^{(k)} is solved to the desired accuracy, a new seed system is
selected and the whole procedure is repeated. In the following discussion, we call this method
Projection Method I. We note from (2.3) that the matrix-vector multiplication A^{(j)} p^k_i is required
for each projection step of the non-seed iteration. The method can therefore be expensive
in the general case where the matrices A^{(j)} and A^{(k)} are different. However, in Section 3, we
consider two specific applications where the matrices A^{(k)} and A^{(j)} are structurally related, so
that the matrix-vector products A^{(j)} p^k_i can be computed cheaply from the matrix-vector
product A^{(k)} p^k_i generated in the seed iteration.
In order to reduce the extra cost of Projection Method I in the general case, we propose
using the modified quadratic functional
f̃_j(x) = (1/2) x^T A^{(k)} x − x^T b^{(j)}
to compute the approximate solution of the non-seed system. Note that we have used A^{(k)}
instead of A^{(j)} in the above definition. In this case, we determine the next iterate of the non-
seed system by solving the following minimization problem:
min_α  f̃_j(x^j_i + α p^k_i).                                            (2.4)
The approximate solution x̄^{(j)} of the non-seed system A^{(j)} x^{(j)} = b^{(j)} is given by
x̄^{(j)} = x^j_i + α̃^j_i p^k_i,
where
α̃^j_i = ((p^k_i)^T r̃^j_i) / ((p^k_i)^T A^{(k)} p^k_i)  and  r̃^j_i = b^{(j)} − A^{(k)} x^j_i.          (2.5)
Now the projection process does not require the matrix-vector product involving the coefficient
matrix A (j) of the non-seed system. Therefore, the method does not increase the dominant cost
(matrix-vector multiplies) of each conjugate gradient iteration. In fact, the extra cost is just
one inner product, two vector additions, two scalar-vector multiplications and one division. We
call this method Projection Method II. Of course, unless A (j) is close to A (k) in some sense, we
do not expect this method to work well because ~
f j is then far from the current f j .
To summarize the above methods, Table 1 lists the algorithms of Projection Methods I
and II. We remark that Krylov subspace methods (for instance conjugate gradient), especially
when combined with preconditioning, are known to be powerful methods for the solution of
linear systems [10]. We can incorporate the preconditioning strategy into the projection method
to speed up its convergence rate. The idea of our approach is to precondition the seed system
A^{(k)} x^{(k)} = b^{(k)} by a preconditioner C^{(k)} at each restart. Meanwhile, an approximate
solution of the non-seed system A^{(j)} x^{(j)} = b^{(j)} is also obtained from the space of direction
vectors generated by the conjugate gradient iterations of the preconditioned seed system. The
preconditioned projection method thus directly produces vectors that approximate the
desired solutions of the non-seed systems. Table 2 lists the preconditioned versions of Projection
Methods I and II.
Projection Method I:
While not all the systems are solved
  Select the kth system as seed
  For iteration i = 0, 1, 2, ... (until the seed system converges)
    For all unsolved systems j
      If j = k then perform the usual CG steps
        σ^{k,k}_i = (r^{k,k}_i)^T r^{k,k}_i / (p^{k,k}_i)^T A^{(k)} p^{k,k}_i
        x^{k,k}_{i+1} = x^{k,k}_i + σ^{k,k}_i p^{k,k}_i
        r^{k,k}_{i+1} = r^{k,k}_i − σ^{k,k}_i A^{(k)} p^{k,k}_i
        p^{k,k}_{i+1} = r^{k,k}_{i+1} + ((r^{k,k}_{i+1})^T r^{k,k}_{i+1} / (r^{k,k}_i)^T r^{k,k}_i) p^{k,k}_i
      else perform Galerkin projection
        α^{k,j}_i = (p^{k,k}_i)^T r^{k,j}_i / (p^{k,k}_i)^T A^{(j)} p^{k,k}_i
        x^{k,j}_{i+1} = x^{k,j}_i + α^{k,j}_i p^{k,k}_i
        r^{k,j}_{i+1} = r^{k,j}_i − α^{k,j}_i A^{(j)} p^{k,k}_i
      end for
    end for
  end while

Projection Method II:
As Projection Method I, except that in the Galerkin projection branch A^{(j)} p^{k,k}_i is
replaced by A^{(k)} p^{k,k}_i, i.e.,
        α^{k,j}_i = (p^{k,k}_i)^T r^{k,j}_i / (p^{k,k}_i)^T A^{(k)} p^{k,k}_i
        x^{k,j}_{i+1} = x^{k,j}_i + α^{k,j}_i p^{k,k}_i
        r^{k,j}_{i+1} = r^{k,j}_i − α^{k,j}_i A^{(k)} p^{k,k}_i
Table 1: Projection Methods I and II. The kth system is the seed for the current
restart. The first and second superscripts denote the restart and the system,
respectively; the subscript denotes the step of the CG method.
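The following NumPy sketch illustrates the seed/projection loop of Table 1 for dense
matrices. It is a simplified illustration only (Projection Method I; the commented line
indicates where Method II would reuse the seed product instead), not the code used in
the experiments reported later.

    import numpy as np

    def seed_projection(A, b, tol=1e-8, max_iter=1000):
        """A: list of SPD matrices, b: list of right-hand sides (NumPy arrays)."""
        s = len(A)
        x = [np.zeros_like(bi) for bi in b]
        solved = [False] * s
        while not all(solved):
            k = solved.index(False)              # pick the first unsolved system as seed
            r = [bi - Ai @ xi for Ai, bi, xi in zip(A, b, x)]
            p = r[k].copy()                      # seed search direction
            for _ in range(max_iter):
                Ap_seed = A[k] @ p
                sigma = (r[k] @ r[k]) / (p @ Ap_seed)
                x[k] = x[k] + sigma * p
                r_new = r[k] - sigma * Ap_seed
                for j in range(s):               # Galerkin projection for non-seed systems
                    if j != k and not solved[j]:
                        Ap_j = A[j] @ p          # Projection Method II would reuse Ap_seed here
                        alpha = (p @ r[j]) / (p @ Ap_j)
                        x[j] = x[j] + alpha * p
                        r[j] = r[j] - alpha * Ap_j
                beta = (r_new @ r_new) / (r[k] @ r[k])
                r[k] = r_new
                p = r_new + beta * p
                if np.linalg.norm(r[k]) <= tol * np.linalg.norm(b[k]):
                    break
            solved[k] = True
            for j in range(s):                   # non-seed systems may already be accurate enough
                if not solved[j] and np.linalg.norm(b[j] - A[j] @ x[j]) <= tol * np.linalg.norm(b[j]):
                    solved[j] = True
        return x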
Preconditioned Projection Method I:
While not all the systems are solved
  Select the kth system as seed
  For iteration i = 0, 1, 2, ... (until the seed system converges)
    For all unsolved systems j
      If j = k then perform the usual preconditioned CG steps
        σ^{k,k}_i = (r^{k,k}_i)^T z^{k,k}_i / (p^{k,k}_i)^T A^{(k)} p^{k,k}_i
        x^{k,k}_{i+1} = x^{k,k}_i + σ^{k,k}_i p^{k,k}_i
        r^{k,k}_{i+1} = r^{k,k}_i − σ^{k,k}_i A^{(k)} p^{k,k}_i
        z^{k,k}_{i+1} = (C^{(k)})^{-1} r^{k,k}_{i+1}          (preconditioning)
        p^{k,k}_{i+1} = z^{k,k}_{i+1} + ((r^{k,k}_{i+1})^T z^{k,k}_{i+1} / (r^{k,k}_i)^T z^{k,k}_i) p^{k,k}_i
      else perform Galerkin projection
        α^{k,j}_i = (p^{k,k}_i)^T r^{k,j}_i / (p^{k,k}_i)^T A^{(j)} p^{k,k}_i
        x^{k,j}_{i+1} = x^{k,j}_i + α^{k,j}_i p^{k,k}_i
        r^{k,j}_{i+1} = r^{k,j}_i − α^{k,j}_i A^{(j)} p^{k,k}_i
      end for
    end for
  end while

Preconditioned Projection Method II:
As above, with A^{(j)} p^{k,k}_i replaced by A^{(k)} p^{k,k}_i in the Galerkin projection branch.
Table 2: Preconditioned Projection Methods I and II.
We emphasize that in [6, 19, 21, 23, 24], the authors only considered using the projection
method for solving linear systems with the same coefficient matrix but different right-hand sides.
In this paper, we use Projection Methods I and II to solve linear systems with different coefficient
matrices and right-hand sides. An important question regarding the approximation obtained
from the above process is its accuracy. For Projection Method I, it is not easy to derive error
bounds, since the direction vectors generated for the seed system A^{(k)} x^{(k)} = b^{(k)} are only A^{(k)}-
orthogonal but are not A^{(j)}-orthogonal in general. In the following discussion, we only analyze
Projection Method II. However, the numerical results in Section 4 show that Projection Method I is
very efficient for some applications and is generally faster convergent than Projection Method
II.
2.1 Analysis of Projection Method II
For Projection Method II, we have the following lemma in exact arithmetic.
Lemma 1 Assume that a seed system A^{(k)} x^{(k)} = b^{(k)} has been selected. Using Projection
Method II, the approximate solution of the non-seed system A^{(j)} x^{(j)} = b^{(j)} at the ith iteration
is given by
x^{k,j}_i = x^{k,j}_0 + V^k_i (T^k_i)^{-1} (V^k_i)^T r̃^{k,j}_0,                         (2.6)
where x^{k,j}_ℓ is the ℓth iterate of the non-seed system, r̃^{k,j}_0 = b^{(j)} − A^{(k)} x^{k,j}_0, V^k_i contains
the Lanczos vectors generated by i steps of the Lanczos algorithm when the seed system
A^{(k)} x^{(k)} = b^{(k)} is solved by the Lanczos algorithm, and T^k_i = (V^k_i)^T A^{(k)} V^k_i.
Proof: Let the columns of V^k_i be the orthonormal basis vectors of the i-dimensional
Krylov subspace generated by i steps of the Lanczos method. Then we have the following
well-known three-term recurrence
A^{(k)} V^k_i = V^k_i T^k_i + β^k_{i+1} v^k_{i+1} e_i^T,
where e_i is the ith column of the identity matrix and β^k_{i+1} is a scalar. From (2.4) (or see [24]), the
approximate solution x^{k,j}_i of the non-seed system is computed in the subspace generated by the
direction vectors {p^{k,k}_0, ..., p^{k,k}_{i−1}} generated from the seed iteration. However, this subspace
is exactly the subspace spanned by the columns of V^k_i. Therefore, we have
x^{k,j}_i ∈ x^{k,j}_0 + span(V^k_i).
Moreover, it is easy to check from (2.4) and (2.5) that
(p^{k,k}_ℓ)^T (b^{(j)} − A^{(k)} x^{k,j}_i) = 0,   ℓ = 0, 1, ..., i − 1.
It follows that the solution x^{k,j}_i can be obtained by the Galerkin projection onto the Krylov
subspace K^{(k)} generated by the seed system. Equivalently, x^{k,j}_i can be determined by solving
the following problem: find x ∈ x^{k,j}_0 + span(V^k_i) such that (V^k_i)^T (b^{(j)} − A^{(k)} x) = 0.
Noting that the solution is x^{k,j}_0 + V^k_i ((V^k_i)^T A^{(k)} V^k_i)^{-1} (V^k_i)^T r̃^{k,j}_0, the result follows.
To analyze the error bound of Projection Method II, without loss of generality, consider only
two symmetric positive definite n-by-n linear systems:
A (1) x
The eigenvalues and normalized eigenvectors of A (i) are denoted by - (i)
k and q (i)
k respectively and
2. The theorem below gives error bounds for Projection
Method II for solving multiple linear systems with different coefficient matrices and right-hand
sides.
Theorem 1 Suppose the first linear system A (1) x is solved to the desired accuracy in
steps. Let x 1;2
0 be the solution of the second system A (2) x obtained from the
projection onto Km generated by the first system, with zero vector as the initial guess of the
second system (x 0;2
the eigen-decomposition of x
0 be expressed as
Then the eigenvector components c k can be bounded by:
where
6 (q (2)
Here Vm is the orthonormal vectors of Km , P ?
A (1) is the A (1) -orthogonal
projection onto Km and
m A (1) Vm is the matrix representation of the projection of A (1)
onto Km .
Proof: By (2.6), we get x 1;2
x
Since Vm is the orthogonal vectors of Km and
we have kVm
It follows that
6 (q (2)
6 (q (2)
Theorem 1 basically states that the size of the eigenvector component c k is bounded by E k
and F . If the Krylov subspace Km generated by the seed system contains the eigenvectors q (2)
well, then the projection process will kill off the eigenvector components of the initial error of
the non-seed system, i.e., E k is very small. On the other hand, F depends essentially on how
different the system A (2) x (2) is from the previous one A (1) x In particular, when
is small, then F is also small.
We remark that when A A (2) and b (1) 6= b (2) , the term F becomes zero, and as q (1)
the
6 (q (1)
Km )k. It is well-known that the Krylov subspace Km
generated by the seed system contains the eigenvectors q (1)
k well. In particular, Chan and Wan
[6] have the following result about the estimate of the bound sin
6 (q (1)
6
\Gamma- (1)
(- (1)
is the Chebyshev polynomial of degree j. Then
sin
6 (q (1)
If we assume that the eigenvalues of A (1) are distinct, then Tm\Gammak (1+2- k ) grows exponentially
as m increases and therefore the magnitude sin
6 (q (1)
very small for sufficiently large m.
It implies that the magnitude E k is very small when m is sufficiently large. Unfortunately, we
cannot have this result in the general case since q (1)
k , except in some special cases that
will be discussed in the next section.
3 Applications of Galerkin Projection Methods
In this section, we consider using the Galerkin projection method for solving multiple linear
systems arising in two particular applications from image restorations and recursive least squares
computations. In these applications, the coefficient matrices differ by a parameterized identity
matrix or a low rank matrix. We note from Theorem 1 that the theoretical error bound of the
projection method depends on E k and F . In general, it is not easy to refine the error bound E k
and F . However, in these cases, the error bound E k and F can be further investigated.
3.1 Tikhonov Regularization in Image Restorations
Image restoration refers to the removal or reduction of degradations (or blur) in an image using
a priori knowledge about the degradation phenomena; see for instance [13]. When the quality
of the images is degraded by blurring and noise, important information remains hidden and
cannot be directly interpreted without numerical processing. In matrix-vector notation, the
linear algebraic form of the image restoration problem for an n-by-n pixel image is given as
follows:
where b, x, and j are n 2 -vectors and A is an n 2 -by-n 2 matrix. Given the observed image b, the
matrix A which represents the degradation, and possibly, the statistics of the noise vector j, the
problem is to compute an approximation to the original signal x.
Because of the ill-conditioning of A, naively solving will lead to extreme instability
with respect to perturbations in b, see [13]. The method of regularization can be used to
achieve stability for these problems [1, 3]. In the classical Tikhonov regularization [12], stability
is attained by introducing a stabilizing operator D (called a regularization operator), which
restricts the set of admissible solutions. Since this causes the regularized solution to be biased,
a scalar -, called a regularization parameter, is introduced to control the degree of bias. More
specifically, the regularized solution is computed as the solution to
min
b#
A
-D
or min
The term kDx(-)k 2
2 is added in order to regularize the solution. Choosing D as a kth order
difference operator matrix forces the solution to have a small kth order derivative. When
the rectangular matrix has full column rank, one can find the solution by solving the normal
equations
The regularization parameter - controls the degree of smoothness (i.e., degree of bias) of
the solution, and is usually small. Choosing - is not a trivial problem. In some cases a priori
information about the signal and the degree of perturbations in b can be used to choose - [1],
or generalized cross-validation techniques may also be used, e.g., [3]. If no a priori information
is known, then it may be necessary to solve (3.10) for several values of -. For example, in the
L-curve method discussed in [7], choosing the parameter - requires solving the linear systems
with different values of -. This gives rise to multiple linear systems which can be solved by our
proposed projection methods.
In some applications [13, 5], the regularization operator D can be chosen to be the identity
matrix. Consider for simplicity two linear systems:
2:
In this case, we can employ Projection Method I to solve these multiple linear systems as the
matrix-vector product (- 2 I in the non-seed iteration can be computed cheaply by
adding (- 1 I +A T A)p generated from the seed iteration and (- together. Moreover, we
can further refine the error bound of Projection Method II in Theorem 1. Now assume that m
steps of the conjugate gradient algorithm have been performed to solve the first system. We
note in this case that the eigenvectors of the first and the second linear systems are the same,
i.e., q (1)
k . Therefore, we can bound sin
6 (q (1)
using Lemma 2. We shall prove that if
the Krylov subspace of the first linear system contains the extreme eigenvectors well, the bound
for the convergence rate is effectively the classical conjugate gradient bound but with a reduced
condition number.
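As a small illustration of the cheap non-seed product just described, the NumPy sketch
below forms (λ₂ I + AᵀA)p from the seed product (λ₁ I + AᵀA)p without a second
multiplication by A; the matrix and the parameter values are placeholders.

    import numpy as np

    def nonseed_matvec(seed_product, lam_seed, lam_nonseed, p):
        """Return (lam_nonseed*I + A^T A) p given seed_product = (lam_seed*I + A^T A) p."""
        return seed_product + (lam_nonseed - lam_seed) * p

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 10))
    p = rng.standard_normal(10)
    lam1, lam2 = 0.072, 0.036
    v1 = lam1 * p + A.T @ (A @ p)               # computed once, in the seed iteration
    v2 = nonseed_matvec(v1, lam1, lam2, p)      # non-seed product, no extra multiply by A
    assert np.allclose(v2, lam2 * p + A.T @ (A @ p))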
Theorem 2 Let x 1;2
0 be the solution of the second system obtained from the projection onto Km
generated by the first system. The bound for the A (2) -norm of the error vector after i steps of
the conjugate gradient process is given by
A (2) - 4kx
x 1;2
A (2)
x 1;2
i is ith iterate of the CG process for A (2) x with the projection of x
span fq (1)
'+1 is the reduced condition number of
A (2) and
- (2)
Proof: We first expand the eigen-components of x
It is well-known [10] that there exists a polynomial -
of degree at most i and constant term
1 such that
x 1;2
x 1;2
By using properties of the conjugate gradient iteration given in [10], we have
A
A (2)
A (2)
A (2)
x 1;2
Now the term kx
A (2) can be bounded by the classical CG error estimate,
x 1;2
A (2) - 4kx
x 1;2
A (2)
Noting that
using
Theorem 1 and Lemma 2, the result follows by substitution (2.7) into (3.12).
We see that the perturbation term ffi contains two parts. One depends on the ratio - 2 =- 1
of the regularization parameters between two linear systems and the other depends on how well
the Krylov subspace of the seed system contains the extreme eigenvectors. We remark that the
regularization parameter - in practice is always greater than 0 in image restoration applications
because of the ill-conditioning of A. In particular, - 1 6= 0. If the ratio - 2 =- 1 is near to 1, then
the magnitude of this term will be near to zero. On the other hand, according to Lemma 2, the
Galerkin projection will kill off the extreme eigenvector components and therefore the quantity
in (3.11) will be also small for k close to 1. Hence the perturbation term ffi becomes
very small and the CG method, when applied to solve the non-seed system, converges faster
than the usual CG process.
3.2 Recursive Least Squares Computations in Signal Processing
Recursive least squares (RLS) computations are used extensively in many signal processing and
control applications; see Alexander [2]. The standard linear least squares problem can be posed
as follows: Given a real p-by-n matrix X with full column rank n (so that X T X is symmetric
positive definite) and a p-vector b, find the n-vector w that solves
min w
In RLS computations, it is required to recalculate w when observations (i.e., equations) are
successively added to, or deleted from, the problem (3.13). For instance, in many applications
information arrives continuously and must be incorporated into the solution w. This is called
updating. It is sometimes important to delete old observations and have their effect removed
from w. This is called downdating and is associated with a sliding data window. Alternatively,
an exponential forgetting factor fi, with instance [2]), may be incorporated
into the updating computations to exponentially decay the effect of the old data over time. The
use of fi is associated with an exponentially-weighted data window.
3.2.1 Rank-1 Updating and Downdating Sliding Window RLS
At the time step t, the data matrix and the desired response vector are given by
d t\Gammap+1
respectively, where p is the length of sliding window (one always assumes that p - n). We solve
the following least squares problem: min w(t) Now we assume that a row
is added and a row is removed at the step t + 1. The right-hand-side
desired response vector modified in a corresponding fashion. One now seeks to solve
the modified least squares problem min w(t+1) for the updated least
squares estimate vector w(t + 1) at the time step t + 1. We note that its normal equations are
given by
Therefore, the coefficient matrices at the time step t and t differ by a rank-2 matrix.
3.2.2 Exponentially-weighted RLS
For the exponentially-weighted case, the data matrix X(t) and desired response vector d(t) at
the time step t are defined [2] recursively by
and
where fi is the forgetting factor, and x T
. The RLS algorithms recursively solve for the least squares estimator w(t) at time t,
with t - n. The least squares estimator at the time t and t can be found by solving the
corresponding least squares problems and their normal equations are given by
and
respectively. We remark that these two coefficient matrices differ by a rank-1 matrix plus a
scaling.
3.2.3 Multiple Linear Systems in RLS computations
We consider multiple linear systems in RLS computations, i.e., we solve the following least
squares problem successively
where s is an arbitrary block size of RLS computations. The implementation of recursive least
squares estimators have been proposed and used [8]. Their algorithms updates the filter coefficients
by minimizing the average least squares error over a set of data samples. For instance,
the least squares estimates can be computed by modifying the Cholesky factor of the normal
equations with O(n 2 ) operations per adaptive filter input [20]. For our approach, we employ the
Galerkin projection method to solve the multiple linear systems arising from sliding window or
exponentially-weighted RLS computations.
For the sliding window RLS computation with rank-1 updating and downdating, by (3.15),
the multiple linear systems are given by
1st system : X(t) T
\Theta
s
s
s
s
For the exponentially-weighted case, by (3.16), the multiple linear systems are given by
1st system : X(t) T
\Theta
s
s
According to (3.18), the consecutive coefficient matrices only differ by a rank-2 matrix in
the sliding data window case. From (3.19), the consecutive coefficient matrices only differ by a
rank-1 matrix and the scaled coefficient matrix in the exponentially-weighted case. In these RLS
computations, Projection Method I can be used to solve these multiple linear systems as the
matrix-vector product in the non-seed iteration can be computed inexpensively. For instance,
the matrix-vector product for the new system can be computed by
is generated from the seed iteration. The extra cost is some inner
products. We remark that for the other linear systems in (3.18) and (3.19), we need more inner
products because the coefficient matrices X(t) T X(t) and differ by a rank-s
or rank-2s matrices.
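The NumPy sketch below illustrates how the non-seed products can reuse the seed product
through these low-rank corrections; the recursions for X(t)ᵀX(t) are written out under the
standard exponentially-weighted and sliding-window definitions above, and all names are
placeholders.

    import numpy as np

    def expweighted_matvec(seed_product, beta, x_new, p):
        """X(t+1)^T X(t+1) p = beta^2 X(t)^T X(t) p + x_new (x_new^T p)."""
        return beta**2 * seed_product + x_new * (x_new @ p)

    def sliding_window_matvec(seed_product, x_new, x_old, p):
        """Rank-2 correction: add the new row, remove the oldest row."""
        return seed_product + x_new * (x_new @ p) - x_old * (x_old @ p)

    # quick check of the exponentially-weighted identity
    rng = np.random.default_rng(1)
    X = rng.standard_normal((30, 8))
    x_new = rng.standard_normal(8)
    beta, p = 0.99, rng.standard_normal(8)
    A_t = X.T @ X
    A_next = beta**2 * A_t + np.outer(x_new, x_new)
    assert np.allclose(A_next @ p, expweighted_matvec(A_t @ p, beta, x_new, p))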
We analyze below the error bound given by Projection Method II for the case that the
coefficient matrices differ by a rank-1 matrix, i.e.,
A
where r has unit 2-norm and each component is greater than zero. For the exponentially-
weighted case, we note that
By using the eigenvalue-eigenvector decomposition of A (1) , we obtain
A
with
is a diagonal matrix containing eigenvalues - (1)
i of A (1) and
r. It has been shown in [11] that if - (1)
k for all k, then the eigenvalues - (2)
k of A (2)
can be computed by solving the secular equation
[(q (1)
(- (1)
0:
Moreover, the eigenvectors q (2)
k of A (2) can be calculated by the formula:
q (2)
Theorem 3 Suppose the first linear system A (1) x is solved to the desired accuracy in m
CG steps. Then the eigenvector components c k of the second system are bounded by jc
6 (q (1)
and
(q (1)
(- (1)
(q (1)
where fq (1)
i g is the orthonormal eigenvectors of A (1) and Km is the Krylov subspace generated
for the first system.
Proof: We just note from Theorem 1 that jc k j - j(P ?
By using (3.20), Theorem 1 and Lemma 2, we can analyze the term j(P ?
6 (q (1)
Since jfl i;k j and j sin
6 (q (1)
are less than 1, we have
small and large i
6 (q (1)
remaining i
From Lemma 2, for i close to 1 or n,
6 (q (1)
sufficiently small when m is large.
Moreover, we note that if ae ? 0, then
see [10]. Therefore, if the values (q (1)
are about the same magnitude for each eigenvector q (1)
then the maximum value of jfl i;k j is attained at either may expect that the
second term of the inequality (3.21) is small when k is close to 1 or n. By combining these facts,
we can deduce that E k is also small when k is close to 1 or n. On the other hand, if the scalar ae
is small (i.e., the 2-norm of rank-1 matrix is small), then F is also small. To illustrate the result,
we apply Projection Method II to solve A (1) x
b (1) and b (2) are random vectors with unit 2-norm. Figures 1 and 2
show that some of the extreme eigenvector components of b (2) are killed off by the projection
especially when jaej is small. This property suggests that the projection method is useful to solve
multiple linear systems arising from recursive lease squares computations. Numerical examples
will be given in the next section to illustrate the efficiency of the method.
In this section, we provide experimental results of using Projection Methods I and II to solve
multiple linear systems (1.1). All the experiments are performed in MATLAB with machine
. The stopping criterion is: kr k;j
tol is the tolerance we
used. The first and the second examples are Tikhonov regularization in image restoration and
the recursive least squares estimation, exactly as discussed in x3. The coefficient matrices A (i) 's
have the same eigenvectors in the Example 1. In Example 2, the coefficient matrices A (i) 's differ
component number
log
of
the
component
RHS before projection
component number
log
of
the
component
RHS after projection
(a) (b)
component number
log
of
the
component
RHS before projection
component number
log
of
the
component
RHS after projection
(c) (d)
Figure
1: Size distribution of the components of (a) the original right hand side b (2) , (b) b (2)
after Galerkin projection when ae = 1. Size distribution of the components of (c) the original
right hand side b (2) , (d) b (2) after Galerkin projection when
component number
log
of
the
component
RHS before projection
component number
log
of
the
component
RHS after projection
(a) (b)
component number
log
of
the
component
RHS before projection
component number
log
of
the
component
RHS after projection
(c) (d)
Figure
2: Size distribution of the components of (a) the original right hand side b (2) , (b) b (2)
after Galerkin projection when ae = \Gamma1. Size distribution of the components of (c) the original
right hand side b (2) , (d) b (2) after Galerkin projection when
Linear Systems (1) (2) (3) (4) Total
Starting with Projection Method I 36 37 43
Starting with Projection Method II 36 48 55 76 205
Starting with previous solution 36 54 66 87 243
Starting with random initial guess 38
Starting with Projection Method I 9 9 9 11 38
using preconditioner
Starting with Projection Method II 9 9 11
using preconditioner
Starting with previous solution 9 13 16 23 61
using preconditioner
Table
3: (Example 1) Number of matrix-vector multiplies required for convergence of all the
systems. Regularization parameter
by a rank-1 or rank-2 matrices. We will see that the extremal eigenvector components of the
right-hand sides are effectively reduced after the projection process. Moreover, the number of
iterations required for convergence when we employ the projected solution as initial guess is less
than that required in the usual CG process.
Example 1. We consider a 2-dimensional deconvolution problem arising in
ground-based atmospheric imaging and try to remove the blurring in an image (see Figure
3(a)) resulting from the effects of atmospheric turbulence. The problem consists of a 256-by-
256 image of an ocean reconnaissance satellite observed by a simulated ground-based imaging
system together with a 256-by-256 image of a guide star (Figure 3(b)) observed under similar
circumstances. The data are provided by the Phillips Air Force Laboratory at Kirkland AFB,
NM through Prof. Bob Plemmons at Wake Forest University. We restore the image using the
identity matrix as the regularization operator suggested in [5] and therefore solve the linear
systems (3.10) with different regularization parameters -. We also test the effectiveness of the
preconditioned projection method. The preconditioner we employed here is the block-circulant-
circulant-block matrix proposed in [5].
Table
3 shows the number of matrix-vector multiplies required for the convergence of all
the systems. Using the projection method, we save on number of matrix-vector multiplies
in the iterative process with or without preconditioning. From Table 3, we also see that the
performance of Projection Method I is better than that of Projection Method II. For comparison,
we present the restorations of the images when the regularization parameters are 0.072, 0.036,
and 0.009 in
Figure
3. We see that when the value of - is large, the restored image is very
smooth, while the value of - is small, the noise is amplified in the restored image. By solving
these multiple linear systems successively by projection method, we can select Figure 3(e) that
presents the restored image better than the others.
(a) (b)
(c) (d)
Figure
3: (Example 1) Observed Image (a), guide star image (b), restored images using regularization
parameter
Linear Systems (1) (2) (3) (4) (5) Total
Starting with Projection Method I 45 31 28 25 24 153
Starting with Projection Method II 45 37
Starting with previous solution 45 43 44 42 40 214
(a)
Linear Systems (1) (2) (3) (4) (5) Total
Starting with Projection Method I 68 51 45 36
Starting with Projection Method II 68 55
Starting with previous solution 68 61 59 56 54 308
(b)
Table
4: (Example 2) Number of matrix-vector multiplies required for convergence of all the
systems. (a) Exponentially-weighted RLS computations and (b) Sliding window RLS computation
Example 2. In this example, we test the performance of Projection Methods
I and II in the block (sliding window and exponentially-weighted) RLS computations.
We illustrate the convergence rate of the method by using the adaptive Finite Impulse Response
system identification model, see [15]. The second order autoregressive process
is a white noise process with variance being 1, is used to
construct the data matrix X(t) in x3.2. The reference (unknown) system w(t) is an n-th order
FIR filter. The Gaussian white noise measurement error with variance 0.025 is added into the
desired response d(t) in x3.2. In the tests, the forgetting factor fi is 0.99 and the order n of filter
is 100.
In the case of the exponentially-weighted RLS computations, the consecutive systems differ
by a rank-1 positive definite matrix, whereas in the case of the sliding window computations, the
consecutive systems differ by the sum of a rank-1 positive definite matrix and a rank-1 negative
definite matrix. Table 4 lists the number of matrix-vector multiplies required for the convergence
of all the systems arising from exponentially-weighted and sliding window RLS computations.
We observe that the performance of Projection Method I is better than that of Projection Method
II. The projection method requires less matrix-vector multiplies than that using the previous
solution as an initial guess. We note from Figures 4 and 5 that the eigenvector components of
b (2) are effectively reduced after projection in both cases of exponentially-weighted and sliding
window RLS computations. We see that the decreases of eigenvector components when using
Projection Method I are indeed greater than those when using Projection Method II.
In the next three examples, we consider more general coefficient matrices, i.e., the consecutive
linear systems do not differ by the scaled identity matrix and rank-1 or rank-2 matrices. In these
examples, the matrix-vector products for the non-seed iteration may not be computed cheaply,
component number
log
of
the
component
RHS before Projection
component number
log
of
the
component
RHS using Projection Method
(a) (b)
component number
log
of
the
component
RHS using Projection Method II
component number
log
of
the
component
RHS using the previous solution as initial guess
(c) (d)
Figure
4: (Example 2) Exponentially-weighted RLS computations. Size distribution of the
components of (a) the original right hand side b (2) , (b) b (2) after using Projection Method I, (c)
b (2) after using Projection Method II, (d) b (2) \Gamma A (2) x (1) (using the previous solution as an initial
component number
log
of
the
component
RHS before projection
component number
log
of
the
component
RHS using Projection Method
(a) (b)
component number
log
of
the
component
RHS using Projection Method II
component number
log
of
the
component
RHS using the previous solution as initial guess
(c) (d)
Figure
5: (Example 2) Sliding window RLS computations. Size distribution of the components of
(a) the original right hand side b (2) , (b) b (2) using Projection Method I, (c) b (2) using Projection
we therefore only apply Projection Method II to solve the multiple linear systems. However, the
same phenomena as in Examples 1 and 2 is observed in these three examples as well.
Example 3 In this example, we consider a discrete ill-posed problem, which
is a discretization of a Fredholm integral equation of the first kind
a
The particular integral equation that we shall use is a one dimensional model problem in image
reconstruction [7] where an image is blurred by a known point-spread function. The desired
solution f is given by
while the kernel K is the point spread function of an infinitely long slit given by
ae sin[-(sin s
We use collocation with n (=64) equidistantly spaced points in [\Gamma-=2; -=2] to derive the matrix
A and the exact solution x. Then we compute the exact right-hand sides
perturb it by uncorrelated errors (white noise) normally distributed with zero mean and standard
derivation 10 \Gamma4 . Here we choose a matrix D equal to the second derivative operator (D =
Different regularization parameters - are used to compute the L-curve
(see
Figure
and test the performance of the Projection Method II for solving multiple linear
systems
s:
We emphasize that the consecutive systems do not differ by the scaled identity matrix.
Table
5 shows the number of iterations required for convergence of all 10 systems using
Projection Method II and using the previous solution as initial guess having the same residual
norm. We see that the projection method requires 288 matrix-vector multiplies to solve all
the systems, but the one using the previous solution as initial guess requires 365 matrix-vector
multiplies. In particular, the tenth system can be solved without restarting the conjugate
gradient process after the projection.
Example 4 We consider the integral equation
Z 2-f(t)dt
corresponding to the Dirichlet problem for the Laplace equation in the interior of an ellipse with
semiaxis c - d ? 0. We solve the case where the unique solution and the right-hand side are
given by
Linear Systems (1) (2) (3) (4) (5) (6) (7) (8)
Starting with Projection 79 38 33 25 27 26 23 21 15 1 288
Method II
Starting with previous 79 44 37 34
solution
Table
5: (Example 3) Number of matrix-vector multiplies required for convergence of all the
systems with -
6.4 6.5 6.6 6.7 6.8 6.9 7 7.1 7.2
least squares residual norm
solution
semi-norm
Figure
(Example 3) The Tikhonov L-curve with regularization parameters used in Table 4.
# of matrix-vector multiply
log
of
residual
norm
Figure
7: (Example 4) The convergence behaviour of all the systems (i)
2:3822 and
d)=(c+d). The coefficient matrices A (k) and the right hand sides b
are obtained by discretization of the integral equation (4.22). The size of all systems is 100.
The values of c and d are arbitrary chosen from the intervals [2; 5] and [0; 1] respectively. We
emphasize that in this example, the consecutive discretized systems do not differ by low rank or
small norm matrices.
The convergence behaviour of all the systems is shown in Figure 7. In the plot, each steeply declining line denotes the convergence of a seed system (and, in the last restart, of the non-seed systems). Note that we plot the residual norm against the cost (the number of matrix-vector multiplies) in place of the iteration number so that we may compare the efficiency of these methods. We remark that the shape of the plot obtained is similar to the numerical results given in [6] for the Galerkin projection method for solving linear systems with multiple right hand sides. If we use the solution of the second system as an initial guess for the third system, the number of iterations required is 13. However, the number of iterations required is just 8 for Projection Method II to have the same residual norm as that of the previous solution method; see Figure 8.
Figure
9 shows the components of the corresponding right-hand side of the third system before
the Galerkin projection, after the projection and using the previous solution as initial guess.
The figure clearly reveals that the eigenvector components of b (3) are effectively reduced after
the projection.
Example 5. The matrices for the final set of experiments correspond to the three-point centered discretization of the operator -(d/dx)(a(x) du/dx) on [0, 1], where the function a(x) depends on two parameters c and d. The discretization is performed using a grid size of h = 1/65, yielding matrices of size 64 with different values of c and d. The right hand sides of these systems are generated randomly with 2-norm equal to 1. We remark that the consecutive linear systems do not differ by low rank or small norm matrices in this
Figure 8: (Example 4) The convergence behaviour of the third system, plotted as the logarithm of the residual norm against the number of CG iterations: (a) with the projected solution as initial guess, (b) with the previous solution vector as initial guess, and (c) with a random vector as initial guess.
Linear Systems                        (1)  (2)  (3)  (4)  (5)  (6)  (7)  (8)
Starting with Projection Method II     83    .    .    .    .    .    .    .
Starting with previous solution         .    .    .    .    .    .    .    .

Table 6: (Example 5) Number of matrix-vector multiplies required for convergence of all the systems.
example.
Table
6 shows the number of iterations required for convergence of all the systems using
Projection Method II and using previous solution as initial guess having the same residual
norm. We observe from the results that the one using the projected solution as the initial
guess converges faster than that using the previous solution as initial guess. Figure 10 shows
the components of the corresponding right-hand side of the seventh system before the Galerkin
projection and after the projection. Again, it illustrates that the projection can reduce the
eigenvector components effectively.
Figure 9: (Example 4) Size distribution of the components (component number versus the logarithm of the component size) of (a) the original right hand side b^(3), (b) b^(3) after Galerkin projection, and (c) b^(3) - A^(3) x^(2) (using the previous solution as an initial guess).
Figure 10: (Example 5) Size distribution of the components (component number versus the logarithm of the component size) of (a) the original right hand side b^(7), (b) b^(7) after Galerkin projection, and (c) b^(7) - A^(7) x^(6) (using the previous solution as an initial guess).
Concluding Remarks
In this paper, we developed Galerkin projection methods for solving multiple linear systems. Experimental results show that the method is efficient. We end with some concluding remarks about extensions of the Galerkin projection method.
1. A block generalization of the Galerkin projection method can be employed in many appli-
cations. The method is to select more than one system as seed so that the Krylov subspace
generated by the seed is larger and the initial guess obtained from the Galerkin projection
onto this subspace is expected to be better. One drawback of the block method is that
it may break down when singularity of the matrices occurs arising from the conjugate
gradient process. For details about block Galerkin projection methods, we refer to Chan
and Wan [6].
2. The literature for nonsymmetric systems with multiple right-hand sides is vast. Several of the methods that have been proposed are block generalizations of solvers for nonsymmetric systems: the block biconjugate gradient algorithm [17, 16], block GMRES [25], and block QMR [4, 9]. Recently, Simoncini and Gallopoulos [23] proposed a hybrid method by combining the Galerkin projection process and the Richardson acceleration technique to speed up the
convergence rate of the conjugate gradient process. In the same spirit, we can modify
the above Galerkin projection algorithms to solve nonsymmetric systems with multiple
coefficient matrices and right-hand sides.
--R
regularization and super- resolution
Springer Verlag
A Block QMR Method for Computing Multiple Simultaneous Solutions to Complex Symmetric Systems
Generalization of Strang's Preconditioner with Applications to Toeplitz Least Squares Problems
Analysis of Projection Methods for Solving Linear Systems with Multiple Right-hand Sides
Analysis of Discrete Ill-posed Problems by Means of the L-curve
Block Implementation of Adaptive Digital Filters
A Block-QMR Algorithm for Non-Hermitian Linear Systems with Multiple Right-Hand Sides
Matrix Computations
Some Modified matrix Eigenvalue Problems
The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind
Fundamentals of Digital Image Processing
Fast RLS Adaptive Filtering by FFT-Based Conjugate Gradient Iterations
Variable Block CG Algorithms for Solving Large Sparse Symmetric Positive Definite Linear Systems on Parallel Computers
The block conjugate gradient algorithm and related methods
A New Implementation of the Lanczos Method in Linear Problems
A New Look at the Lanczos Algorithm for Solving Symmetric Systems of Linear Equations
On the Lanczos Method for Solving Symmetric Linear Systems with Several Right-Hand Sides
A Memory-conserving Hybrid Method for Solving Linear Systems with Multiple Right-hand Sides
An Iterative Method for Nonsymmetric Systems with Multiple Right-hand Sides
A Conjugate Gradient Algorithm for the Treatment of Multiple Incident Electromagnetic Fields
Étude de quelques méthodes de résolution de problèmes linéaires de grande taille sur multiprocesseur
An Iteration Solution Method for Solving f(A)
--TR | krylov space;conjugate gradient method;multiple linear systems;galerkin projection |
339901 | A Fast Algorithm for Deblurring Models with Neumann Boundary Conditions. | Blur removal is an important problem in signal and image processing. The blurring matrices obtained by using the zero boundary condition (corresponding to assuming dark background outside the scene) are Toeplitz matrices for one-dimensional problems and block-Toeplitz--Toeplitz-block matrices for two-dimensional cases. They are computationally intensive to invert especially in the block case. If the periodic boundary condition is used, the matrices become (block) circulant and can be diagonalized by discrete Fourier transform matrices. In this paper, we consider the use of the Neumann boundary condition (corresponding to a reflection of the original scene at the boundary). The resulting matrices are (block) Toeplitz-plus-Hankel matrices. We show that for symmetric blurring functions, these blurring matrices can always be diagonalized by discrete cosine transform matrices. Thus the cost of inversion is significantly lower than that of using the zero or periodic boundary conditions. We also show that the use of the Neumann boundary condition provides an easy way of estimating the regularization parameter when the generalized cross-validation is used. When the blurring function is nonsymmetric, we show that the optimal cosine transform preconditioner of the blurring matrix is equal to the blurring matrix generated by the symmetric part of the blurring function. Numerical results are given to illustrate the efficiency of using the Neumann boundary condition. | Introduction
A fundamental issue in signal and image processing is blur removal. The signal or image obtained
from a point source under the blurring process is called the impulse response function or the
point spread function. The observed signal or image g is just the convolution of this blurring
function h with the "true" signal or image f . The deblurring problem is to recover f from the
blurred function g given the blurring function h. This basic problem appears in many forms in
signal and image processing [2, 5, 12, 14].
In practice, the observed signal or image g is of finite length (and width) and we use it to
recover a finite section of f . Because of the convolution, g is not completely determined by
f in the same domain where g is defined. More precisely, if a blurred signal g is defined on
the interval [a; b] say, then it is not completely determined by the values of the true signal f
on [a; b] only. It is also affected by the values of f close to the boundary of [a; b] because of
the convolution. The size of the interval that affects g depends on the support of the blurring
function h. Thus in solving f from a finite length g, we need some assumptions on the values of
f outside the domain where g is defined. These assumptions are called the boundary conditions.
The natural and classical approach is to use the zero (Dirichlet) boundary condition [2,
pp.211-220]. It assumes that the values of f outside the domain of consideration are zero. This
results in a blurring matrix which is a Toeplitz matrix in the 1-dimensional case and a block-
Toeplitz-Toeplitz-block matrix in the 2-dimensional case, see [2, p.71]. However, these matrices
are known to be computationally intensive to invert, especially in the 2-dimensional case, see [2,
p.126]. Also ringing effects will appear at the boundary if the data are indeed not close to zero
outside the domain.
One way to alleviate the computational cost is to assume the periodic boundary condition,
i.e., data outside the domain of consideration are exact copies of data inside [12, p.258]. The
resulting blurring matrix is a circulant matrix in the 1-dimensional case and a block-circulant-
circulant-block matrix in the 2-dimensional case. These matrices can be diagonalized by discrete
Fourier matrices and hence their inverses can easily be found by using the Fast Fourier Transforms
see [12, p.258]. However, ringing effects will also appear at the boundary unless
f is close to periodic, and that is not common in practice.
In the image processing literature, other methods have also been proposed to assign boundary
values, see Lagendijk and Biemond [18, p.22] and the references therein. For instance, the
boundary values may be fixed at a local image mean, or they can be obtained by a model-based
extrapolation. In this paper, we consider the use of the Neumann (reflective) boundary condition
for image restoration. It sets the data outside the domain of consideration as reflection of the
data inside. The Neumann boundary condition has been studied in image restoration [21, 3, 18]
and in image compression [25, 20]. In image restoration, the boundary condition restores a
balance that is lost by ignoring the energy that spreads outside of the area of interest [21],
and also minimizes the distortion at the borders caused by deconvolution algorithms [3]. This
approach can also eliminate the artificial boundary discontinuities contributed to the energy
compaction property that is exploited in transform image coding [20].
The use of the Neumann boundary condition results in a blurring matrix that is a Toeplitz-
plus-Hankel matrix in the 1-dimensional case and a block Toeplitz-plus-Hankel matrix with
Toeplitz-plus-Hankel blocks in the 2-dimensional case. Although these matrices have more
complicated structures, we show that they can always be diagonalized by the discrete cosine
transform matrix provided that the blurring function h is symmetric. Thus their inverses can be
obtained by using fast cosine transforms (FCTs). Because FCT requires only real multiplications
and can be done at half of the cost of FFT, see [23, pp.59-60], inversion of these matrices is
faster than that of those matrices obtained from either the zero or periodic boundary conditions.
We also show that the use of the Neumann boundary condition provides an easy way of
estimating the regularization parameter when using the generalized cross-validation. We remark
that blurring functions are usually symmetric, see [14, p.269]. However, in the case where the
blurring function is nonsymmetric, we show that the optimal cosine transform preconditioner
[6] of the blurring matrix is generated by the symmetric part of the blurring function. Thus if
the blurring function is close to symmetric, the optimal cosine transform preconditioner should
be a good preconditioner.
The outline of the paper is as follows. In §2, we introduce the three different boundary conditions. In §3, we show that symmetric blurring matrices obtained from the Neumann boundary condition can always be diagonalized by the discrete cosine transform matrix. In §4, we show that, using the Neumann boundary condition, the generalized cross-validation estimate of the regularization parameter can be done in a straightforward way. In §5, we give the construction of the optimal cosine transform preconditioners for the matrices generated by nonsymmetric blurring functions. In §6, we illustrate by numerical examples from image restoration that our algorithm is efficient. Concluding remarks are given in §7.
2 The Deblurring Problem
For simplicity, we begin with the 1-dimensional deblurring problem. Consider the original signal
$$\tilde f = (\ldots, f_{-m+1}, \ldots, f_0, f_1, \ldots, f_n, f_{n+1}, \ldots, f_{n+m}, \ldots)^t$$
and the blurring function given by
$$h = (\ldots, 0, h_{-m}, \ldots, h_0, \ldots, h_m, 0, \ldots)^t. \qquad (1)$$
The blurred signal is the convolution of h and f~, i.e., the i-th entry g_i of the blurred signal is given by
$$g_i = \sum_{j} h_{i-j}\, f_j. \qquad (2)$$
The deblurring problem is to recover the vector f = (f_1, \ldots, f_n)^t given the blurring function h and a blurred signal g = (g_1, \ldots, g_n)^t of finite length. From (2), we have
$$g_i = \sum_{j=-m+1}^{n+m} h_{i-j}\, f_j, \qquad i = 1, \ldots, n. \qquad (3)$$
Thus the blurred signal g is determined not by f only, but by the boundary data (f_{-m+1}, \ldots, f_0)^t and (f_{n+1}, \ldots, f_{n+m})^t as well. The linear system (3) is underdetermined. To overcome this, we make certain assumptions (called boundary conditions) on the unknown data f_{-m+1}, \ldots, f_0 and f_{n+1}, \ldots, f_{n+m} so as to reduce the number of unknowns.
Before we discuss the boundary conditions, let us first rewrite (3) as
$$T_l f_l + T f + T_r f_r = g, \qquad (4)$$
where f_l = (f_{-m+1}, f_{-m+2}, \ldots, f_0)^t and f_r = (f_{n+1}, \ldots, f_{n+m})^t collect the unknown boundary data, T_l is the n-by-m matrix of blurring coefficients acting on f_l (equation (5)), T is the n-by-n Toeplitz matrix with (i, j) entry h_{i-j}, whose first column is (h_0, h_1, \ldots, h_m, 0, \ldots, 0)^t and first row is (h_0, h_{-1}, \ldots, h_{-m}, 0, \ldots, 0) (equation (6)), and T_r is the n-by-m matrix of blurring coefficients acting on f_r (equation (7)).
2.1 The Zero (Dirichlet) Boundary Condition
The zero (or Dirichlet) boundary condition assumes that the signal outside the domain of the observed vector g is zero [2, pp.211-220], i.e., f_l = f_r = 0, the zero vectors. The matrix system in (4) then becomes
$$T f = g. \qquad (8)$$
We see from (6) that the coefficient matrix T is a Toeplitz matrix.
There are many iterative or direct Toeplitz solvers that can solve the Toeplitz system (8)
with costs ranging from O(n log n) to O(n 2 ) operations, see for instance [19, 16, 1, 7]. In the 2-
dimensional case, the resulting matrices will be block-Toeplitz-Toeplitz-block matrices. Inversion
of these matrices is known to be very expensive, e.g. the fastest direct Toeplitz solver is of O(n 4 )
operations for an n 2 -by-n 2 block-Toeplitz-Toeplitz-block matrix, see [17].
2.2 The Periodic Boundary Condition
For practical applications, especially in the 2-dimensional case, where we need to solve the system efficiently, one usually resorts to the periodic boundary condition. This amounts to setting f_j = f_{n+j} for all j in (3), see [12, p.258]. The matrix system in (4) then becomes
$$B f \equiv (T + \tilde T_l + \tilde T_r)\, f = g, \qquad (9)$$
where \tilde T_l = [\,0 \;\; T_l\,] and \tilde T_r = [\,T_r \;\; 0\,] are the n-by-n Toeplitz matrices obtained by augmenting (n - m) zero columns to T_l and T_r, respectively.
The most important advantage of using the periodic boundary condition is that B so obtained
is a circulant matrix. Hence B can be diagonalized by the discrete Fourier matrix and (9) can
be solved by using three fast Fourier transforms (FFTs) (one for finding the eigenvalues of the
matrix B and two for solving the system, cf (15) below). Thus the total cost is of O(n log n)
operations.
In the 2-dimensional case, the blurring matrix is a block-circulant-circulant-block matrix
and can be diagonalized by the 2-dimensional FFTs (which are tensor-products of 1-dimensional
FFTs) in O(n 2 log n) operations.
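As an illustration of why the periodic boundary condition is computationally attractive, the following Python sketch solves a circulant system with a few FFTs. It assumes only that the first column b1 of the circulant matrix B and the blurred data g are available; the function and variable names are our own.

import numpy as np

def solve_circulant(b1, g):
    """Solve B f = g for a circulant matrix B given its first column b1.
    The eigenvalues of B are the FFT of b1, and B is diagonalized by the
    discrete Fourier matrix, so the solve costs a few FFTs."""
    eig = np.fft.fft(b1)                      # one FFT for the eigenvalues
    f = np.fft.ifft(np.fft.fft(g) / eig)      # two more FFTs to apply B^{-1}
    return f.real                             # b1 and g real, so the solution is real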
2.3 The Neumann Boundary Condition
For the Neumann boundary condition, we assume that the data outside f are a reflection of the data inside f. More precisely, we set
$$f_{1-j} = f_j \quad \text{and} \quad f_{n+j} = f_{n+1-j}, \qquad 1 \le j \le m,$$
in (3). Thus (4) becomes
$$A f \equiv \big( T + (\tilde T_l + \tilde T_r) J \big)\, f = g, \qquad (10)$$
where J is the n-by-n reversal matrix.
We remark that the coefficient matrix A in (10) is neither Toeplitz nor circulant. It is a
Toeplitz-plus-Hankel matrix. Although these matrices have more complicated structures, we
will show in §3 that the matrix A can always be diagonalized by the discrete cosine transform matrix provided that the blurring function h is symmetric, i.e., h_j = h_{-j} for 1 <= j <= m.
It follows that (10) can be solved by using three fast cosine transforms (FCTs) in O(n log n)
operations, see (15) below. This approach is computationally attractive as FCT requires only
real operations and is about twice as fast as the FFT, see [23, pp.59-60]. Thus solving a problem
with the Neumann boundary condition is twice as fast as solving a problem with the periodic
boundary condition.
We will establish similar results in the 2-dimensional case, where the blurring matrices will
be block Toeplitz-plus-Hankel matrices with Toeplitz-plus-Hankel blocks.
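The three boundary conditions differ only in how the columns of the underdetermined system (3) that act on the unknown boundary data are folded back into an n-by-n matrix. The dense Python sketch below, written for a symmetric blurring function, is one way to carry this out; the function name and the convention that h_half stores (h_0, ..., h_m) are our assumptions, and the code is meant for illustration rather than efficiency.

import numpy as np

def blur_matrices_1d(h_half, n):
    """Build the 1-D blurring matrices for the zero, periodic and Neumann
    boundary conditions from the half h_half = (h_0, ..., h_m) of a symmetric
    blurring function (h_{-j} = h_j).  Assumes m < n."""
    m = len(h_half) - 1
    h = np.concatenate([h_half[::-1], h_half[1:]])   # (h_{-m}, ..., h_0, ..., h_m)
    # Full convolution matrix acting on (f_{-m+1}, ..., f_{n+m}):
    W = np.zeros((n, n + 2 * m))
    for i in range(n):
        W[i, i:i + 2 * m + 1] = h[::-1]              # row i picks the coefficients h_{i-j}
    Tl, T, Tr = W[:, :m], W[:, m:m + n], W[:, m + n:]
    A_zero = T                                       # zero BC: f_l = f_r = 0
    A_per = T.copy()                                 # periodic BC: copies of interior data
    A_per[:, -m:] += Tl
    A_per[:, :m] += Tr
    A_neu = T.copy()                                 # Neumann BC: reflected interior data
    A_neu[:, :m] += Tl[:, ::-1]
    A_neu[:, -m:] += Tr[:, ::-1]
    return A_zero, A_per, A_neu

For the Neumann case the returned A_neu is exactly the Toeplitz-plus-Hankel matrix of (10).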
3 Diagonalization of the Neumann Blurring Matrices
3.1 One-Dimensional Problems
We first review some definitions and properties of the discrete cosine transform matrix. Let C
be the n-by-n discrete cosine transform matrix with entries
$$[C]_{ij} = \sqrt{\frac{2 - \delta_{i1}}{n}}\, \cos\!\left( \frac{(i-1)(2j-1)\pi}{2n} \right), \qquad 1 \le i, j \le n, \qquad (11)$$
where δ_{i1} is the Kronecker delta, see [14, p.150]. We note that C is orthogonal, i.e., C^t C = C C^t = I_n.
Also, for any n-vector v, the matrix-vector multiplications Cv and C t v can be
computed in O(n log n) real operations by FCTs; see [23, pp.59-60].
Let C be the space containing all matrices that can be diagonalized by C, i.e.,
$$\mathcal{C} = \{\, C^t \Lambda_n C \mid \Lambda_n \ \text{is an } n\text{-by-}n \text{ real diagonal matrix} \,\}. \qquad (12)$$
For any Q = C^t Λ C in this class, applying C to the first column Q e_1 = C^t Λ C e_1, we see that the eigenvalues [Λ]_{i,i} of Q are given by
$$[\Lambda]_{i,i} = \frac{[C Q e_1]_i}{[C e_1]_i}, \qquad i = 1, \ldots, n. \qquad (13)$$
Hence, the eigenvalues of Q can be obtained by taking an FCT of the first column of Q. In
particular, any matrix in C is uniquely determined by its first column.
Next we give a characterization of the class of matrices C. For any vector v = (v_1, v_2, \ldots, v_n)^t, define its shift σ(v) ≡ (v_2, \ldots, v_n, 0)^t. Define T(v) to be the n-by-n symmetric Toeplitz matrix with v as the first column and H(x, y) to be the n-by-n Hankel matrix with x as the first column and y as the last column.
Lemma 1 (Chan, Chan, and Wong [6], Kailath and Olshevsky [15], Martucci [22]
and Sanchez et al. [24]) Let C be the class of matrices that can be diagonalized by the discrete
cosine transform matrix C. Then
It follows from Lemma 1 that matrices that can be diagonalized by C are some special
Toeplitz-plus-Hankel matrices.
Theorem 1. Let the blurring function h be symmetric, i.e., h_j = h_{-j} for 1 <= j <= m. Then the matrix A given in (10) can be written as
$$A = T(u) + H(\sigma(u), J\sigma(u)), \qquad (14)$$
where u = (h_0, h_1, \ldots, h_m, 0, \ldots, 0)^t. In particular, A can be diagonalized by C.
Proof: By (10), A = T + (\tilde T_l + \tilde T_r) J. By (6), it is clear that T is equal to T(u). From the definitions of T_l and T_r in (5) and (7), it is also obvious that (\tilde T_l + \tilde T_r) J = H(σ(u), Jσ(u)). Hence (14) follows.
By Theorem 1, the solution f of (10) is given by
$$f = C^t \Lambda^{-1} C g, \qquad (15)$$
where Λ is the diagonal matrix holding the eigenvalues of A. By (13), Λ can be obtained in one
FCT. Hence f can be obtained in three FCTs.
We remark that from (14), it is straightforward to construct the Neumann blurring matrix
A from the Dirichlet blurring matrix (6). All we need is to reflect the first column
of T to get the Hankel matrix H(oe(u); J oe(u)) and add it to T . Clearly the storage requirements
of both matrices A and T are the same - we only need to store the first column.
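The remark above translates directly into a three-transform solver. The Python sketch below assumes that A is the Neumann blurring matrix of a symmetric blurring function (so that Theorem 1 applies) and that SciPy's orthonormal DCT-II coincides with the matrix C of (11); both are assumptions of this illustration.

import numpy as np
from scipy.fft import dct, idct

def solve_neumann_1d(A, g):
    """Solve A f = g when A = C^t Lambda C is diagonalized by the orthogonal DCT,
    using three fast cosine transforms: one to get the eigenvalues from the
    first column of A (cf. (13)), two to apply A^{-1} (cf. (15))."""
    n = len(g)
    e1 = np.zeros(n); e1[0] = 1.0
    lam = dct(A[:, 0], norm='ortho') / dct(e1, norm='ortho')
    return idct(dct(g, norm='ortho') / lam, norm='ortho')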
3.2 Two-Dimensional Problems
The results of §3.1 can be extended in a natural way to 2-dimensional image restoration problems.
In this case, one is concerned with solving a least squares problem similar to that in (3), except
that the matrix is now a block matrix. For the zero boundary condition, the resulting blurring
matrix is a block-Toeplitz-Toeplitz-block matrix of the form
$$T = \big[\, T^{(i-j)} \,\big]_{i,j=1}^{n}, \qquad T^{(k)} \equiv 0 \ \text{ for } |k| > m, \qquad (16)$$
where each block T^{(j)} is a Toeplitz matrix of the form given in (7). The first column and row
of T in (16) are completely determined by the blurring function of the blurring process.
With the Neumann boundary condition, the resulting matrix A is a block Toeplitz-plus-
Hankel matrix with Toeplitz-plus-Hankel blocks. More precisely, A is the block analogue of (10): its (i, j)th block is
$$A^{(i-j)} + A^{(i+j-1)} + A^{(i+j-2n-1)}, \qquad A^{(k)} \equiv 0 \ \text{ for } |k| > m, \qquad (17)$$
with each block A^{(j)} being an n-by-n matrix of the form given in (10). We note that the A^{(j)} in
(17) and the T (j) in (16) are related by (14). Thus again it is straightforward to construct the
blurring matrix A from the matrix T or from the blurring function directly. Obviously, storage
requirements of A and T are the same.
We next show that for a symmetric blurring function, the blurring matrix A in (17) can be
diagonalized by the 2-dimensional discrete cosine transform matrix. Hence inversion of A can
be done by using only three 2-dimensional FCTs.
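The 2-dimensional solver is the same idea with 2-D cosine transforms in place of 1-D ones. In the Python sketch below, a_point denotes the image obtained by applying the Neumann-BC blurring operator to the point image with a one in the top-left pixel; it corresponds to the first column of A in (17) reshaped as an image, which is an assumption of this illustration.

import numpy as np
from scipy.fft import dctn, idctn

def solve_neumann_2d(a_point, g):
    """Deblur a 2-D image g with three 2-D fast cosine transforms, assuming the
    blurring operator is diagonalized by the 2-D DCT (Theorem 2).  a_point is
    the blurred point image, so dctn(a_point)/dctn(e1) gives the eigenvalues."""
    e1 = np.zeros(g.shape); e1[0, 0] = 1.0
    lam = dctn(a_point, norm='ortho') / dctn(e1, norm='ortho')
    return idctn(dctn(g, norm='ortho') / lam, norm='ortho')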
Theorem 2. If the blurring function h_{i,j} is symmetric, i.e., h_{i,j} = h_{-i,j} = h_{i,-j} = h_{-i,-j} for all i and j, then A can be diagonalized by the 2-dimensional discrete cosine transform matrix C ⊗ C, where ⊗ is the tensor product.
Proof: We note that C ⊗ C = (C ⊗ I)(I ⊗ C). Since each block A^{(j)} in (17) is of the form given by (14), by Theorem 1, A^{(j)} can be diagonalized by C, i.e., C A^{(j)} C^t = Λ^{(j)}. It follows that
$$(C \otimes C)\, A\, (C^t \otimes C^t) = (C \otimes I)(I \otimes C)\, A\, (I \otimes C^t)(C^t \otimes I) = (C \otimes I)\, \Lambda\, (C^t \otimes I),$$
where Λ ≡ (I ⊗ C) A (I ⊗ C^t) is the block matrix obtained from (17) by replacing each block A^{(k)} with the diagonal matrix Λ^{(k)}.
Let P be the permutation matrix that satisfies
$$P^t\, \Lambda\, P = \operatorname{diag}\big( \tilde A^{(1)}, \tilde A^{(2)}, \ldots, \tilde A^{(n)} \big), \qquad (18)$$
i.e., the (i, j)th entry of the (k, l)th block in Λ is permuted to the (k, l)th entry of the (i, j)th block. Then we have P^t (C ⊗ I) P = (I ⊗ C), and from (18) each matrix \tilde A^{(j)} has the same form as A in (14). In particular, for all j, C \tilde A^{(j)} C^t = \tilde\Lambda^{(j)}, a diagonal matrix. Thus
$$(C \otimes C)\, A\, (C^t \otimes C^t) = (C \otimes I)\, P \operatorname{diag}\big( \tilde A^{(1)}, \ldots, \tilde A^{(n)} \big) P^t\, (C^t \otimes I) = P \operatorname{diag}\big( \tilde\Lambda^{(1)}, \ldots, \tilde\Lambda^{(n)} \big) P^t,$$
which is a permutation of a diagonal matrix and hence is still a diagonal matrix.
4 Estimation of Regularization Parameters
Besides the issue of boundary conditions, it is well-known that blurring matrices are in general
ill-conditioned and deblurring algorithms will be extremely sensitive to noise [12, p.282]. The
ill-conditioning of the blurring matrices stems from the wide range of magnitudes of their eigen-values
[10, p.31]. Therefore, excess amplification of the noise at small eigenvalues can occur. The
method of regularization is used to achieve stability for deblurring problems. In the classical
regularization [10, p.117], stability is attained by introducing a regularization operator
which restricts the set of admissible solutions. More specifically, the regularized solution f(λ) is computed as the solution to
$$\min_{f(\lambda)} \; \| g - A f(\lambda) \|_2^2 + \lambda\, \| D f(\lambda) \|_2^2 . \qquad (19)$$
The term ‖Df(λ)‖_2^2 is added in order to regularize the solution. The regularization parameter λ controls the degree of regularity (i.e., the degree of bias) of the solution.
One can find the solution f(λ) in (19) by solving the normal equations
$$(A^t A + \lambda D^t D)\, f(\lambda) = A^t g. \qquad (20)$$
Usually, ‖Df‖_2 is chosen to be the L_2 norm ‖f‖_2 or the H_1 norm ‖Lf‖_2, where L is the first
order difference operator matrix, see [14, 8, 12]. Correspondingly, the matrix D t D in (20) is
the identity matrix or the discrete Laplacian matrix with some boundary conditions. In the
latter case, if the zero boundary condition is imposed, D t D is just the discrete Laplacian with
the Dirichlet boundary condition. For the periodic boundary condition, D t D is circulant and
can be diagonalized by the FFTs, see for instance [12, p.283]. For the Neumann boundary
condition, D t D is the discrete Laplacian with the Neumann boundary condition, which can be
diagonalized by the discrete cosine transform matrix, see for instance [4]. Thus if we use the
Neumann boundary condition for both the blurring matrix A and the regularization operator
D, then the matrix in (20) can be diagonalized by the discrete cosine transform matrix and
hence its inversion can still be done in three FCTs for any fixed -, cf. (15).
Another difficulty in regularization is the choice of λ. Generalized cross-validation [11] is a technique that estimates λ directly without requiring an estimate of the noise variance. It is based on the concept of prediction errors. For each k = 1, 2, ..., n, let f^k(λ) be the vector that minimizes the error measure
$$\sum_{i \ne k} \big( [g]_i - [A f]_i \big)^2 + \lambda\, \| D f \|_2^2 ,$$
where [Af]_i is the ith element of Af and [g]_i is the ith element of g. If λ is such that f^k(λ) is a good estimate of f, then [A f^k(λ)]_k should be a good approximation of [g]_k on average. For a given λ, the average squared error between the predicted value [A f^k(λ)]_k and the actual value [g]_k is given by
$$\frac{1}{n} \sum_{k=1}^{n} \big( [A f^k(\lambda)]_k - [g]_k \big)^2 .$$
The generalized cross-validation (GCV) function v(λ) is a weighted version of the above error, with weights determined by the entries m_{jj}(λ), the (j, j)th entries of the so-called influence matrix
$$M(\lambda) = A\, (A^t A + \lambda D^t D)^{-1} A^t .$$
In [11], Golub et al. have shown that v(λ) can be written as
$$v(\lambda) = \frac{\frac{1}{n}\, \| (I - M(\lambda))\, g \|_2^2}{\big[ \frac{1}{n} \operatorname{tr}(I - M(\lambda)) \big]^2} .$$
The optimal regularization parameter is chosen to be the λ that minimizes v(λ). Since v(λ)
is a nonlinear function, the minimizer usually cannot be determined analytically. However, if
the Neumann boundary condition is used for both A and D^t D, we can rewrite v(λ) as
$$v(\lambda) = \frac{\frac{1}{n} \sum_{i=1}^{n} \left( \frac{\lambda \beta_i}{\alpha_i^2 + \lambda \beta_i} \right)^{2} [C g]_i^2}{\left[ \frac{1}{n} \sum_{i=1}^{n} \frac{\lambda \beta_i}{\alpha_i^2 + \lambda \beta_i} \right]^{2}},$$
where α_i and β_i represent the eigenvalues of A and D^t D, respectively. We recall that α_i and β_i can be obtained by taking the FCT of the first column of A and D^t D, respectively, since the
matrices can be diagonalized by the discrete cosine transform matrix C. Thus the GCV estimate
of λ can be computed in a straightforward manner, see [13].
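A possible implementation of this GCV computation is sketched below in Python. It assumes that the eigenvalues alpha of A and beta of D^t D have already been obtained by one FCT each of the corresponding first columns, and it uses the eigenvalue form of v(λ) written above (itself a reconstruction); the grid search over lams is only one simple way to locate the minimizer.

import numpy as np
from scipy.fft import dct, idct

def gcv_and_solve(alpha, beta, g, lams):
    """Evaluate the GCV function on a grid of regularization parameters and
    return the minimizing parameter together with the corresponding
    regularized solution of (A^t A + lam D^t D) f = A^t g."""
    n = len(g)
    ghat = dct(g, norm='ortho')
    best = None
    for lam in lams:
        w = lam * beta / (alpha**2 + lam * beta)        # eigenvalues of I - M(lam)
        v = (np.sum((w * ghat)**2) / n) / (np.mean(w)**2)
        if best is None or v < best[0]:
            best = (v, lam)
    lam_opt = best[1]
    fhat = alpha / (alpha**2 + lam_opt * beta) * ghat   # spectral form of the normal equations
    return lam_opt, idct(fhat, norm='ortho')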
For the periodic boundary condition, the GCV estimate can also be computed by a similar
procedure. However, if we use the zero boundary condition, determining the GCV estimate of λ
will require the inversion of a large matrix which is clearly an overwhelming task for any images
of reasonable size.
5 Optimal Cosine Transform Preconditioners
Because all matrices in C are symmetric (see (12)), discrete cosine transform matrices can only diagonalize blurring matrices arising from symmetric blurring functions. For nonsymmetric blurring functions, matrices in C may be used as preconditioners to speed up the convergence of iterative methods such as the conjugate gradient method. Given a matrix A, we define the optimal cosine transform preconditioner c(A) to be the minimizer of ‖Q - A‖_F over all Q in C, where ‖·‖_F is the Frobenius norm.
In [6, 16], c(A) are obtained by solving linear systems. Here we give a simple approach for
finding c(A).
Theorem 3 Let h be an arbitrary blurring function and A be the blurring matrix of h with the
Neumann boundary condition imposed. Then the optimal cosine transform preconditioner c(A)
of A can be found as follows:
1. In the one-dimensional case, c(A) is the blurring matrix corresponding to the symmetric blurring function given by s_j ≡ (h_j + h_{-j})/2 with the Neumann boundary condition imposed.
2. In the 2-dimensional case, c(A) is the blurring matrix corresponding to the symmetric
blurring function given by s i;j j (h i;j +h i;\Gammaj +h \Gammai;j +h \Gammai;\Gammaj )=4 with the Neumann boundary
condition imposed.
Proof: We only give the proof for the one-dimensional case. The proof for the two-dimensional case is similar. We first note that if U and V are symmetric and skew-symmetric matrices respectively, then ‖U + V‖_F^2 = ‖U‖_F^2 + ‖V‖_F^2. Hence, for any Q in C,
$$\| Q - A \|_F^2 = \big\| Q - \tfrac{1}{2}(A + A^t) \big\|_F^2 + \big\| \tfrac{1}{2}(A - A^t) \big\|_F^2 .$$
Since the second term in the right hand side above does not affect the diagonal matrix Λ, the minimizing Λ is given by the diagonal of C · ½(A + A^t) · C^t. It is easy to check that
$$\tfrac{1}{2}(A + A^t) = A_s + H(v, -Jv),$$
where A_s is the blurring matrix generated by the symmetric blurring function s_j = (h_j + h_{-j})/2 with the Neumann boundary condition imposed, and
$$v = \big( \tfrac{1}{2}(h_1 - h_{-1}), \ldots, \tfrac{1}{2}(h_m - h_{-m}), 0, \ldots, 0 \big)^t .$$
We claim that the diagonal entries of C H(v, -Jv) C^t are all zero. If this is true, then the minimizer is given by
$$c(A) = C^t\, \delta\big( C A_s C^t \big)\, C = A_s,$$
where δ(·) denotes the diagonal part, and Theorem 3 then follows directly from Theorem 1.
To prove our claim, we first note that by (11), the jth row c_j^t of C satisfies J c_j = (-1)^{j-1} c_j. Also it is clear that T ≡ J H(v, -Jv) is a skew-symmetric Toeplitz matrix. Therefore
$$\big[ C H(v, -Jv) C^t \big]_{jj} = c_j^t\, J\, T\, c_j = (-1)^{j-1}\, c_j^t\, T\, c_j = 0.$$
In view of Theorem 3 and the results we have in §3, it is easy to find c(A) for blurring
matrices generated by nonsymmetric blurring functions. We just take the symmetric part of the
blurring functions and form the (block) Toeplitz-plus-Hankel matrices as given in (10) or (17).
From Theorem 3, we also see that if the blurring function is close to symmetric, then c(A) will
be a good approximation (hence a good preconditioner) to A; see the numerical results in §6.
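A small helper for this construction is sketched below in Python. It assumes the 2-D blurring function is stored as a dense array h whose entry h[center[0]+i, center[1]+j] holds h_{i,j}, and it treats reflections that fall outside the stored support as zero; these storage conventions are our assumptions, not part of the method itself.

import numpy as np

def symmetrized_psf_2d(h, center):
    """Form the symmetric part s_{i,j} = (h_{i,j} + h_{i,-j} + h_{-i,j} + h_{-i,-j})/4
    of a (possibly nonsymmetric) 2-D blurring function.  By Theorem 3, the Neumann
    blurring matrix generated by s is the optimal cosine transform preconditioner c(A)."""
    ci, cj = center
    ni, nj = h.shape
    s = np.zeros_like(h, dtype=float)
    for p in range(ni):
        for q in range(nj):
            i, j = p - ci, q - cj
            vals = []
            for a, b in [(i, j), (i, -j), (-i, j), (-i, -j)]:
                r, c = ci + a, cj + b
                vals.append(h[r, c] if 0 <= r < ni and 0 <= c < nj else 0.0)
            s[p, q] = 0.25 * sum(vals)
    return s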
6 Numerical Experiments
In this section, we illustrate the efficiency of employing the Neumann boundary condition over
the other two boundary conditions for image restoration problems. All our tests were done by
Matlab. The data source is a photo from the 1964 Gatlinburg Conference on Numerical Algebra
taken from Matlab. From (4), we see that to construct the right hand side vector g correctly, we
need the vectors f l and f r , i.e., we need to know the image outside the given domain. Thus we
Figure 1: The "Gatlinburg Conference" image.
Figure 2: (a) Gaussian blur; (b) out-of-focus blur; (c) noisy and blurred image by Gaussian blur and (d) by out-of-focus blur.
start with the 480-by-640 image of the photo and cut out a 256-by-256 portion from the image.
Figure
1 gives the 256-by-256 image of this picture.
We consider restoring the "Gatlinburg Conference" image blurred by the following two blurring
functions, see [14, p.269]:
(i) a truncated Gaussian blur, with h_{i,j} = c e^{-0.1(i^2 + j^2)} inside a finite square support and h_{i,j} = 0 outside it, and
(ii) an out-of-focus blur, with h_{i,j} = c if i^2 + j^2 <= r^2 and h_{i,j} = 0 otherwise,
where h_{i,j} is the jth entry of the first column of T^{(i)} in (16) and c is the normalization constant such that the weights h_{i,j} sum to one.
We remark that the Gaussian blur is symmetric and separable whereas
the out-of-focus blur is symmetric but not separable, see Figures 2(a) and 2(b). A Gaussian
white noise n with signal-to-noise ratio of 50dB is then added to the blurred images. The noisy
blurred images are shown in Figures 2(c) and 2(d). We note that after the blurring, the cigarette
held by Prof. Householder (the rightmost person) is not clearly shown, cf. Figure 1.
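For readers who want to reproduce a similar experiment, the two blurring functions can be generated as follows in Python. The truncation radius and the Gaussian decay rate 0.1 in this sketch are illustrative choices and are not claimed to be the exact values used in the experiments above.

import numpy as np

def gaussian_psf(radius, decay=0.1):
    """Truncated Gaussian blur h_{i,j} = c*exp(-decay*(i^2+j^2)) on the square
    support |i|,|j| <= radius, normalized so the weights sum to one."""
    idx = np.arange(-radius, radius + 1)
    I, J = np.meshgrid(idx, idx, indexing='ij')
    h = np.exp(-decay * (I**2 + J**2))
    return h / h.sum()

def out_of_focus_psf(radius):
    """Out-of-focus blur: constant inside the disc i^2 + j^2 <= radius^2, zero outside."""
    idx = np.arange(-radius, radius + 1)
    I, J = np.meshgrid(idx, idx, indexing='ij')
    h = (I**2 + J**2 <= radius**2).astype(float)
    return h / h.sum()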
We remark that the regularization parameter based on the generalized cross-validation
method (see §4) is not suitable when the zero boundary condition is imposed. The method
will require the inversion of many large matrices. As a comparison of the cost among different
boundary conditions, we chose the optimal regularization parameter λ* such that it minimizes the relative error of the reconstructed image f(λ), defined as ‖f - f(λ)‖_2 / ‖f‖_2, where f is the original image. The optimal λ*, accurate up to one significant digit, is obtained by trial
and error.
In
Figures
3 and 4, we present the restored images for the three different boundary conditions,
the optimal λ*, and the relative errors. We used the L_2 norm as the regularization functional
here. We see from the figures that by imposing the Neumann boundary condition, the relative
error and the ringing effect are the smallest. Also the cigarette is better reconstructed by using
the Neumann boundary condition than by those using the other two boundary conditions.
Figure 3: Restoring the Gaussian blur with zero (left), periodic (middle) and Neumann (right) boundary conditions.
Figure 4: Restoring the out-of-focus blur with zero (left), periodic (middle) and Neumann (right) boundary conditions.
Next let us consider the cost. Recall that for each λ, we only need three 2-dimensional FFTs
and FCTs to compute the restored images for the periodic and the Neumann boundary conditions
respectively. Thus the costs for both approaches are about O(n 2 log n) operations though the
Neumann one is twice as fast because FCT requires only real multiplications [23, pp.59-60]. For
the zero boundary condition, we have to solve a block-Toeplitz-Toeplitz-block system for each λ.
The fastest direct Toeplitz solver requires O(n 4 ) operations, see [17]. In our tests, the systems
Blurring function
Gaussian        67    .    .    .    .
Out-of-focus    81   73   48   43   26

Blurring function
Gaussian        17   17   13   12    7    7
Out-of-focus    22   21   15   14    9    9

Table 1: The numbers of iterations required for using the zero boundary condition (for the different values of the regularization parameter λ considered).
are solved by the preconditioned conjugate gradient method with circulant preconditioners [7].
Table 1 shows the numbers of iterations required for the two blurring functions for different λ. The stopping tolerance is 10^{-6}. We note that the cost per iteration is about four 2-dimensional FFTs. Thus the cost is extremely expensive especially when λ is small. In conclusion, we see
that the cost of using the Neumann boundary condition is lower than that of using the other
two boundary conditions.
Finally we illustrate the effectiveness of the optimal cosine transform preconditioners for
blurring functions that are close to symmetric. More general tests on the preconditioners are
given in [6]. We consider a 2-dimensional deconvolution problem arising in the ground-based
atmospheric imaging. Figure 5(a) gives the 256-by-256 blurred and noisy image of an ocean
reconnaissance satellite observed by a ground-based imaging system and Figure 5(b) is a 256-
by-256 image of a guide star observed under similar circumstances, see [8]. The discrete blurring
function h is given by the pixel values of the guide star image. The blurring matrix A is obtained
as in (17) by imposing the Neumann boundary condition.
Figure 5: (a) Observed image, (b) the guide star image and (c) a cross-section of the blurring function.
We note that the blurring function is not exactly symmetric in this case, see Figure 5(b).
However, from the cross-sections of the blurring function (see for instance Figure 5(c)), we know that it is close to symmetric. Therefore, we used the preconditioned conjugate gradient algorithm with the optimal cosine transform preconditioner to remove the blurring, see §5. Again we use the L_2 norm as the regularization functional here. In Figure 6(a), we present the restored image with the optimal λ*. The original image is given in Figure 6(b) for comparison.
Figure 6: (a) Restored image (with the optimal λ*) and (b) the true image.
Table
2: The number of iterations for convergence.
Using the optimal cosine transform preconditioner, the image is restored in 4 iterations for
a stopping tolerance of 10^{-6}. If no preconditioner is used, acceptable restoration is achieved
after 134 iterations, see Table 2. We remark that the cost per iteration for using the optimal
cosine transform preconditioner is almost the same as that with no preconditioner: they are
about 1.178 × 10^8 and 1.150 × 10^8 floating point operations per iteration, respectively. Thus
we see that the preconditioned conjugate gradient algorithm with the optimal cosine transform
preconditioner is an efficient and effective method for this problem.
7 Concluding Remarks
In this paper, we have shown that discrete cosine transform matrices can diagonalize dense
Toeplitz-plus-Hankel blurring matrices arising from using the Neumann (reflective)
boundary condition. Numerical results suggest that the Neumann boundary condition provides
an effective model for image restoration problems, both in terms of the computational cost
and of minimizing the ringing effects near the boundary. It is interesting to note that discrete
sine transform matrices can diagonalize Toeplitz matrices with at most 3 bands (such as the
discrete Laplacian with zero boundary conditions) but not dense Toeplitz matrices in general,
see [9] for instance.
--R
Superfast Solution of Real Positive Definite Toeplitz Systems
IEEE Signal Processing Mag- azine
Fast Computation of a Discretized Thin-plate Smoothing Spline for Image Data
Based Preconditioners for Total Variation Deblurring
Conjugate Gradient Methods for Toeplitz Systems
Generalization of Strang's Preconditioner with Applications to Toeplitz Least Squares Problems
Sine Transform Based Preconditioners for Symmetric Toeplitz Systems
Regularization of Inverse Problems
Generalized Cross-validation as a Method for Choosing a Good Ridge Parameter
New York
a Matlab Package for Analysis and Solution of Discrete Ill-Posed Problems
Fundamentals of
Theory and Applications
Fast Algorithms for Block Toeplitz Matrices with Toeplitz Entries
Iterative Identification and Restoration of Images
The Wiener RMS (Root Mean Square) Error Criterion in Filter Design and Prediction
Reducing Boundary Distortion in Image Restoration
Symmetric Convolution and the Discrete Sine and Cosine Transforms
Diagonalization Properties of the Discrete Cosine Transforms
The
--TR
--CTR
Ben Appleton , Hugues Talbot, Recursive filtering of images with symmetric extension, Signal Processing, v.85 n.8, p.1546-1556, August 2005
Marco Donatelli , Claudio Estatico , Stefano Serra-Capizzano, Boundary conditions and multiple-image re-blurring: the LBT case, Journal of Computational and Applied Mathematics, v.198 n.2, p.426-442, 15 January 2007
Michael K. Ng , Andy M. Yip, A Fast MAP Algorithm for High-Resolution Image Reconstruction with Multisensors, Multidimensional Systems and Signal Processing, v.12 n.2, p.143-164, April 2001
Daniela Calvetti, Preconditioned iterative methods for linear discrete ill-posed problems from a Bayesian inversion perspective, Journal of Computational and Applied Mathematics, v.198 n.2, p.378-395, 15 January 2007
Jian-Feng Cai , Raymond H. Chan , Carmine Fiore, Minimization of a Detail-Preserving Regularization Functional for Impulse Noise Removal, Journal of Mathematical Imaging and Vision, v.29 n.1, p.79-91, September 2007
Michael K. Ng , Andy C. Yau, Super-Resolution Image Restoration from Blurred Low-Resolution Images, Journal of Mathematical Imaging and Vision, v.23 n.3, p.367-378, November 2005 | toeplitz matrix;circulant matrix;boundary conditions;cosine transform;deblurring;hankel matrix |
339903 | Ordering, Anisotropy, and Factored Sparse Approximate Inverses. | We consider ordering techniques to improve the performance of factored sparse approximate inverse preconditioners, concentrating on the AINV technique of M. Benzi and M. T\r{u}ma. Several practical existing unweighted orderings are considered along with a new algorithm, minimum inverse penalty (MIP), that we propose. We show how good orderings such as these can improve the speed of preconditioner computation dramatically and also demonstrate a fast and fairly reliable way of testing how good an ordering is in this respect. Our test results also show that these orderings generally improve convergence of Krylov subspace solvers but may have difficulties particularly for anisotropic problems. We then argue that weighted orderings, which take into account the numerical values in the matrix, will be necessary for such systems. After developing a simple heuristic for dealing with anisotropy we propose several practical algorithms to implement it. While these show promise, we conclude a better heuristic is required for robustness. | Introduction
. Consider solving the system of linear equations:
where A is a sparse nn matrix. Depending on the size of A and the nature of the
computing environment an iterative method, with some form of preconditioning to
speed convergence, is a popular choice. Approximate inverse preconditioners, whose
application requires only (easily parallelized) matrix-vector multiplication, are of particular
interest today. Several methods of constructing approximate inverses have been
proposed (e.g., [2, 3, 9, 20, 22, 24]), falling into two categories: those that directly
form an approximation to A -1 and those that form approximations to the inverses of
the matrix's LU factors. This second category currently shows more promise than the
first for three reasons. First, it is easy to ensure that the factored preconditioner is
nonsingular, simply by making sure both factors have nonzero diagonals. Second, the
factorization appears to allow more information per nonzero to be stored, improving
convergence [4, 8]. Third, the setup costs for creating preconditioners can often be
much less [4].
However, unlike A -1 itself, the inverse LU factors are critically dependent on the
ordering of the rows and columns-indeed, they will not exist in general for some
orderings. Even in the case of an SPD matrix, direct methods have shown how important
ordering can be. Thus any factored approximate inverse scheme must handle
ordering with thought. In particular, for an effective preconditioner an ordering that
minimizes the size of the "dropped" entries is needed-decreasing the error between
the approximate inverse factors and the true ones (see [14] for a discussion of this in
the context of ILU).
In this paper we focus our attention on the AINV algorithm [3], which, via implicit
Gaussian elimination with small-element dropping, constructs a factored approximate inverse
$$A^{-1} \approx Z D^{-1} W^T,$$
where Z and W are unit upper triangular and D^{-1} is diagonal. However, the purely
structural results presented in section 2 apply equally to other factored approximate
inverse schemes. Whether the numerical results carry over is still to be determined.
For example, conflicting evidence has been presented in [5] and [16] about the effect
on FSAI [22], which perhaps will be resolved only when the issue of sparsity pattern
selection for FSAI has been settled.
Some preliminary work in studying the effect of ordering on the performance of AINV has shown promising results [3]. (A more recent work by the same authors is
[5].) We carry this research forward in sections 2 and 3, realizing significant improvements
in the speed of preconditioner computation and observing some beneficial effects
on convergence but noting that structural information alone is not always enough. We
then turn our attention to anisotropic problems. For the ILU class of preconditioners
it has been determined that orderings which take the numerical values of the matrix
into account are useful-even necessary (e.g., [10, 11, 12, 14]). Sections 4-6 try to answer
the question of whether the same thing holds for factored approximate inverses.
The appendix contains the details of our test results.
2. Unweighted orderings. Intuitively, the smaller the size of the dropped portion
from the true inverse factors, the better the approximate inverse will be. We will
for now assume that the magnitudes of the inverse factors' nonzeros are distributed
roughly the same way under di#erent orderings. (Our experience shows this is a fairly
good assumption for typical isotropic problems, but as we shall see later, this breaks
down for anisotropic matrices in particular.) Then we can consider the simpler problem
of reducing the number of dropped nonzeros, instead of their size. Of course, for
sparsity we also want to retain as few nonzeros as possible; thus we really want to
reduce the number of nonzeros in the exact inverse factors-a quantity we call inverse
factor fill, or I F fill.
Definition 2.1. Let A be a square matrix with a triangular factorization A = LU. The I F fill of A is defined to be the total number of nonzeros in the inverses of
L and U, assuming no cancellation in the forming of those inverses.
For simplicity we restrict our discussion to the SPD case, first examining I F
fill and then considering several existing ordering algorithms that may be helpful.
We finish the section by proposing a new ordering scheme, which we call MIP. The
application to the unsymmetric case is straightforward.
The following discussion makes use of some concepts from graph theory. The
graph of an n n matrix A is a directed graph on n nodes labeled 1, . , n, with
an arc i # j if and only if A ij #= 0. A directed path, or dipath, is an ordered set
such that the arcs all exist-often this is
written as u 1 # u k . See chapter 3 of [17], for example, for further explanation.
From Gilbert [19] and Liu [23], we have the following graph theoretic characterization
of the structure of the inverse Cholesky factor.
Theorem 2.2. Let A be an SPD matrix with Cholesky factor L. Then assuming
no cancellation, the structure of -T corresponds to the transitive closure 1 of
the graph of L T , that is, for i < j, we have Z ij #= 0 if and only if there is a dipath
from i to j in the graph of L T . Furthermore, this is the same structure as given by
the transitive closure of the elimination tree of A (the transitive reduction 2 of L T ).
Notice that the last structure characterization simply means that for i > j,
is an ancestor of i in the elimination tree. This allows
us to significantly speed the computation of the preconditioner given a bushy elimination
tree, as well as allowing for parallelism-for the calculation of column j in the
factors, only the ancestor columns need be considered (a coarser-grain version of this
parallelism via graph partitioning has been successfully implemented in [6] for exam-
ple). Another product of this characterization is a simple way of computing the I F
fill of an SPD matrix, obtained by summing the number of nonzeros in each column
of the inverse factor and multiplying by two for the other (transposed) factor.
Theorem 2.3. The I F fill of an SPD matrix is simply twice the sum of the
depths of all nodes in its elimination tree. In particular, the number of nonzeros in
column j of the true inverse factor Z is given by the number of nodes in the subtree
of the elimination tree rooted at j.
These results suggest orderings that avoid long dipaths in L T (i.e., paths in L T
with monotonically increasing node indices), as these cause lots of I F fill, quadratic
in their length. Alternatively, we are trying to get short and bushy elimination trees.
Another useful characterization of I F fill using notions from [17, 18] allows us
to do a cheap "inverse symbolic factorization"-determining the nonzero structure
of the inverse factors-without using the elimination tree, which is essential for our
minimum inverse penalty (MIP) ordering algorithm presented later.
Theorem 2.4. Z_{ij} ≠ 0 if and only if i is reachable from j strictly through nodes
eliminated previous to i-or in terms of the quotient graph model, if i is contained in
a supernode adjacent to j at the moment when j is eliminated.
Based on the heuristic and results above, we now examine several existing orderings
which might do well and propose a new scheme to directly implement the
heuristic of reducing I F fill.
Red-black. The simplest ordering we consider is (generalized) red-black, where a
maximal independent set of ("red") nodes is ordered first and the remaining ("black")
nodes are ordered next according to their original sequence. In that initial red block,
there are no nontrivial dipaths and hence no off-diagonal entries in Z.
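A greedy implementation of this ordering is straightforward; the Python sketch below assumes the graph is given as an adjacency list and simply scans the nodes in their original sequence.

def red_black_order(adj):
    """Generalized red-black ordering: greedily pick a maximal independent set
    ('red' nodes), order it first, then append the remaining ('black') nodes in
    their original sequence.  adj[i] is the collection of neighbours of node i."""
    n = len(adj)
    red, blocked = [], set()
    for i in range(n):
        if i not in blocked:
            red.append(i)
            blocked.add(i)
            blocked.update(adj[i])     # neighbours of a red node cannot be red
    red_set = set(red)
    black = [i for i in range(n) if i not in red_set]
    return red + black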
Minimum degree. Benzi and Tuma have observed that minimum degree is
generally beneficial for AINV [3]. This is justified by noting that minimum degree
typically substantially reduces the height of the elimination tree and hence should
reduce I F fill.
As an aside, notice that direct-method fill-reducing orderings do not necessarily
reduce I F fill. For example, a good envelope ordering will likely give rise to a very
tall, narrow elimination tree-typically just a path-and thus give full inverse factors.
Again, it should be noted that this isn't necessarily a bad thing if the inverse factors
still have very small entries, but without using numerical information from the matrix
our experience is that envelope orderings do not manage this. This may seem at
¹The transitive closure of a directed graph G is a graph G′ on the same vertices with an arc u → v between all vertices u and v that were connected by a dipath u ⇝ v in G.
²The transitive reduction of a directed graph G is a graph G′ with the minimum number of arcs but still possessing the same transitive closure as G.
variance with the result in [13] (elaborated in [5] for factored inverses) that for banded
SPD matrices the rate of decay of the entries in the inverse has an upper bound that
decreases as the bandwidth decreases. However, decay was measured there in terms of
distance from the diagonal, which is really suitable only for small bandwidth orderings;
the results presented in section 5 of [13], measuring decay in terms of the unweighted
graph distance, should make theoretical progress possible. We will return to this issue
in sections 4-6.
Nested dissection (ND). On the other hand, by ordering vertex separators last
ND avoids any long monotonic dipaths and hence a lot of I F fill. Alternatively, in
trying to balance the elimination tree it reduces the sum of depths.
Minimum inverse penalty (MIP). Above we noted that we can very cheaply
compute the number of nonzeros in each column of Z within a symbolic factoriza-
tion. This allows to propose a new ordering, MIP, an analogue to minimum degree.
Minimum degree is built around a symbolic Cholesky factorization of the matrix, at
each step selecting the node(s) of minimum penalty to eliminate. The penalty was
originally taken to be the degree of the node in the partially eliminated graph; later
algorithms have used other related quantities including the external degree and approximate
upper bounds. In MIP we follow the same greedy strategy but we compute
a penalty for node v based on Zdeg v , the number of nonzeros in the column of Z
were v to be ordered next-the degree in the inverse Cholesky factor instead of the
well as on Udeg v , the number of uneliminated neighbors of node
v at the current stage of factorization (not counting supernodes). In our experiments
we found the function Penalty to be fairly e#ective. Further
research into a better penalty function is needed. Also, ideas from minimum degree
such as multiple elimination, element absorption, etc. might be suitable here.
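The following quadratic-time Python sketch illustrates the MIP idea. The text above does not spell out the penalty function we used, so the sketch uses the simple placeholder Zdeg_v + w*Udeg_v; the element bookkeeping follows the quotient graph description of Theorem 2.4.

def mip_order(adj, w=1.0):
    """Greedy MIP-style ordering: repeatedly eliminate the node with the smallest
    penalty, where Zdeg_v = 1 + total size of the eliminated elements (supernodes)
    adjacent to v, and Udeg_v = number of uneliminated original neighbours of v."""
    n = len(adj)
    eliminated = [False] * n
    elem_of = {}        # eliminated node -> id of the element containing it
    elements = {}       # element id -> set of eliminated nodes it contains
    order = []
    for _ in range(n):
        best, best_pen = None, None
        for v in range(n):
            if eliminated[v]:
                continue
            adj_elems = {elem_of[u] for u in adj[v] if eliminated[u]}
            zdeg = 1 + sum(len(elements[e]) for e in adj_elems)
            udeg = sum(1 for u in adj[v] if not eliminated[u])
            pen = zdeg + w * udeg
            if best_pen is None or pen < best_pen:
                best, best_pen = v, pen
        v = best
        order.append(v)
        eliminated[v] = True
        # absorb v and all adjacent elements into a new element with id v
        new_elem = {v}
        for e in {elem_of[u] for u in adj[v] if u in elem_of}:
            new_elem |= elements.pop(e)
        for u in new_elem:
            elem_of[u] = v
        elements[v] = new_elem
    return order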
3. Testing unweighted orderings. We used the symmetric part of the matrix
for all the orderings. Red-black was implemented with the straightforward greedy
algorithm to select a maximal independent set. Our minimum degree algorithm was
AMDBAR, a top-notch variant due to Amestoy, Davis, and Du# [1]. We wrote our
own ND algorithm that constructs vertex separators from edge separators given by a
multilevel bisection algorithm. This algorithm coarsens the graph with degree-1 node
compression and heavy-edge matchings until there are less than 100 nodes, bisects
the small graph spectrally according to the Fiedler vector [15], and uses a greedy
boundary-layer sweep to smooth in projecting back to the original. See [21], for
example, for more details on this point.
The appendix provides further details about our testing. The tables contain data
for both the unweighted orderings above and their weighted counterparts presented
below-ignore the lower numbers for now. In brief, we selected several matrices from
the Harwell-Boeing collection and tested them all with each ordering scheme. Table
4 gives the true I F fill for each matrix (or its symmetric part) and ordering. Tables
5-7 give the preconditioner performance. As the number of nonzeros allowed in the
preconditioner can have a significant e#ect on results, we standardized all our test
runs: in each box the left number is a report for when the preconditioner had as
many nonzeros as the matrix and the right number for when the preconditioner had
twice as many nonzeros.
In terms of I F fill reduction, AMDBAR, ND, and MIP are always the best three
by a considerable factor. ND wins 15 times, AMDBAR 5 times, and MIP 3 times,
with one tie. Red-black beats the natural ordering but not dramatically.
It is clear that ordering can help immensely for accelerating the computation of
Fig. 1. Correlation of I F fill (horizontal axis) and preconditioner computing time (vertical axis), normalized with respect to the given (original) ordering.
Table 1. Average decrease in number of iterations over the test set, in percentages of the iterations taken by the given ordering.

Fill   Red-black   AMDBAR   ND   MIP
the preconditioner. ND is the winner, followed by AMDBAR, MIP, and then red-black
quite a bit behind. The preconditioner computing time is closely correlated to
I F fill-see Figure 1. Thus calculating I F fill provides a fast and reasonably good test
to indicate how e#cient an (unweighted) ordering is for preconditioner computation-
perhaps not an important point if the iteration time dominates the setup time, but
this may be useful for applications where the reverse is true.
The e#ect of ordering on speed of solution is less obvious. The poor behavior of
PORES2, 3 SHERMAN2, and WATSON5 indicate that AINV probably isn't appropriate
(although if we had properly treated SHERMAN2 as a block matrix instead
it might have gone better). Notice in particular that sometimes lowering the drop
tolerance, increasing the size of the preconditioner and hopefully making it more
accurate, actually degrades convergence for these indefinite problems. From the remaining
matrices, we compared the average decrease in number of iterations over the
given ordering-see Table 1. Particularly given its problems with SAYLR4, WAT-
SON4, WATT1, and WATT2 red-black cannot be viewed as a good ordering. MIP is
overall the best, although it had a problem with WATT1, whereas the close contender
AMDBAR did fairly well on all but the di#cult three mentioned above. The good I F
fill reducing orderings all made worthwhile improvements in convergence rates, but it
is surprising that ND did the least considering its superior I F fill reduction. Clearly
our intuition that having fewer nonzeros to drop makes a better preconditioner has
merit, but it does not tell the entire story.
3 In [3] somewhat better convergence was achieved for PORES2, presumably due to implementation
differences in the preconditioner or its application.
4. Anisotropy. In the preceding test results we find several exceptions to the
general rule that AMDBAR, ND, and MIP perform similarly, even ignoring PORES2,
SHERMAN2, and WATSON5. For example, there are considerable variations for each
of ALE3D, BCSSTK14, NASA1824, ORSREG1, SAYLR4, and WATSON4. More
importantly, these variations are not correlated with I F fill; some other factor is at
work. Noticing that each of these matrices are quite anisotropic and recalling the
problems anisotropy poses for ILU, we are led to investigate weighted orderings.
We first develop a heuristic for handling anisotropic matrices. The goal in mind
is to order using the di#ering strengths of connections to reduce the magnitude of
the inverse factor entries. Then even if we end up with more I F fill (and hence drop
more nonzeros), the magnitude of the discarded portion of the inverse factors may be
smaller and give a more accurate preconditioner.
Again, we only look at SPD A. Let A = LDL^T be the modified Cholesky factorization, where L is unit lower triangular and D is diagonal. Then Z = L^{-T}, and since L^T - I is zero on and below the diagonal and hence nilpotent, we have
$$Z = L^{-T} = \sum_{k=0}^{n-1} (I - L^T)^k .$$
Then for i < j,
$$Z_{ij} = \sum_{k=1}^{n-1} \big[ (I - L^T)^k \big]_{ij} .$$
The nonzero entries in this sum correspond to the monotonically increasing dipaths from i to j in the graph of L^T.
Our orderings should therefore avoid having
many such dipaths which involve large entries of L, as each one could substantially
increase the magnitude of Z's entries. Thus we want to move the large entries away
from the diagonal, so they cannot appear in many monotonic dipaths. In other words,
after a node has been ordered, we want to order so that its remaining neighbors with
strong L-connections come as late as possible after it.
For the purposes of our ordering heuristic, we want an easy approximation to L
independent of the eventual ordering chosen-something that can capture the order of
magnitude of entries in L but doesn't require us to decide the ordering ahead of time.
Assuming A has an adequately dominant diagonal without too much variation, we
can take the absolute value of the lower triangular part of A, symmetrically rescaled
to have a unit diagonal. (This can be thought of as a scaled Gauss-Seidel approxima-
tion.) Our general heuristic is then to delay strong connections in this approximating
matrix M defined by
An alternative justification of this heuristic, simply in the context of reducing the
magnitude of entries in L, is presented in [10].
Now consider a simple demonstration problem to determine whether this heuristic
could help. The matrix SINGLEANISO comes from a 5-point finite-difference discretization on a regular 31 × 31 grid of the following PDE:
$$-u_{xx} - 1000\, u_{yy} = f .$$
Here the edges of A (and M) corresponding to the y-direction are 1000 times heavier
than those corresponding to the x-direction. We try comparing two I F fill reducing
orderings. The first ordering ("strong-first") block-orders the grid columns with
Fig. 2. The two orderings of SINGLEANISO (left: strong-first; right: weak-first), depicted on the domain. Lighter shaded boxes indicate nodes ordered later.
Ordering        Time to compute preconditioner   Number of iterations   Time for iterations
Strong-first
Weak-first      0.38                             25                     0.25

Table 2. Performance of strong-first ordering versus weak-first ordering for SINGLEANISO.
nested dissection and then internally orders each block with nested dissection-this
brings the strong connections close to the diagonal. The second ("weak-first") block-
orders the grid rows instead, pushing the strong connections away from the diagonal,
delaying them until the last. These are illustrated in Figure 2, where each square of
the grid is shaded according to its place in the ordering.
Both orderings produce a reasonable I F fill of 103,682, with isomorphic elimination
trees. However, they give very different performance at the first level of fill-see
Table 2. In all respects the weak-first ordering is significantly better than the
strong-first one.
In Figure 3 we plot the decay of the entries in the inverse factors resulting from
the two orderings and show parts of those factors. The much smaller entries from the
weak-first ordering confirm our heuristic.
5. Weighted orderings. In our experience it appears that I F fill reduction
typically helps to also reduce the magnitude of entries in the inverse factors, but,
blind as it is to the numerical values in the matrix, it can make mistakes such as
allowing strong connections close to the diagonal. In creating algorithms for ordering general
matrices, we thus have tried to simply modify the unweighted algorithms to consider
the numerical values.
Weighted nested dissection (WND). Consider the spectral bipartition al-
gorithm. Finding the Fiedler vector, the eigenvector of the Laplacian of the graph
with second smallest eigenvalue (see [15]), is equivalent to minimizing (over a space
orthogonal to the constant vectors) the sum
sum_{(i,j) is an edge} (x_i - x_j)^2 .
Fig. 3. Comparison of the magnitude of entries in inverse factors for the different orderings of
SINGLEANISO: decay of the sorted entry magnitudes under the strong-first and weak-first orderings,
with close-ups of each factor. The close-up images of the actual factors are shaded according to the
magnitude of the nonzeros-darker means bigger.
We then make the bipartition depending on which side of the median each entry
lies. Notice that the closer together two entries are in value-i.e., the smaller |x_i - x_j|
is-the more likely those nodes will be ordered on the same side of the cut. We would
like weakly connected nodes (where M ij is small) to be in the same part and the
strong connections to be in the edge cut, so we try minimizing the following weighted
quadratic sum:
sum_{(i,j) is an edge} weight(i, j) (x_i - x_j)^2 ,
where M is the scaled matrix mentioned above. This corresponds to finding the
eigenvector with second smallest eigenvalue of the weighted Laplacian matrix for the
graph defined by
(i, j) is an edge if and only if M_ij != 0, with weight(i, j) determined by M_ij.
Thus we modify ND simply by changing the Laplacian used in the bipartition
step to this weighted Laplacian. Fortunately, our multilevel approach with heavy-
edge matchings typically will eliminate the largest off-diagonal entries, as well as
substantially decreasing the size of the eigenproblem, making it easy to solve, so our
WND is very reasonable to compute.
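A sketch of this weighted bipartition step, under the assumption that the edge weights have already been derived from M (the exact weight function is not given in this excerpt, so W is taken as input); a dense eigensolver stands in for the multilevel scheme.

    import numpy as np

    def weighted_bipartition(W):
        """Bipartition a graph from a symmetric nonnegative weight matrix W
        (W[i, j] > 0 iff (i, j) is an edge) using the Fiedler vector of the
        weighted Laplacian L = D - W."""
        L = np.diag(W.sum(axis=1)) - W
        vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
        fiedler = vecs[:, 1]                 # eigenvector of 2nd smallest eigenvalue
        median = np.median(fiedler)
        return fiedler <= median             # boolean mask: side of the cut

Nodes are then split according to which side of the Fiedler-vector median they fall on, exactly as in the unweighted bipartition described above.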
OutIn. Our other weighted orderings are based on an intuition from finite-
difference matrices. We expect that the nodes most involved in long, heavy paths are
those near the weighted center of the graph (those with minimum weighted eccen-
tricity). The nodes least involved in paths are intuitively the ones on the weighted
periphery of the graph. Thus OutIn orders the periphery first and proceeds to an
approximate weighted center ("from the outside to the inside"). To efficiently find an
approximate weighted center we use an iterative algorithm.
Algorithm 1 (approximate weighted center).
. Choose a starting node v_0 and set i = 0.
. Do
- Calculate the M-weighted shortest paths and M-weighted distances to
other nodes from v_i (where the distance is the minimum sum of weights,
given by M, along a connecting path).
- Select a node e_i of maximum distance from v_i.
- Travel r of the way along a shortest path from v_i to e_i, saving the resulting
node as v_{i+1}; increment i.
. End loop when v_{i+1} = v_i, taking v_i as the approximate center.
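A sketch of Algorithm 1, assuming a symmetric matrix of M-weights as the graph, SciPy's Dijkstra routine for the weighted shortest paths, and our own choices for the starting node, the fraction r, and the stopping test.

    import numpy as np
    from scipy.sparse.csgraph import dijkstra

    def approximate_weighted_center(M, v0=0, r=0.5, max_iter=50):
        """Iteratively walk r of the way toward the farthest node until the
        current node stops changing; return it as the approximate center."""
        v = v0
        for _ in range(max_iter):
            dist, pred = dijkstra(M, directed=False, indices=v,
                                  return_predecessors=True)
            e = int(np.argmax(np.where(np.isinf(dist), -1, dist)))  # farthest node
            # Recover the shortest path from v to e via the predecessor array.
            path = [e]
            while path[-1] != v:
                path.append(pred[path[-1]])
            path.reverse()
            v_next = path[int(r * (len(path) - 1))]  # travel r of the way along it
            if v_next == v:
                return v
            v = v_next
        return v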
Then our OutIn ordering is the following.
Algorithm 2 (OutIn).
. Compute the scaled matrix M from A.
. Get an approximate weighted center c for M .
. Calculate the distances and shortest paths from all other nodes in M to c.
. Return the nodes in sorted order, with most distant first and c last.
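A corresponding sketch of Algorithm 2, reusing the hypothetical helpers scaled_lower_approximation and approximate_weighted_center from the sketches above, and assuming a connected graph.

    import numpy as np
    from scipy.sparse.csgraph import dijkstra

    def outin_ordering(A):
        """Order nodes from the weighted periphery inward: most distant from
        the approximate weighted center first, the center itself last."""
        M = scaled_lower_approximation(A)
        M = M + M.T                          # symmetric weights for the graph
        np.fill_diagonal(M, 0.0)
        c = approximate_weighted_center(M)
        dist = dijkstra(M, directed=False, indices=c)
        return list(np.argsort(-dist))       # farthest first, center last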
Despite our earlier remark that envelope orderings might not be useful, the weight
information actually lets OutIn perform significantly better than the natural ordering
by reducing the size of the nonzeros in the factors if not the number of them. However,
why not combine OutIn with an I F fill reducing ordering to try to get the best of
both worlds? We thus test OutIn as a preprocessing stage before applying red-black,
minimum degree, or MIP. We note that the use of hash tables and other methods to
accelerate the latter two means that it's not true that the precedence set by OutIn
will always be followed in breaking ties for minimum penalty.
As an aside, we also considered modifying minimum degree and MIP with tie-breaking
directly based on the weight of a candidate node's connections to previously
selected nodes. Weighted tie-breaking (with RCM) has proved useful before in the
context of ILU [11]. However, for the significant extra cost incurred by this tie-
breaking, this achieved little here-it appears that a more global view of weights is
required when doing approximate inverses.
Before proceeding to our large test set, we verify that these orderings are behaving
as expected with another demonstration matrix. ANISO is a similar problem to
SINGLEANISO but with four abutting regions of anisotropy with differing directions
[12]-see Figure 4 for a diagram showing the directions in the domain. As shown in
Table 3, the results for WND over ND and OutIn/MIP over MIP didn't change, but
there was a significant improvement in the other orderings.
6. Testing weighted results. We repeated the tests for our weighted orderings,
with results given in Tables 4-7. For unsymmetric matrices, we used |A| + |A|^T to
define M in WND and OutIn, avoiding the issue of directed edges as with unweighted
orderings. In each box of the tables, the lower numbers correspond to the weighted
orderings; we have grouped them with the corresponding unweighted orderings for
comparison.
Fig. 4. Schematic showing the domain of ANISO. The arrows indicate the direction of the
strong connections.
Table 3
Performance of weighted orderings versus unweighted orderings for ANISO.
Ordering       I F fill       Time to compute preconditioner       Number of iterations       Time for iterations
Given 462 0.41 118 1.2
OutIn 266 0.31 77 0.76
Red-black 239 0.28 107 1.13
OutIn/RB 208 0.28 61 0.61
OutIn/AMD 69 0.15 48 0.5
WND 84 0.15
Only ND suffered in preconditioner computing time-our spectral weighting appears
to be too severe, creating too much I F fill. However, it is important to note
that the increase in time is much less than that suggested by I F fill-indeed, although
WND gave several times more I F fill for ADD32, MEMPLUS, SAYLR4, SHERMAN4,
and WANG1, it actually allowed for slightly faster preconditioner computation. This
verifies the merit of our heuristic. Both the natural ordering and red-black benefited
substantially from OutIn in terms of preconditioner computation, and AMDBAR and
MIP didn't seem to be affected very much-this could quite well be a result of the
data structure algorithms which do not necessarily preserve the initial precedence set
by OutIn.
In terms of improving convergence, we didn't fix the problems with PORES2,
SHERMAN2, and WATSON5. These matrices have very weak diagonals anyhow, so
our heuristics probably don't apply. OutIn and OutIn/RB are a definite improvement
on the natural ordering and red-black, apart from on BCSSTK14 and WATSON4. The
effect of OutIn on AMDBAR and MIP is not clear; usually there's little effect, and
on some matrices (e.g., ALE3D and SAYLR4) it has an opposite effect on the two.
WND shows more promise, improving convergence over ND considerably for ALE3D,
BCSSTK14, ORSIRR1, ORSREG1, and SAYLR4. Its much poorer I F fill reduction
(generally by a factor of 4) gave it problems on a few matrices though.
7. Conclusions. It is clear that I F fill reduction is crucial to the speed of preconditioner
computation, often making an order of magnitude difference. We also saw
that the I F fill of a matrix can be computed very cheaply and gives a good indication
of the preconditioner computation time, for unweighted orderings at least.
Reducing I F fill typically also gives a more effective preconditioner, accelerating
convergence-not only are the number of nonzeros in the true inverse factors de-
creased, but the magnitude of the portion that is dropped by AINV is reduced too.
However, although ND gave the best I F fill reduction, MIP gave the best acceleration
so care must be taken. It would be interesting to determine why this is so. Probably
several steps of ND followed by MIP or a minimum degree variant on the subgraphs
will prove to be the most practical ordering.
Anisotropy can have a significant effect on performance, both in terms of preconditioner
computing time and solution time. Our WND algorithm shows the most
promise for a high-performance algorithm that can exploit anisotropy, perhaps after
some tuning of the weights in the Laplacian matrix used. Robustness is still an issue;
we believe a more sophisticated weighting heuristic is necessary for further progress.
Appendix. Testing data. Our test platform was a 180MHz Pentium Pro running
Windows/NT. We used MATLAB 5.1, with the algorithm for AINV written as a
MEX extension in C. Our AINV algorithm was a left-looking, column-by-column ver-
sion, with off-diagonal entries dropped when their magnitude is below a user-supplied
tolerance and with the entries of D shifted to 10^-3 max |A| when their computed
magnitude fell below that value. We also make crucial use of the elimination tree;
in making a column conjugate with the previous columns, we only consider its descendants
in the elimination tree (the only columns that could possibly contribute
anything). This accelerates AINV considerably for low I F fill orderings-e.g.,
SHERMAN3 with AMDBAR ordering and a drop tolerance of 0.1 is accelerated by a factor
of four! An upcoming paper [7] will explore this more thoroughly.
To compare the orderings we selected several matrices, mostly from the Harwell-Boeing
collection. First we found the amount of true I F fill caused by each ordering,
given in Table 4. We then determined drop-tolerances for AINV to produce preconditioners
with approximately N and 2N nonzeros, where N is the number of nonzeros
in the given matrix. For each matrix, ordering, and fill level we attempted to solve
Ax = b using BiCGStab (CG for SPD matrices), where b was chosen so that the
correct x is the vector of all 1s. Tables 5, 6, and 7 give the CPU time taken for
preconditioner computation, the iterations required to reduce the residual norm by
a factor of 10 -9 , and the CPU time taken by the iterations. We halted after 1800
iterations; the daggers in Tables 6 and 7 indicate no convergence at that point.
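As an illustration of this protocol (not the authors' MATLAB/C code), a sketch of the solve phase using SciPy's BiCGStab; the preconditioner is assumed to be supplied as an operator, and the relative-tolerance keyword may be named tol in older SciPy releases.

    import numpy as np
    import scipy.sparse.linalg as spla

    def run_test(A, M_inv=None, max_it=1800):
        """Solve A x = b with b chosen so the exact solution is all ones,
        stopping once the residual norm has been reduced by a factor of 1e9."""
        b = A @ np.ones(A.shape[0])
        iters = 0
        def count(xk):                  # callback counts iterations
            nonlocal iters
            iters += 1
        # Note: the keyword is `rtol` in recent SciPy releases, `tol` in older ones.
        x, info = spla.bicgstab(A, b, rtol=1e-9, maxiter=max_it,
                                M=M_inv, callback=count)
        return x, info, iters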
In each box of the tables, the upper line corresponds to the unweighted ordering
and the lower line its weighted counterpart. In Tables 5, 6, and 7 the numbers on the
left of the box correspond to the low-fill tests and those on the right to the high-fill
tests.
To highlight the winning ordering for each matrix, we have put the best numbers
in underlined boldface.
Table 4
Comparison of I F fill caused by different orderings. Nonzero counts are given in thousands
of nonzeros. In each box the upper number corresponds to the unweighted ordering, and the lower
number corresponds to its weighted counterpart.
Given Red-black AMDBAR ND MIP
Name n NNZ OutIn OutIn/* OutIn/* WND OutIn/*
1486 1506 419 1526 709
4468 4435 1615 113777 1736
28974 19841 6815 26054 12750
128 91 37 114 59
SHERMAN5 3312 21 1340 1122 414 334 465
432 430 54 833 52
Table 5
Comparison of CPU time for preconditioner computation. In each box the upper numbers
correspond to the unweighted ordering, and the lower numbers correspond to its weighted counterpart.
The numbers on the left refer to the low-fill test, and the numbers on the right refer to the high-fill
test.
Given Red-black AMDBAR ND MIP
Name OutIn OutIn/* OutIn/* WND OutIn/*
1.7 1.6 2.0 1.8 1.3 1.5 2.5 3.2 1.3 1.5
ADD32 33.9 45.6 32.7 43.5 3.4 3.6 3.2 3.4 3.7 3.4
7.1 8.0 6.1 6.3 3.5 3.4 2.9 3.0 3.7 3.4
ALE3D 14.3 26.6 13.0 24.1 9.4 16.1 6.4 11.0 15.9 28.3
BCSSTK14 6.2 11.1 6.0 10.7 2.3 3.7 2.0 3.3 3.0 4.7
5.6 8.4 5.6 8.5 2.2 3.6 3.6 5.6 3.5 5.7
70.6 76.6 81.0 84.5 61.6 61.9 54.7 55.8 65.1 58.2
NASA1824 4.6 7.8 4.0 7.1 1.3 2.1 1.5 2.3 1.8 2.8
3.9 6.0 3.9 6.1 1.4 2.2 1.9 3.0 1.7 2.8
8.4 15.2 8.5 14.7 3.0 5.0 2.4 3.8 3.5 5.7
8.4 14.5 8.5 14.7 3.1 5.1 2.8 4.5 4.2 6.8
1.3 1.7 1.1 1.8 0.8 1.1 1.0 1.4 0.9 1.3
ORSREG1 7.6 10.7 4.7 6.4 2.8 3.9 2.7 3.8 3.3 4.6
6.4 8.8 5.3 7.1 2.9 3.6 4.6 6.4 4.4 6.1
PORES2 2.8 4.0 2.5 3.8 1.3 1.9 0.7 1.1 1.3 1.8
1.7 2.5 2.0 2.9 1.0 1.5 1.2 1.8 1.6 2.4
SAYLR4 8.5 12.1 5.9 7.1 2.8 3.8 2.3 2.8 4.1 5.0
4.6 6.1 4.1 4.8 2.9 3.5 2.3 2.6 3.3 3.9
SHERMAN2 5.8 11.1 5.4 9.9 3.0 5.0 3.0 5.0 3.2 5.3
5.2 8.9 5.0 8.8 3.2 5.5 4.0 6.8 3.0 5.1
SHERMAN5 12.1 21.2 9.5 16.4 4.1 6.6 3.6 5.7 4.1 6.3
7.6 11.7 7.3 11.6 4.3 6.7 4.8 7.3 4.1 6.3
SWANG1 18.7 29.4 18.2 28.0 4.2 5.8 3.2 4.4 6.4 9.2
18.2 29.8 14.8 23.3 4.1 4.2 3.4 4.6 5.9 8.5
WANG1 19.6 31.3 10.8 16.4 6.2 8.8 6.2 8.9 9.2 13.2
16.2 24.2 12.1 18.0 6.4 9.2 6.2 8.8 9.2 13.3
2.1 2.9 2.0 2.8 0.7 0.8 1.0 1.2 0.8 0.9
7.2 11.3 3.8 6.0 2.0 3.0 1.8 2.5 2.7 4.0
4.8 7.7 3.8 5.8 2.2 3.2 1.8 2.5 2.4 3.6
7.4 12.0 4.2 6.6 2.1 3.0 1.8 2.6 2.8 3.9
5.1 7.5 3.7 5.5 2.7 3.9 1.9 2.6 2.5 3.
Table 6
Comparison of iterations required to reduce residual norm by 10^-9. In each box the upper
numbers correspond to the unweighted ordering, and the lower numbers correspond to its weighted
counterpart. The numbers on the left refer to the low-fill test, and the numbers on the right refer to
the high-fill test.
Given Red-black AMDBAR ND MIP
Name OutIn OutIn/* OutIn/* WND OutIn/*
28 21 34
19 9
28 37 21 33
43 26 28 42 19
PORES3 87 28 90 23 81 31 36 21
37 26 36 23 36 22 35 23 36 22
28 19 28 21
28 19
43
28 6 28 4
Table 7
Comparison of time taken for iterations. In each box the upper numbers correspond to the
unweighted ordering, and the lower numbers correspond to its weighted counterpart. The numbers
on the left refer to the low-fill test, and the numbers on the right refer to the high-fill test.
Given Red-black AMDBAR ND MIP
Name OutIn OutIn/* OutIn/* WND OutIn/*
ALE3D 18.4 11.5 10.2 9.3 5.5 8.6 10.0 8.7 4.9 5.0
4.9 5.8 3.6 5.4 4.2 4.2 5.0 6.9 6.1 7.7
9.3 9.0 9.2 8.9 8.6 7.5 13.0 15.4 7.9 7.1
13.4 16.0 13.2 17.0 8.3 7.7 10.3 10.5 7.7 7.5
14.7 11.4 10.4 6.8 22.0 5.7 15.0 4.3 11.1 4.5
NASA1824 71.4 73.5 68.2 73.5 63.3 52.1 53.2 50.4 65.3 52.0
NASA2146 10.3 10.2 11.6 12.8 7.8 6.3 7.9 5.6 7.2 6.9
11.6 7.4 10.9 7.4 9.2 6.7 9.4 6.0 7.9 6.1
ORSREG1 2.4 1.9 2.3 1.4 2.3 1.3 2.2 2.0 2.4 1.4
2.2 1.5 2.1 1.2 2.2 1.5 2.0 1.3 2.3 1.6
PORES3 1.1 0.5 1.2 0.4 1.1 0.5 0.5 0.3 0.4 0.3
4.3 4.6 4.0 3.8
4.8 5.0 4.0 4.4 4.0 3.9 4.3 3.7 4.1 3.8
SAYLR4 71.8 4.6 3.9 6.4 4.7 4.6 8.9 4.3 8.3 4.3
65.6 4.8 6.8 4.6 6.2 4.4 4.8 4.3 4.9 4.3
SHERMAN5 2.8 2.5 2.5 2.4 2.6 2.0 2.3 2.0 2.4 2.2
2.9 2.3 2.9 2.3 2.5 2.0 2.7 2.1 2.4 2.0
3.2 3.4 2.9 2.6 3.1 2.7 2.9 2.9 3.0 3.0
3.2 3.2 3.1 2.8 3.0 2.6 3.0 2.8 3.0 2.8
2.3 0.3 0.4 0.3 0.6 0.3 0.3 0.
--R
An approximate minimum degree ordering algorithm
A sparse approximate inverse preconditioner for the conjugate gradient method
A sparse approximate inverse preconditioner for nonsymmetric linear systems
Numerical experiments with two approximate inverse preconditioners
Orderings for factorized sparse approximate inverse preconditioners
Refined algorithms for a factored approximate inverse.
Approximate inverse techniques for general sparse matrices
Approximate inverse preconditioners via sparse-sparse iterations
Spectral ordering techniques for incomplete LU preconditioners for CG methods
Weighted graph based ordering techniques for preconditioned conjugate gradient methods
Towards a cost-effective ILU preconditioner with high level fill
Decay rates for inverses of band matrices
An algebraic approach to connectivity of graphs
Improving the performance of parallel factorized sparse approximate inverse precon- ditioners
Computer Solution of Large Sparse Positive Definite Systems
The evolution of the minimum degree ordering algorithm
Predicting structure in sparse matrix computations
Parallel preconditioning with sparse approximate inverses
A fast and high quality multilevel scheme for partitioning irregular graphs
Factorized sparse approximate inverse preconditionings I.
The role of elimination trees in sparse factorization
Toward an e
--TR
--CTR
E. Flórez, M. D. García, L. González, G. Montero, The effect of orderings on sparse approximate inverse preconditioners for non-symmetric problems, Advances in Engineering Software, v.33 n.7-10, p.611-619, 29 November 2002
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | preconditioner;conjugate gradient-type methods;anisotropy;approximate inverse;ordering methods |
339937 | A Parallel Algorithm for the Reduction to Tridiagonal Form for Eigendecomposition. | One-sided orthogonal transformations which orthogonalize columns of a matrix are related to methods for finding singular values. They have the advantages of lending themselves to parallel and vector implementations and simplifying access to the data by not requiring access to both rows and columns. They can be used to find eigenvalues when the matrix is given in factored form. Here, a finite sequence of transformations leading to a partial orthogonalization of the columns is described. This permits a tridiagonal matrix whose eigenvalues are the squared singular values to be derived. The implementation on the Fujitsu VPP series is discussed and some timing results are presented. | Introduction
Symmetric eigenvalue problems appear in many applications ranging from computational
chemistry to structural engineering. Algorithms for symmetric eigenvalue
problems have been extensively discussed in the literature [11, 9] and implemented
in various software packages (e.g. LAPACK [1]). With the broader introduction of
parallel computers in scientific computing new parallel algorithms have been suggested
[7, 2]. In the following another new parallel algorithm is suggested which is
particularly well adapted to vector parallel computers and has low operation counts.
Eigenvalue problems can only be solved by iterative algorithms in general as they
are in an algebraic sense equivalent to finding the n zeros of a polynomial. There are,
however, two main classes of methods to solve the symmetric eigenvalue problem.
The first class only requires matrix vector products and does not inspect nor alter
the matrix elements of the matrix. This class includes the Lanczos method [9] and
Date: November 1995.
1991 Mathematics Subject Classification. 65Y05,65F30.
Key words and phrases. Parallel Computing, Reduction Algorithms, One-sided Reductions.
Computer Sciences Laboratory, Australian National University, Canberra ACT 0200, Australia.
ANU Supercomputer Facility, Australian National University.
Centre for Mathematics and its Applications, Australian National University.
has particular advantages for sparse matrices. However, in general, the Lanczos
method has difficulties in finding all the eigenvalues and eigenvectors.
A second class of methods iteratively applies similarity transforms, starting from
A_1 = A, to get a sequence of orthogonally similar matrices which
converge to a diagonal matrix. This second class of methods consists mainly of two
subclasses. The first subclass uses Givens matrices for the similarity transforms and
is Jacobi's method. It has been successfully implemented in parallel [2], [12]. A disadvantage
of this method is its high operation count. A second subclass of methods
first reduces the matrix with an orthogonal similarity transform to tridiagonal form
and then uses special methods for symmetric tridiagonal matrices. Both parts of
these algorithms pose major challenges to parallel implementation. For the second
stage of tridiagonal eigenvalue problem solvers the most popular methods include
divide and conquer [7] and multisectioning [9]. Here the reduction to tridiagonal
form is discussed. Earlier algorithms use block matrix algorithms, see [6, 5, 3].
However, these methods have not achieved optimal performance. One problem is
that similarity transforms require multiplications from both sides.
It was seen [12] that the Jacobi method based on one-sided transformations
allows better vectorization and requires less communication than the original Jacobi
algorithms. Assuming that A is positive semi-definite the intermediate matrices B_i
can be defined as factors of the A_i, that is, A_i = B_i^T B_i.
As will be seen in the following, the one-sided idea can also be used for the reduction
algorithm.
The algorithm will form part of the subroutine library for a distributed memory
computer, the Fujitsu VPP 500. Often the application of subroutines from libraries
allow the user little freedom in the choice of the distribution of the data to the
local memories of the processors. The one-sided algorithms allow a large range of
distributions and perform equally well on all of them.
In the next section the one and two-sided reduction to tridiagonal form is de-
scribed. Section 3 reinterprets the reduction as an orthogonalisation procedure similar
to the Gram-Schmidt procedure. This reinterpretation is used to introduce the
new one-sided reduction algorithm. In Section 4 the computation of eigenvectors is
discussed and Section 5 contains timings and comparisons with other algorithms.
2. Reduction to tridiagonal form
A class of methods to solve the eigenvalue problem for symmetric matrices A 2
R n\Thetan first reduces them to tridiagonal form and in a second step solves the eigenvalue
problem for this tridiagonal matrix. The problem of finding the eigendecomposition
of the tridiagonal matrix will not be discussed here but that of accumulating
transformations to be used for finding the eigenvectors of the symmetric matrix is
The reduction to tridiagonal form produces a factorization
where Q is orthogonal and T is symmetric and tridiagonal. If the offdiagonal elements
of T are (nonzero) positive the factorization is uniquely determined by the
first column of Q [9]. The proof of this fact leads directly to the Lanczos algo-
rithm. While the Lanczos method has advantages especially for sparse matrices,
methods based on sequences of simple orthogonal similarity transforms [9, p.118]
are preferable for dense matrices.
2.1. Householder's reduction. A method attributed to Householder uses Householder
transformations or reflections H(w) = I - γ w w^T with γ = 2/(w^T w). In the following
let α_ij denote the elements of A. The vector w is determined from the first column of A
so that the matrix H(w)AH(w) has zeros in rows 3 to n in the first column and in columns
3 to n in the first row. The computation of H(w), or equivalently of w, γ and β_1, requires
about n multiplications and n additions (up to O(1)).
Using the matrix vector product p = γAw, the application of H(w) is a rank two update as
H(w) A H(w) = A - w q^T - q w^T , where q = p - (γ (p^T w)/2) w.
This takes n^2 + n multiplications and additions for the computation of p, 2n + 2
multiplications and additions for the computation of q and n^2 multiplications
and 2n^2 additions for the rank two update. This gives a total of 2n^2
multiplications and 3n^2 additions, up to lower order terms.
In a second step, a v is found such that H(v)H(w)AH(w)H(v) has additional
zeros in columns 4 to n in the second row and in rows 4 to n in the second column
and the procedure is repeated until the remaining matrix is tridiagonal. The sizes
of the remaining submatrices decrease and at step n - k a matrix of size k has to be
processed requiring about 2k^2 multiplications and 3k^2 additions. This
gives a total of approximately (2/3)n^3 multiplications and n^3
additions. The tridiagonal matrix is not uniquely determined by the problem. For
example, different starting vectors for the Lanczos procedure lead to different tridiagonal
matrices. Also, different matrices can be obtained if different arithmetic
precision is used [9, p.123/124]. However, despite this apparent lack of definition,
the eigenvalues and eigenvectors of the original problem can still be determined with
an error proportional to machine precision.
In summary, the sequential Householder tridiagonalization algorithm is as follows:
For k := n, n - 1, ..., 3
    Calculate (γ, w) from the first column of A^(k)
    A^(k-1) := H(w) A^(k) H(w)
End for
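A dense NumPy sketch of this two-sided reduction, following the rank-two update described above; it is illustrative only and ignores the blocking and symmetry savings a library routine would use.

    import numpy as np

    def householder_tridiagonalize(A):
        """Reduce a symmetric matrix to tridiagonal form by two-sided
        Householder similarity transforms; returns T with A's eigenvalues."""
        T = np.array(A, dtype=float)
        n = T.shape[0]
        for k in range(n - 2):
            x = T[k + 1:, k].copy()
            alpha = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else -np.linalg.norm(x)
            w = x.copy()
            w[0] -= alpha                    # Householder vector for this column
            if np.linalg.norm(w) == 0.0:
                continue
            gamma = 2.0 / (w @ w)
            p = gamma * (T[k + 1:, k + 1:] @ w)
            q = p - (gamma * (p @ w) / 2.0) * w
            # Rank-two update of the trailing block, plus the new off-diagonal.
            T[k + 1:, k + 1:] -= np.outer(w, q) + np.outer(q, w)
            T[k + 1:, k] = 0.0
            T[k, k + 1:] = 0.0
            T[k + 1, k] = T[k, k + 1] = alpha
        return T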
For vector and parallel processors the Householder algorithm has some disad-
vantages. Firstly, as the iterations progress, the length of the vectors used in the
calculations decreases but for efficient use of a vector processor we prefer long vector
lengths. Secondly, in a parallel environment, if the input matrix A is partitioned in
a banded manner across a one-dimensional array of processors, the algorithm will
be severely load imbalanced. To avoid this various authors have suggested using
cyclic [5] or torus-wrap mappings of the data [10, 3]. Also, for a parallel imple-
mentation, the rank two update of A requires copies of both vectors w and q on all
processors which leads to a heavy communication load.
2.2. One-sided reduction. A one-sided algorithm is developed to overcome the
difficulties inherent in a parallel version of the sequential Householder algorithm. In
addition, it is expected that the one-sided algorithm generates less fill-in for sparse
matrices than the two-sided algorithms if the matrix is given in finite element form.
Real symmetric matrices A are either given in factored form or can be brought into
factored form, so A can be represented in factored form by D and B. If Cholesky factorization
is to be used then the spectrum of A might have to be shifted so that A - λI is
positive definite. The parameter λ can be chosen using the Gershgorin shift. If
exact arithmetic is used for the reduction, the eigenvectors of A - λI are equal to
the eigenvectors of A and the eigenvalues are shifted by λ. However, the precision
of the computations will be affected by the introduction of the shift.
A Householder similarity transform of A is done by applying H(w) to B as
For this, a rank one modification must be computed,
B H(w) = B - p w^T , with p = γBw.
The computation of p requires n^2 + n multiplications and n^2 additions, and the rank one
modification n^2 multiplications and n^2 additions, giving
together 2n^2 + n multiplications and 2n^2 additions. In contrast to the two-sided
algorithm, the number of additions is approximately the same as the number of
multiplications in this algorithm. This is advantageous for architectures which can
do addition and multiplication in parallel as it means better load balancing.
The computation of w is more costly for this method than for the original Householder
method. As the Householder vector w is computed from the first column a 1
of A, this column has to be reconstructed first from the factored representation by
a_1 = B^T b_1. This requires n^2 + n multiplications and additions. Thus the computation of
the Householder matrix H(w) requires about n^2 multiplications
and n^2 additions, up to lower order terms.
Adding these terms up, one reduction step on a submatrix of size k needs about 3k^2 multiplications and
3k^2 additions, and so the overall costs of this algorithm are approximately n^3
multiplications and n^3
additions. The total number of operations has increased compared with the original
algorithm. But the time used on a computer which does additions and multiplications
in parallel and at the same speed is the same. If the matrix A is not already
factorized, however, the time to do this would have to be taken into account as well.
In summary, the sequential version of the one-sided algorithm is as follows.
For k := 1, ..., n - 2
    Reconstruct a_k from B, and calculate (γ, w) from a_k
    B := B H(w)
End for
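A matching sketch of the one-sided reduction, assuming A is supplied through a factor B with A = B^T B (the Gramian view of the next section); only columns of B are touched, and the accumulated Q is returned for later eigenvector computation.

    import numpy as np

    def one_sided_tridiagonalize(B):
        """Given B with A = B^T B, apply Householder transforms on the right
        of B so that T = C^T C is tridiagonal; returns (C, Q) with C = B Q."""
        C = np.array(B, dtype=float)
        n = C.shape[1]
        Q = np.eye(n)
        for k in range(n - 2):
            a = C[:, k + 1:].T @ C[:, k]     # trailing part of column a_k of A
            alpha = -np.sign(a[0]) * np.linalg.norm(a) if a[0] != 0 else -np.linalg.norm(a)
            w = a.copy()
            w[0] -= alpha
            if np.linalg.norm(w) == 0.0:
                continue
            gamma = 2.0 / (w @ w)
            # Rank-one updates: multiply columns k+1..n-1 of C (and Q) by H(w).
            C[:, k + 1:] -= np.outer(gamma * (C[:, k + 1:] @ w), w)
            Q[:, k + 1:] -= np.outer(gamma * (Q[:, k + 1:] @ w), w)
        return C, Q

The tridiagonal matrix is then T = C^T C, and the eigenvectors of A are Q V once the eigenvectors V of T are known.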
3. The one-sided algorithm as an orthogonalization procedure
In order to develop the parallel version of the algorithm the one-sided reduction is
interpreted as an orthogonalization procedure like the Gram-Schmidt process. Let
b_i denote the ith column of B, that is, B = (b_1, b_2, ..., b_n). Then the matrix A can be
interpreted as the Gramian of the b_i as follows: α_ij = b_i^T b_j , i, j = 1, ..., n.
The one-sided tridiagonalization procedure constructs vectors c_1, ..., c_n such that
T = (c_i^T c_j) is tridiagonal. As T
is the Gramian of the c_i, the tridiagonality is a condition on the orthogonality of
certain c_i as
c_i^T c_j = 0 for |i - j| >= 2. (3.4)
In a first step the orthogonality of c_3, ..., c_n to c_1 is established by setting c_1 = b_1
and forming (c_2, ..., c_n) as an orthogonal transformation of (b_2, ..., b_n) such that
c_1^T c_j = (c_1^T c_2) δ_2j , j = 2, ..., n,
i.e., c_1^T c_j = 0 for j >= 3.
Here δ_kj denotes the Kronecker delta. In a subsequent step, linear combinations of
c_3, ..., c_n are formed such that c_4, ..., c_n are orthogonal to c_2. As the c_3, ..., c_n are
orthogonal to c_1, the linear combinations are as well and so the subsequent steps do
not destroy the earlier orthogonality relations. This is the basic observation used in
the proof of this method.
The algorithm for reduction to tridiagonal form is then as follows:
c^(1)_i := b_i , i = 1, ..., n
for k := 2, ..., n - 1
    c^(k)_j := sum_{i=k}^{n} γ^(k)_{ji} c^(k-1)_i , j = k, ..., n
end for
where the γ^(k)_ij are such that the new c^(k)_j
are orthogonal to c^(k-1)_{k-1}. That is,
(c^(k)_j)^T c^(k-1)_{k-1} = 0 , j = k + 1, ..., n,
and the matrix
Γ^(k) = (γ^(k)_ij)_{i,j=k,...,n}
is orthogonal with (Γ^(k))^T Γ^(k) = I.
That is, at each iteration k, find an orthogonal transformation of the c^(k-1)_i such
that c^(k)_j is orthogonal to c^(k-1)_{k-1} for j = k + 1, ..., n. This is equivalent to making
the offdiagonal elements α_{j,k-1} and α_{k-1,j} of A zero for j = k + 1, ..., n.
Proposition 3.1. Let the c^(k)_j be computed by the previous algorithm and set
C = (c^(1)_1, c^(2)_2, ..., c^(n-1)_{n-1}, c^(n-1)_n),
where c^(n-1)_{n-1} and c^(n-1)_n come from the final step. Then T = C^T C is
tridiagonal and there is an orthogonal
matrix Q such that C = BQ, so that T = Q^T A Q.
Proof. The proof is based on the fact mentioned earlier that some orthogonality
relations are invariant. It uses induction. The proof is very similar to the one given
in the next section for the corresponding parallel algorithm.
Remark. The original Householder algorithm can be formulated in a similar way
treating B as an inner product. Coordinate transformations change the matrix until
it is tridiagonal.
3.1. Parallel Algorithm. In the following the single program multiple data model
(SPMD) will be used. The basic assumption is that all the processors are programmed
in the same way although their actions might be slightly different. Thus
an SPMD algorithm is described by the pseudocode denoting what one processor
has to do. The data in the matrix B is distributed to the processors by columns in a
cyclic fashion. This means that processor 1 contains columns 1, p + 1, 2p + 1, ..., processor 2
contains columns 2, p + 2, 2p + 2, ..., and so on, where p is the total number of processors. More
formally, processor q contains b_i for i mod p = q mod p. In order
to simplify notation let N_q = {i : i mod p = q mod p, 1 <= i <= n}, where q is the processor
number. Furthermore mod denotes the mod function mapping positive and negative
integers to {0, 1, ..., p - 1}.
c^(1)_i := b_i , i in N_q
if 1 in N_q then broadcast c^(1)_1 else receive c^(1)_1
for k := 2, ..., n - 1
    locally form linear combinations of the c^(k-1)_i , i in N_q , that are orthogonal to c^(k-1)_{k-1}
    gather c^(k)_i , one vector from each processor, and redundantly combine them so that only c^(k)_k remains non-orthogonal to c^(k-1)_{k-1}
end for
As before, the coefficients γ^(k)_ij and ~γ^(k)_ij are such that the modified columns are
orthogonal to the last unmodified one, and the coefficient matrices formed from them are
orthogonal.
The c^(k)_i are overwritten with the ~c^(k)_i. Note that the last calculations of the c^(k)_i are
duplicated on all p processors, which leaves c^(k)_k on all processors ready for the next
step. The only communication required for each iteration is in the gathering of at
most p vectors c^(k)_i.
Proposition 3.2. Let the c^(k)_j be computed by the previous algorithm and set
C = (c^(1)_1, c^(2)_2, ..., c^(n-1)_{n-1}, c^(n-1)_n),
where c^(n-1)_{n-1} and c^(n-1)_n come from the final step. Then T = C^T C is
tridiagonal and there is an orthogonal
matrix Q such that C = BQ, so that T = Q^T A Q.
Proof. Let
C^(k) = (c^(1)_1, ..., c^(k-1)_{k-1}, c^(k)_k, ..., c^(k)_n), so that C^(1) = B.
First show that, for k = 1, ..., n - 1:
1. there is an orthogonal matrix Q^(k) such that C^(k) = B Q^(k);
2. the linear hull of c^(k)_{k+1}, ..., c^(k)_n is orthogonal to the linear hull of the c^(1)_1, ..., c^(k-1)_{k-1};
and
3. T^(k), the leading (k + 1) x (k + 1) submatrix of (C^(k))^T C^(k), is tridiagonal.
Proposition 3.2 is a consequence of this, obtained by setting k = n - 1. The proof
uses induction over k. The statement is easily seen to be true in the case of k = 1
with Q^(1) = I, the n dimensional identity matrix. In this case the orthogonality
conditions are empty and the matrix T^(1) is a two by two matrix and thus tridiagonal.
The remainder of the proof consists of proving the induction step. Assume that
the three properties are valid for k = m. Then it has to be shown that they are
also valid for k = m + 1.
From the construction it follows that C^(m+1) = C^(m) G^(m) for an orthogonal G^(m).
Thus, with Q^(m+1) = Q^(m) G^(m), one retrieves the first property. In particular, the
existence of G^(m) follows from the existence of the constructed Householder matrices.
Then the linear hull of c^(m+1)_{m+2}, ..., c^(m+1)_n is a subspace of the linear hull of
c^(m)_{m+1}, ..., c^(m)_n and thus orthogonal to c^(1)_1, ..., c^(m-1)_{m-1}. Furthermore they have been
constructed to be orthogonal to c^(m)_m. From this the second property follows.
Finally, T^(m+1) contains T^(m), which is tridiagonal,
and it remains to show that the m + 2nd column of T^(m+1)
has zeros in its first m rows. But the first m elements of this column are just
the inner products of c^(m+1)_{m+2} with c^(j)_j , j = 1, ..., m. They are zero because of the second property.
This proof can be used for both the sequential and the parallel algorithm.
In practical implementations the orthogonalisation of the c^(k)_j is achieved by forming
the products (c^(k-1)_i)^T c^(k-1)_{k-1}, which correspond to individual elements in the updated
version of the symmetric matrix A. Householder transformations are then
used to zero the relevant off-diagonal elements. At the first stage in each iteration
these transformations are carried out locally and after the gathering step the transformation
on the (at most) p formed elements of A is replicated on all processors.
Although Householder transformations are used here, it would also be possible to
use Givens transformations.
3.2. Error analysis. When using the one-sided reduction to tridiagonal form as one step
in a complete eigensolver there are several possible components to the error. First
there is the Cholesky factorization. However a result in Wilkinson [11] (equation
(4.44.3)) shows that the corresponding error quantity is extremely small. Thus the error
incurred in working with the Cholesky factor B in the subsequent calculations is
minimal.
In the second step, the tridiagonal matrix C (n\Gamma2) is produced by successive reduction
of orthogonal similarity transforms where C (k) is the transformed
matrix calculated after the k'th stage. The error in the eigenvalues of C (n\Gamma2) is
bounded by the numerical error in the transform [11]. After the k'th step this error is
measured by the difference between the computed C^(k) and the product of the exact
transformations P_i applied to B, where P_i is the exact orthogonal matrix corresponding
to the actual computed data at stage i. A bound on this difference follows from
Wilkinson [11] (equation (3.45.3)); it is proportional to 2^{-t}, where t is the word length.
So the error introduced by the reduction to tridiagonal form is small.
The final stage is the calculation of the eigenvalues of the tridiagonal matrix.
These are determined to an accuracy which is high relative to the largest element
of the tridiagonal matrix. This applies for example to eigenvalues computed using
the Sturm procedure. But this result does not guarantee high relative accuracy in
the determination of small eigenvalues so, if this is important, the Jacobi method
becomes the method of choice [4].
4. Calculating Eigenvectors
In order to calculate the eigenvectors of the symmetric matrix, the orthogonal
matrix Q defining the reduction is accumulated. This is achieved by starting with
the identity matrix (distributed cyclically over the processors), then updating it by
multiplying it by the same Householder transformations used to update C. Forming
Q explicitly is preferable to storing the details of the transformations and then
applying them to the eigenvectors of the tridiagonal matrix which is the usual procedure
for sequential implementations. The reason for this is that the matrix of Q
is distributed in a way which renders multiplication from the left inefficient. This is
discussed more fully in the following.
4.1. Calculation of Eigenvectors for One-Processor Version. The n x n symmetric
matrix A is reduced to tridiagonal form by a sequence of Householder transformations
represented by the orthogonal n x n matrix Q, and T = Q^T A Q in R^{n x n} is tridiagonal.
For the one-sided algorithm, we have actually calculated C = BQ with T = C^T C.
The eigendecomposition of T is given by T = V Λ V^T,
where Λ is a diagonal matrix containing the eigenvalues and V in R^{n x n} is the matrix
of eigenvectors. Combining the above two equations gives A = (QV) Λ (QV)^T
and so the eigenvectors of A are the columns of U = QV.
The matrix Q obtained during the tridiagonalization procedure is represented by
the Householder matrices H_1, H_2, ..., H_{n-2}, i.e. Q = H_1 H_2 ... H_{n-2}.
The eigenvector matrix V of the tridiagonal matrix T is represented by its matrix
elements. So in order to get the matrix element representation of U, form the product
U = QV = H_1 H_2 ... H_{n-2} V.
This product can be done in two different ways. The first method multiplies V
with the H_k. Formally, define a sequence U^(k) such that
U^(n-1) = V, U^(k) = H_k U^(k+1), k = n - 2, ..., 1, and U = U^(1).
This method is often used, for example in [8],
where it is called backward transformation.
A second method computes the element-wise representation of Q first by the
recursion Q^(1) = H_1, Q^(k) = Q^(k-1) H_k, k = 2, ..., n - 2, with Q = Q^(n-2), and then
computes the matrix product
U = QV, column by column. This is called forward accumulation in [8].
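A small sketch of this forward-accumulation route, reusing the hypothetical one_sided_tridiagonalize helper above and a dense symmetric eigensolver as a stand-in for a tridiagonal solver.

    import numpy as np

    def eigenvectors_via_forward_accumulation(B):
        """Eigendecompose A = B^T B by one-sided reduction: accumulate Q,
        diagonalize the tridiagonal T = C^T C, and form U = Q V."""
        C, Q = one_sided_tridiagonalize(B)
        T = C.T @ C                          # tridiagonal up to round-off
        eigvals, V = np.linalg.eigh(T)       # stand-in for a tridiagonal solver
        U = Q @ V                            # eigenvectors of A = B^T B
        return eigvals, U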
The difference between these two methods is that the first one applies the Householder
transforms H k from the left and the second applies them from the right. In
addition the second method requires the computation of a product of two matrices
in element form. For the sequential case when using the two-sided Householder re-
duction, the first method is preferred as it avoids matrix multiplication. However,
for the multi-processor version, multiplication from the left by the Householder
transformation requires extra communication when the columns of the matrix of
eigenvectors V are distributed over the processors. In fact, one purpose of the one-sided
reduction is to avoid this communication in the reduction to tridiagonal form.
There is certainly communication required for the matrix multiplication QV but
this is of fewer large blocks of the matrix so will be less demanding.
4.2. Multi-processor Version. In the multi-processor case there is an added complication
in that B is assumed to be cyclically distributed (although it is not explicitly
laid out as such). So use ~B = BP,
where P is the permutation that transforms B to cyclic layout.
The matrix Q is a product of 2 x (n - 2) matrices formed from the Householder
transformations, Q = prod_j H^(1)_j H^(2)_j.
Here, H^(1)_j refers to the transformations carried out locally at each step of the reduction
and H^(2)_j refers to the transformation using one column of B from each processor
which is carried out redundantly on all processors. To find Q start with an identity
matrix distributed cyclically across the processors to match the implicit layout
of B. This matrix is then updated by transforming it with the same Householder
transformations that are used to update B to obtain the tridiagonal matrix.
The matrix of eigenvectors V of the tridiagonal matrix is obtained in block layout
whilst the matrix Q is in cyclic layout. To find the eigenvectors of the original
symmetric matrix this must be taken into account, so instead of QV
we need QPV. The cyclic ordering must then be reversed, and finally the eigenvectors
are given by pre-multiplying QPV with the permutation.
Pre-multiplication by the permutation P involves a re-ordering of the coefficients of
the eigenvectors which is carried out locally on the processor and does not involve
any communication.
5. Timings
The parallel one-sided reduction to tridiagonal has been implemented and tested
on the Fujitsu VPP500. The VPP500 is a parallel supercomputer consisting of vector
processors connected by a full crossbar network. The theoretical peak performance
of each PE is 1.6 Gflops and the maximum size of memory for each processor is
Gbyte. The VPP500 is scalable from 4 to 222 processing elements but access
was available only to a 16 processor machine. Each processor can perform send
and receive operations simultaneously through the crossbar network at a peak data
transfer rate of 400 Mbytes/s each.
The one-sided algorithm is particularly suited to the architecture of the VPP500
because the calculation of the elements of the updated matrix A from the current
version of C are vectorisable with loops of length n, the size of the input matrix.
In the conventional two-sided Householder reduction the vector length decreases at
each iteration.
Table 1 shows some timings and speeds obtained on up to a 16 processor VPP500.
The two times and speed given are, first, for the reduction without accumulating
the transformations for later eigenvectors calculations and, second, for the reduction
with accumulation. The speed is given in Gflops. The code was written in the
parallel language VPP-Fortran which is basically FORTRAN77 with added compiler
directives to achieve parallel constructs such as data layout, interprocessor
communication and so on.
From these performance figures it appears that the algorithm is scalable in so far
as its performance is maintained as the size of the problem is increased along with the
Table 1. Timings and Speed for Matrices of Increasing Size
1024 3.8 .846 5.3 1.018
2048 26 1.000 37 1.164
3072 28 3.135 43 3.371
number of processors. The one-sided reduction to tridiagonal of a matrix of size 2048
using 2 processors achieves nearly 70% of peak performance and approximately 60%
for a matrix of size 4096 using 4 processors. A formal analysis of the communication
required by the one-sided algorithm gives the communication volume proportional
to p(p - 1)n^2, where p is the number of processors. As the computational load is
proportional to n^3, isoefficiency is obtained when p(p - 1)n^2 / n^3 is constant, that is,
when n grows in proportion to p(p - 1).
The scalability of the algorithm is evident if speeds for matrices of size n
on p processors are compared with those of matrices of size 4n using 2p processors.
For example, comparing the speeds for a matrix of size n on p processors with those for a
matrix of size 4n on 2p processors in Table 1 shows the doubling of Gflop rate.
It is interesting to compare these performance figures with other published results
for alternative parallel two-sided Householder reductions to tridiagonal. The comparisons
can only be general as the algorithms and machine architectures are very
different. The most straightforward comparison is time taken to reduce a matrix of
fixed size measured on machines of similar peak Gflop rate. In practice, the first step
of the Cholesky factorization adds an overhead of about one tenth of the time taken
for the one-sided reduction to tridiagonal. All accessible published results refer to
algorithms implemented on Intel machines.
Dongarra and van de Geijn [5] give times for a parallel reduction to tridiagonal
using panel wrapped storage on 128 nodes of a 520 node Intel Touchstone Delta.
Equating peak Mflop rates suggests that this is comparable to a 6 processor VPP500.
For a matrix of size 4000 their implementation on the Intel took twice as long as
the one-sided reduction of the same size matrix on a 4 processor VPP500.
The ScaLAPACK implementation of a parallel reduction to tridiagonal is given
by Choi, Dongarra and Walker [3]. Extrapolating from their graphs of Gflop rates it
can be seen that their times taken for various sized problems are two or three times
that taken by the one-sided algorithm on the same size problems on a VPP500 of
comparable peak performance.
Smith, Hendrickson and Jessup [10] use a square torus-wrap mapping of matrix
elements to processors and tested their code on an Intel machine corresponding to
a 12 processor VPP500. Their two-sided algorithm can be inferred to have taken
about the same time as a slightly larger problem on a 16 processor VPP500 using
the one-sided algorithm. The Smith et al algorithm is more sophisticated than the
other two-sided algorithms as it uses the torus-wrap mapping and Level 3 BLAS.
6. Conclusion
A new algorithm for reduction of a symmetric matrix to tridiagonal as the first step
in finding the eigendecomposition has been developed. Starting with the Cholesky
factorization of the symmetric matrix, orthogonal transformations based on Householder
reductions are applied to the factor matrix until the tridiagonal form is
reached. This is referred to as a one-sided reduction and leads to the updating
of the factor matrix at each iteration being rank one rather than rank two as in
the conventional Householder reduction to tridiagonal. Transformations can also be
accumulated to allow for calculation of eigenvectors. This algorithm is suited to
parallel/vector architectures such as the Fujitsu VPP500 where it has been shown
to perform well. In a complete calculation of the eigenvalues and eigenvectors of
a symmetric matrix the extra time for the Cholesky factorization is observed to
be about one tenth of that required for the reduction to tridiagonal so it is not a
significant overhead.
7. Acknowledgement
The authors would like to thank Prof. R. Brent for his helpful comments and Mr.
Makato Nakanishi of Fujitsu Japan for help with access to the VPP500.
--R
The solution of singular-value and symmetric eigenvalue problems on multiprocessor arrays
The design of a parallel
Jacobi's method is more accurate than QR
Reduction to condensed form for the eigenvalue problem on distributed memory architectures
Block reduction of matrices to condensed forms for eigenvalue computations
A fully parallel algorithm for the symmetric eigenvalue problem
Matrix Computations
The symmetric eigenvalue problem
A parallel algorithm for Householder tridiagonal- ization
The algebraic eigenvalue problem
A Parallel Ordering Algorithm for Efficient One-Sided Jacobi SVD Computations
--TR | one-sided reductions;reduction algorithms;parallel computing |
340195 | Spatial Color Indexing and Applications. | We define a new image feature called the color correlogram and use it for image indexing and comparison. This feature distills the spatial correlation of colors and when computed efficiently, turns out to be both effective and inexpensive for content-based image retrieval. The correlogram is robust in tolerating large changes in appearance and shape caused by changes in viewing position, camera zoom, etc. Experimental evidence shows that this new feature outperforms not only the traditional color histogram method but also the recently proposed histogram refinement methods for image indexing/retrieval. We also provide a technique to cut down the storage requirement of the correlogram so that it is the same as that of histograms, with only negligible performance penalty compared to the original correlogram.We also suggest the use of color correlogram as a generic indexing tool to tackle various problems arising from image retrieval and video browsing. We adapt the correlogram to handle the problems of image subregion querying, object localization, object tracking, and cut detection. Experimental results again suggest that the color correlogram is more effective than the histogram for these applications, with insignificant additional storage or processing cost. | Introduction
In recent times, the availability of image and video
resources on the World-Wide Web has increased
tremendously. This has created a demand for effective
and flexible techniques for automatic image
retrieval and video browsing [4, 8, 9, 15, 30, 31, 33,
40]. Users need high-quality image retrieval (IR)
systems in order to find useful images from the
masses of digital image data available electroni-
cally. In a typical IR system, a user poses a query
by providing an existing image (or creating one by
drawing), and the system retrieves other "similar"
images from the image database. Content-based
video browsing tools also provide users with similar
capabilities - a user provides an interesting
frame as a query, and the system retrieves other
similar frames from a video sequence.
Besides the basic image retrieval and video processing
tasks, several related problems also need to
be addressed. While most IR systems retrieve images
based on overall image comparison, users are
typically interested in finding objects [7, 9, 6]. In
this case, the user specifies an "interesting" sub-region
(usually an interesting object) of an image
as a query. The system should then retrieve images
containing this subregion (according to human
perception) or object from a database. This
task, called image subregion querying, is made
challenging by the wide variety of effects (such as
different viewing positions, camera noise and vari-
ation, object occlusion, etc.) that cause the same
object to have drastically different appearances in
different images.
The system should also be able to solve the
localization problem (also called the recognition
problem), i.e., it should find the location of the
object in an image. The lack of an effective and
efficient image segmentation process for large, heterogeneous
image databases implies that objects
have to be located in unsegmented images, making
the localization problem more difficult.
Similar demands arise in the context of content-based
video browsing. A primary task in video
processing is cut detection, which segments a video
into different camera shots and helps to extract
key frames for video parsing and querying. A flexible
tool for browsing video databases should also
provide users with the capability to pose object-level
queries that have semantic content, such as
"track this person in a sequence of video". To
handle such queries, the system has to find which
frames contain the specified object or person, and
has to locate the object in those frames.
The various tasks described above - image re-
trieval, image subregion querying, object localization
and cut detection - become especially challenging
when the image database is gigantic. For
example, the collection of images available on the
Internet is huge and unorganized. The image data
is arbitrary, unstructured, and unconstrained; and
the processing has to be done in real-time for retrieval
purposes. For these reasons, traditional
(and often slow) computer vision techniques like
object recognition and image segmentation may
not be directly applicable to these tasks and new
approaches to these problems are required.
Consider first the basic problem of content-based
image retrieval. This problem has been
widely studied and several IR systems have been
built [8, 30, 33, 4, 31]. Most of these IR systems
adopt the following two-step approach to search
image databases [42]: (i) (indexing) for each image
in a database, a feature vector capturing certain
essential properties of the image is computed
and stored in a featurebase, and (ii) (searching)
given a query image, its feature vector is com-
puted, compared to the feature vectors in the fea-
turebase, and images most similar to the query
image are returned to the user. An overview of
such systems can be found in [3].
For a retrieval system to be successful, the feature
vector f(I) for an image I should have the
following qualities: (i) should be
large if and only if I and I 0 are not "similar",
(ii) f(\Delta) should be fast to compute, and (iii) f(I)
should be small in size.
Color histograms are commonly used as feature
vectors for images [43, 8, 30, 33]. It has been
shown that the color histogram is a general and
flexible tool that can be used for the various tasks
outlined above.
1.1. Our Results
In this paper, we propose a new color feature for
image indexing/retrieval called the color correlogram
and show that it can be effectively used in
the various image and video processing tasks described
above. The highlights of this feature are:
(i) it includes the spatial correlation of pairs of
colors, (ii) it describes the global distribution of
local spatial correlations of colors, (iii) it is easy
to compute, and (iv) the size of the feature is
fairly small. Experimental evidence shows that
this new feature (i) outperforms both the traditional
histogram method and the very recently
proposed histogram refinement method for image
indexing/retrieval, and (ii) outperforms the
histogram-based approaches for the other video
browsing tasks listed above.
Informally, a correlogram is a table indexed by
color pairs, where the k-th entry for (i, j) specifies
the probability of finding a pixel of color j
at a distance k from a pixel of color i. Such an
image feature turns out to be robust in tolerating
large changes in appearance of the same scene
caused by changes in viewing positions, changes in
the background scene, partial occlusions, camera
zoom that causes radical changes in shape, etc.
(see Figure 4). We provide efficient algorithms to
compute the correlogram.
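A brute-force sketch of the feature for a small image of quantized colors; the L-infinity distance, the distance set, and the normalization are our assumptions, since the precise definition appears later in the paper than this excerpt.

    import numpy as np

    def color_correlogram(img, n_colors, distances=(1, 3, 5, 7)):
        """img: 2-D array of quantized color indices in [0, n_colors).
        Returns gamma[c_i, c_j, k]: estimated probability of finding color c_j
        at L-infinity distance distances[k] from a pixel of color c_i."""
        h, w = img.shape
        gamma = np.zeros((n_colors, n_colors, len(distances)))
        counts = np.zeros((n_colors, len(distances)))
        for x in range(h):
            for y in range(w):
                ci = img[x, y]
                for k, d in enumerate(distances):
                    # Pixels at L-infinity distance exactly d form a square ring.
                    for dx in range(-d, d + 1):
                        for dy in range(-d, d + 1):
                            if max(abs(dx), abs(dy)) != d:
                                continue
                            nx, ny = x + dx, y + dy
                            if 0 <= nx < h and 0 <= ny < w:
                                gamma[ci, img[nx, ny], k] += 1
                                counts[ci, k] += 1
        counts[counts == 0] = 1              # avoid division by zero
        return gamma / counts[:, None, :]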
We also investigate a different distance metric
to compare feature vectors. The L 1 distance met-
ric, used commonly to compare vectors, considers
the absolute component-wise differences between
vectors. The relative distance metric we use calculates
relative differences instead and in most cases
performs better than the absolute metric (the improvement
is significant especially for histogram-based
methods).
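A sketch contrasting the two comparison rules on feature vectors; the exact form of the relative measure is an assumption (a d1-style ratio), used here only to illustrate the idea.

    import numpy as np

    def l1_distance(f1, f2):
        """Absolute component-wise differences."""
        return np.sum(np.abs(f1 - f2))

    def relative_distance(f1, f2):
        """Component-wise differences scaled by the component magnitudes,
        so that large bins do not dominate the comparison."""
        return np.sum(np.abs(f1 - f2) / (1.0 + f1 + f2))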
We investigate the applicability of correlograms
to image retrieval as well as other tasks like image
subregion querying, object localization, and
cut detection. We propose the correlogram intersection
method for the image subregion querying
problem and show that this approach yields significantly
better results than the histogram intersection
method traditionally used in content-based
image retrieval. The histogram-backprojection
approach used for the localization problem in [43]
has serious drawbacks. We discuss these disadvantages
and introduce the idea of correlogram correc-
tion. We show that it is possible to locate objects
in images more accurately by using local color spatial
information in addition to histogram backpro-
jection. We then use correlograms to compare
video frames and detect cuts by looking for adjacent
frames that are very different. Once again,
we show that using the correlogram as the feature
vector yields superior results compared to using
histograms.
Our preliminary results thus indicate that the
correlogram method is a more accurate and effective
approach to these tasks compared to the color
histogram method. What is more, the computational
cost of the correlogram method is about the
same as that of other simpler approaches, such as
the histogram method.
1.2. Organization
Section 2 gives a brief summary of related work.
In Section 3, we define the color correlogram and
show how to compute it efficiently. Section 4 deals
with the content-based image retrieval problem
and the use of the correlogram for this problem.
Section 5 discusses the use of the correlogram for
image subregion querying. Applications of the
correlogram to video browsing problems are described
in Section 6. Finally, Section 7 concludes
with some remarks and scope for further work.
2. Related Work
Color histograms are commonly used as image
feature vectors [43, 8, 30, 33] and have proved
to be a useful and efficient general tool for various
applications, such as content-based image retrieval
[8, 30, 33], object indexing and localization
[43, 27], and cut detection for video segmentation
[1]. A color histogram describes the global color
distribution in an image. It is easy to compute
and is insensitive to small changes in viewing positions
and partial occlusion. As a feature vector
for image retrieval, it is susceptible to false posi-
tives, however, since it does not include any spatial
information. This problem is especially acute
for large databases, where false positives are more
likely to occur. Moreover, the histogram is not
robust to large appearance changes. For instance,
the pair of images shown in Figure 1 (photographs
of the same scene taken from different viewpoints)
are likely to have quite different histograms 1 .
Color histograms are also used for image subregion
querying and object localization [43]. These
two problems are closely related to object recog-
nition, which has been studied for a long time in
computer vision [37]. Since conventional object
recognition techniques cannot recognize general
objects in general contexts (as in the natural imagery
and real videos), some work has been done
for finding objects from image databases [7, 9].
These techniques, however, are trained for some
specific tasks, such as finding naked people, grouping
trees, etc. Color histograms are also widely
used in video processing. Though there are several
sophisticated techniques for video cut detec-
tion, Boreczky and Rowe [1] report that the simple
color histogram yields consistently good results
compared to five different techniques.
We now briefly discuss some other related work
in the areas of content-based image retrieval, image
subregion querying, object localization, and
cut detection.
2.1. Content-based Image Retrieval
Several recently proposed schemes incorporate
spatial information about colors to improve upon
the histogram method [18, 40, 41, 35, 11, 32, 31].
Fig. 1. Two "similar" images with different histograms.
One common approach is to divide images into
subregions and impose positional constraints on
the image comparison. Another approach is to
augment the histogram with some spatial information.
Hsu et al. [18] select two representative colors
signifying the "background" and the principal
"object" in an image. The maximum entropy algorithm
is then used to partition an image into
rectangular regions. Only one selected color dominates
a region. The similarity between two images
is the degree of overlap between regions of the
same color. The method is tested on a small image
database. Unfortunately, this method uses coarse
color segmentation and is susceptible to false positives.
Smith and Chang [40] also partition an image,
but select all colors that are "sufficiently" present
in a region. The colors for a region are represented
by a binary color set that is computed using
histogram back-projection [43]. The binary color
sets and their location information constitute the
feature. The absolute spatial position allows the
system to deal with "region" queries.
Stricker and Dimai [41] divide an image into
five fixed overlapping regions and extract the first
three color moments of each region to form a feature
vector for the image. The storage requirements
for this method are low. The use of overlapping
regions makes the feature vectors relatively
insensitive to small rotations or translations.
Pass et al. [32, 31] partition histogram bins by
the spatial coherence of pixels. A pixel is coherent
if it is a part of some "sizable" similar-colored re-
gion, and incoherent otherwise. A color coherence
vector (CCV) represents this classification for each
color in the image. CCVs are fast to compute and
perform much better than histograms. A detailed
comparison of CCV with the other methods mentioned
above is given in [32].
The notion of CCV was further extended in
[31] where additional feature(s) are used to further
refine the CCV-refined histogram. One such
extension uses the center of the image (the center-
most 75% of the pixels are defined as the "center")
as the additional feature. The enhanced CCV is
called CCV with successive refinement (CCV(s))
and performs better than CCV.
Since the image partitioning approach depends
on pixel position, it is unlikely to tolerate large
image appearance changes. The same problem occurs
in the histogram refinement method, which
depends on local properties to further refine color
buckets in histograms. The correlogram method,
however, takes into account the local spatial correlation
between colors as well as the global distribution
of this spatial correlation and this makes the
correlogram robust to large appearance changes
(see Figure 4). Moreover, this information is not
a local pixel property and hence cannot be captured
by histogram refinement approaches.
2.2. Other Image/Video Problems
The image subregion querying problem is closely
related to the object recognition problem, which
has been studied for a long time by the computer
vision community [37]. Some of the early work in
object recognition and detection was pioneered by
Marr [26], who suggested that geometric cues such
as edge, surface and depth information be identified
before object recognition is attempted. Most
of such object recognition systems compare the
geometric features of the model with those of an
image using various forms of search (some of which
are computationally quite intensive [24, 13]).
Such geometric information is hard to extract
from an image, however, because geometric and
photometric properties are relatively uncorrelated
[34], and the central tasks involved in this approach
- edge detection and region segmentation
are difficult for unconstrained data in the context
of image retrieval and video browsing.
An alternative approach to model-based recognition
is appearance matching. First, a database
of object images under different view positions
and lighting conditions is constructed. Then,
principal component analysis is used to analyze
only the photometric properties and ignore geometric
properties [29, 23, 34]. This model-based
method is effective only when the principal components
capture the characteristics of the whole
database. For instance, it yields good results on
the Columbia object database in which all images
have a uniform known background. If there is a
large variation in the images in a database, how-
ever, a small set of principal components is unlikely
to do well on the image subregion querying
task. In addition, the learning process requires
homogeneous data and deals poorly with outliers.
Therefore, this approach seems suitable only for
domain-specific applications, but not for image
subregion querying from a large heterogeneous image
database such as the one used in [31, 21].
Since the color information (e.g. histogram) is
very easy to extract from an image, it has been
successfully used for object indexing, detection,
and localization [43, 8, 30, 27, 39, 2, 44, 9]. We
briefly review some of these approaches below.
Swain and Ballard [43] propose histogram
intersection for object identification and histogram
backprojection for object localization.
The technique is computationally easy, does
not require image segmentation or even fore-
ground/background separation, and is insensitive
to small changes in viewing positions, partial oc-
clusion, and object deformation. Histogram back-projection
is a very efficient process for locating
an object in an image. It has been shown that
this algorithm is not only able to locate an object
but also to track a moving object. The advantages
and disadvantages inherent to histograms in
general are discussed in detail in Section 5.
One disadvantage of color histograms is that
they are sensitive to illumination changes. Slater
and Healey [39] propose an algorithm that computes
invariants of local color distribution and
uses these invariants for 3-D object recognition.
Illumination correction and spatial structure comparison
are then used to verify the potential
matches.
Matas et al. [27] propose the color adjacency
graph (CAG) as a representation for multiple-
colored objects. Each node of a CAG represents
a single color component of the image. Edges of
the CAG include information about adjacency of
color components. CAGs improve over histograms
by incorporating coarse color segmentation into
histograms. The set of visible colors and their adjacency
relationship remain stable under changes
of viewpoint and non-rigid transformations. The
recognition and localization problems are solved
by subgraph matching. Their approach yields excellent
results, but the computational cost of sub-graph
matching is fairly high.
Forsyth et al. [9] offer different object models
in order to achieve object recognition under
general contexts. Their focus is on classification
rather than identification. The central process is
based on grouping (i.e., segmentation) and learn-
ing. They fuse different visual cues such as color
and texture for segmentation; texture and geometric
properties for trees; color, texture and specialized
geometric properties for human bodies.
Cut detection, as a first step to video segmentation
and video querying, has been given much attention
[1]. The simple histogram approach gives
reasonably good results on this problem. His-
tograms, however, are not robust to local changes
in images. Dividing an image into several subregions
may not overcome the problem [15] either.
3. The Correlogram
A color correlogram (henceforth correlogram) expresses
how the spatial correlation of color changes
with distance. A color histogram (henceforth his-
captures only the color distribution in an
image and does not include any spatial correlation
information. Thus, the correlogram is a kind of
spatial extension of the histogram. 2
3.1. Notation
Let $I$ be an $n \times n$ image. (For simplicity of exposition,
we assume that the image is square.) The
colors in $I$ are quantized into $m$ colors $c_1, \ldots, c_m$.
(In practice, $m$ is deemed to be a constant and
hence we drop it from our running time analysis.)
For a pixel $p = (x, y) \in I$, let $I(p)$ denote its
color. Thus, the notation $p \in I_c$ is synonymous
with $I(p) = c$. For convenience, we use the
$L_\infty$-norm to measure the distance between pixels,
i.e., for pixels $p_1 = (x_1, y_1)$ and $p_2 = (x_2, y_2)$ we
define $|p_1 - p_2| \triangleq \max\{|x_1 - x_2|, |y_1 - y_2|\}$. We
denote the set $\{1, 2, \ldots, n\}$ by $[n]$.
3.2. Definitions
The histogram $h$ of $I$ is defined for $i \in [m]$ by
$$h_{c_i}(I) \;\triangleq\; n^2 \cdot \Pr_{p \in I}\bigl[\, p \in I_{c_i} \bigr].$$
For any pixel in the image, $h_{c_i}(I)/n^2$ gives the
probability that the color of the pixel is $c_i$. The
histogram can be computed in $O(n^2)$ time, which
is linear in the image size.
Let a distance $d \in [n]$ be fixed a priori. Then,
the correlogram of $I$ is defined for $i, j \in [m]$ and $k \in [d]$ as
$$\gamma^{(k)}_{c_i, c_j}(I) \;\triangleq\; \Pr_{p_1 \in I_{c_i},\, p_2 \in I}\bigl[\, p_2 \in I_{c_j} \;\big|\; |p_1 - p_2| = k \bigr].$$
Given any pixel of color $c_i$ in the image, $\gamma^{(k)}_{c_i, c_j}(I)$
gives the probability that a pixel at distance $k$
away from the given pixel is of color $c_j$. Note
that the size of the correlogram is $O(m^2 d)$. The
autocorrelogram of $I$ captures spatial correlation
between identical colors only and is defined by
$$\alpha^{(k)}_{c}(I) \;\triangleq\; \gamma^{(k)}_{c, c}(I).$$
This information is a subset of the correlogram
and requires only $O(md)$ space.
While choosing d to define the correlogram, we
need to address the following. A large d would
result in expensive computation and large storage
requirements. A small d might compromise the
quality of the feature. We consider this issue in
Section 4.1.
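To make the preceding definitions concrete, the following sketch computes the color histogram, the pair counts, and the autocorrelogram of a quantized image directly from the definitions, using the chessboard pixel distance. The function names, the NumPy representation, and the policy of simply ignoring out-of-image neighbors are our own illustration, not part of the original system; the normalization by 8k ignores boundary truncation, matching the "modulo boundaries" convention used later.

```python
import numpy as np

def histogram(img, m):
    # h[c] = number of pixels of color c
    return np.bincount(img.ravel(), minlength=m)

def correlogram_naive(img, m, d):
    """Direct transcription of the definition; O(n^2 d^2) time, for reference only."""
    rows, cols = img.shape
    counts = np.zeros((d, m, m))                      # Gamma^{(k)}_{ci,cj}
    for x in range(rows):
        for y in range(cols):
            ci = img[x, y]
            for k in range(1, d + 1):
                for dx in range(-k, k + 1):
                    for dy in range(-k, k + 1):
                        if max(abs(dx), abs(dy)) != k:
                            continue                  # keep only pixels at chessboard distance k
                        u, v = x + dx, y + dy
                        if 0 <= u < rows and 0 <= v < cols:
                            counts[k - 1, ci, img[u, v]] += 1
    h = np.maximum(histogram(img, m), 1).astype(float)
    ks = 8 * np.arange(1, d + 1)                      # 8k candidate pixels at distance k
    return counts / (h[None, :, None] * ks[:, None, None])

def autocorrelogram(gamma):
    # alpha[k, c] = gamma[k, c, c]
    return np.einsum('kcc->kc', gamma)
```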
Example 1. Consider the simple case when $d = 8$.
Two sample images are shown in Figure 2. The
autocorrelograms corresponding to these two images
are shown in Figure 3. The change of autocorrelation
of the foreground color (yellow) with distance is
perceptibly different for these images. 3
3.3. Distance Metrics
The L 1 and L 2 norms are commonly used distance
metrics when comparing two feature vectors. In
practice, the L 1 norm performs better than the L 2
norm because the former is more robust to outliers
[38]. Hafner et al. [14] suggest using a more
sophisticated quadratic form of distance metric,
which tries to capture the perceptual similarity
between any two colors. To avoid intensive computation
of quadratic functions, they propose to
use low-dimensional color features as filters before
using the quadratic form for the distance metric.
We will use the $L_1$ norm for comparing histograms
and correlograms because it is simple and robust.
The following formulae are used to compute the
distance between images $I$ and $I'$:
$$|I - I'|_{h, L_1} \;\triangleq\; \sum_{i \in [m]} \bigl| h_{c_i}(I) - h_{c_i}(I') \bigr| \qquad (4)$$
$$|I - I'|_{\gamma, L_1} \;\triangleq\; \sum_{i, j \in [m],\, k \in [d]} \bigl| \gamma^{(k)}_{c_i, c_j}(I) - \gamma^{(k)}_{c_i, c_j}(I') \bigr| \qquad (5)$$
From these equations, it is clear that the contributions
of different colors to the dissimilarity are
equally weighted. Intuitively, however, this contribution
should be weighted to take into account
some additional factors.
Example 2. Consider two pairs of images $\langle I_1, I_1' \rangle$
and $\langle I_2, I_2' \rangle$, where the pixel counts for color bucket $i$
are large in the first pair and small in the second.
Even though the absolute difference in the pixel
count for color bucket $i$ is 50 in both cases, clearly
the difference is more significant for the second
pair of images.
Thus, the difference $|h_{c_i}(I) - h_{c_i}(I')|$ in Equation
(4) should be given more importance if $h_{c_i}(I)$ and
$h_{c_i}(I')$ are small, and vice versa. We could therefore
consider replacing this expression in Equation (4) by
$$\frac{|h_{c_i}(I) - h_{c_i}(I')|}{1 + h_{c_i}(I) + h_{c_i}(I')}$$
(the 1 in the denominator is added to prevent division
by zero).
Fig. 2. Sample images: image 1, image 2.
Fig. 3. Autocorrelograms for images in Figure 2.
This intuition has theoretical justification in
[17], which suggests that it is sometimes better
to use a "relative" measure of distance $d_\nu$. For
$\nu > 0$, $d_\nu$ is defined by
$$d_\nu(x, y) \;\triangleq\; \frac{|x - y|}{\nu + x + y}.$$
It is straightforward to verify that (i) $d_\nu$ is a
metric, and (ii) $d_\nu(x, y) < 1$ for all $x, y \ge 0$;
$d_\nu$ can be applied to feature vectors also. We
have set $\nu = 1$. So the $d_1$ distance metric for
histograms and correlograms is:
$$|I - I'|_{h, d_1} \;\triangleq\; \sum_{i \in [m]} \frac{|h_{c_i}(I) - h_{c_i}(I')|}{1 + h_{c_i}(I) + h_{c_i}(I')}$$
$$|I - I'|_{\gamma, d_1} \;\triangleq\; \sum_{i, j \in [m],\, k \in [d]} \frac{|\gamma^{(k)}_{c_i, c_j}(I) - \gamma^{(k)}_{c_i, c_j}(I')|}{1 + \gamma^{(k)}_{c_i, c_j}(I) + \gamma^{(k)}_{c_i, c_j}(I')}.$$
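As an illustration, a minimal sketch of the two distance measures applied to flattened histogram or correlogram feature vectors; the function names are ours and NumPy is assumed.

```python
import numpy as np

def l1_distance(f1, f2):
    # plain L1 distance between two feature vectors
    return np.abs(f1 - f2).sum()

def d1_distance(f1, f2):
    # "relative" d_1 distance: |x - y| / (1 + x + y), summed over all entries
    return (np.abs(f1 - f2) / (1.0 + f1 + f2)).sum()
```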
3.4. An Algorithm
In this section, we look at an efficient algorithm
to compute the correlogram. Our algorithm is
amenable to easy parallelization. Thus, the computation
of the correlogram could be enormously
speeded up.
First, to compute the correlogram, it suffices to
compute the following count (similar to the cooccurrence
matrix defined in [16] for texture analysis
of gray images)
$$\Gamma^{(k)}_{c_i, c_j}(I) \;\triangleq\; \bigl| \{\, (p_1, p_2) : p_1 \in I_{c_i},\; p_2 \in I_{c_j},\; |p_1 - p_2| = k \,\} \bigr|,$$
because
$$\gamma^{(k)}_{c_i, c_j}(I) \;=\; \frac{\Gamma^{(k)}_{c_i, c_j}(I)}{h_{c_i}(I) \cdot 8k}.$$
The denominator is the total number of pixels at
distance $k$ from any pixel of color $c_i$. (The factor
$8k$ is due to the properties of the $L_\infty$-norm.) The
naive algorithm would be to consider each $p_1 \in I$
of color $c_i$ and, for each $k \in [d]$, count all $p_2 \in I$
of color $c_j$ with $|p_1 - p_2| = k$. Unfortunately,
this takes $O(n^2 d^2)$ time. To obviate this expensive
computation, we define the quantities
$$\lambda^{h}_{c}(x, y; k) \;\triangleq\; \bigl| \{\, (x + i, y) : 0 \le i \le k,\; I(x + i, y) = c \,\} \bigr|$$
(and $\lambda^{v}_{c}(x, y; k)$, defined analogously in the vertical direction),
which count the number of pixels of a given color
within a given distance from a fixed pixel in the
positive horizontal/vertical directions.
Our algorithm works by first computing $\lambda^{v}_{c_j}$
and $\lambda^{h}_{c_j}$. We now give an algorithm with a running
time of $O(n^2 d)$ based on dynamic programming.
The following recurrence is easy to check:
$$\lambda^{h}_{c}(x, y; k) \;=\; \lambda^{h}_{c}(x, y; k - 1) + \lambda^{h}_{c}(x + k, y; 0),$$
with the initial condition
$\lambda^{h}_{c}(x, y; 0) = 1$ if $I(x, y) = c$ and $0$ otherwise.
$\lambda^{h}_{c}(x, y; k)$ is computed for all $(x, y) \in I$ and for each
$k \in [d]$ using this recurrence. The correctness
of this algorithm is obvious. Since we do $O(n^2)$
work for each $k$, the total time taken is $O(n^2 d)$.
In a similar manner, $\lambda^{v}_{c}$ can also be computed
efficiently. Now, modulo boundaries, $\Gamma^{(k)}_{c_i, c_j}(I)$ is
obtained by summing, over all $p \in I_{c_i}$, the $\lambda^{h}_{c_j}$ and
$\lambda^{v}_{c_j}$ counts of color $c_j$ along the four sides of the
square of radius $k$ centered at $p$.
This computation takes just $O(n^2)$ time.
The hidden constants in the overall running
time of O(n 2 d) are very small and hence this algorithm
is extremely efficient in practice for small
d.
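The following sketch realizes the lambda-based dynamic program with cumulative sums, which is an equivalent way of applying the recurrence above. The boundary-clipping policy, the prefix-sum representation, and the names are our own assumptions, intended as a reading aid rather than a reproduction of the authors' implementation.

```python
import numpy as np

def correlogram_dp(img, m, d):
    """O(n^2 d) correlogram (treating the number of colors m as a constant):
    for every pixel, count the colors on the boundary of the chessboard square
    of radius k using horizontal/vertical run counts (lambda^h, lambda^v)."""
    R, C = img.shape
    onehot = np.stack([(img == c).astype(np.int64) for c in range(m)])      # (m, R, C)
    # prefix sums realizing lambda^h and lambda^v (a leading zero simplifies indexing)
    ph = np.concatenate([np.zeros((m, R, 1), np.int64), onehot.cumsum(axis=2)], axis=2)
    pv = np.concatenate([np.zeros((m, 1, C), np.int64), onehot.cumsum(axis=1)], axis=1)
    lam_h = lambda c, x, y0, y1: ph[c, x, y1 + 1] - ph[c, x, y0]            # img[x, y0..y1]
    lam_v = lambda c, x0, x1, y: pv[c, x1 + 1, y] - pv[c, x0, y]            # img[x0..x1, y]

    Gamma = np.zeros((d, m, m), dtype=np.int64)
    for x in range(R):
        for y in range(C):
            ci = img[x, y]
            for k in range(1, d + 1):
                y0, y1 = max(y - k, 0), min(y + k, C - 1)
                x0, x1 = max(x - k + 1, 0), min(x + k - 1, R - 1)
                for cj in range(m):
                    t = 0
                    if x - k >= 0:
                        t += lam_h(cj, x - k, y0, y1)        # top side of the square
                    if x + k < R:
                        t += lam_h(cj, x + k, y0, y1)        # bottom side
                    if x0 <= x1 and y - k >= 0:
                        t += lam_v(cj, x0, x1, y - k)        # left side (corners excluded)
                    if x0 <= x1 and y + k < C:
                        t += lam_v(cj, x0, x1, y + k)        # right side
                    Gamma[k - 1, ci, cj] += t
    h = np.maximum(onehot.sum(axis=(1, 2)), 1).astype(float)
    ks = 8 * np.arange(1, d + 1)
    return Gamma / (h[None, :, None] * ks[:, None, None])
```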
3.5. Some Extensions
In this section, we will look at some extensions
to color correlograms. The general theme behind
the extensions are: (1) improve the storage efficiency
of the correlogram while not compromising
its image discrimination capability, and (2) use additional
information (such as edge) to further refine
the correlogram, boosting its image retrieval
performance. These extensions can not only be
used for the image retrieval problem, but also in
other applications like cut-detection (see Section
6.2).
3.5.1. Banded Correlogram In Section 3.4, we
saw that the correlogram (resp. autocorrelogram)
takes $m^2 d$ (resp. $md$) space. Though we will
see that small values of $d$ actually suffice, it will
be more advantageous if the storage requirements
were trimmed further. This leads to the definition
of the banded correlogram for a given $b$. (For
simplicity, assume $b$ divides $d$.) For $l \in [d/b]$,
$$\tilde{\gamma}^{(l)}_{c_i, c_j}(I) \;\triangleq\; \sum_{k = (l-1)b + 1}^{lb} \gamma^{(k)}_{c_i, c_j}(I).$$
In a similar manner, the banded autocorrelogram
can also be defined. The space requirements
for the banded correlogram (resp. banded
autocorrelogram) are $m^2 d / b$ (resp. $md / b$). (Note
that when $b = d$, the single band measures the density of a
color $c_j$ near the color $c_i$, thus suggesting the local
structure of colors.) The distance metrics defined
in Equation (5) are easily extended to this case.
Note that banded correlograms are seemingly
more susceptible to false matches, since
$$|I - I'|_{\tilde{\gamma}, L_1} \;\le\; |I - I'|_{\gamma, L_1},$$
which follows by the triangle inequality. Although
banded correlograms have less detailed information
than correlograms, our results show that the
approximation of $\gamma$ by $\tilde{\gamma}$ has only a negligible effect
on the quality of the image retrieval problem and
other applications.
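A sketch of the banding step, assuming the correlogram (or autocorrelogram) is stored with the distance index first, as in the earlier sketches.

```python
import numpy as np

def banded(gamma, b):
    """Collapse the distance axis into bands of width b by summing.
    gamma has shape (d, m, m) for a correlogram or (d, m) for an autocorrelogram;
    b is assumed to divide d."""
    d = gamma.shape[0]
    assert d % b == 0
    return gamma.reshape(d // b, b, *gamma.shape[1:]).sum(axis=1)
```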
3.5.2. Edge Correlogram The idea of exploiting
spatial correlation between pairs of colors can also
be extended to other image features such as edges.
In the following, we augment the color correlogram
with edge information. This new feature,
called the edge correlogram, is likely to have increased
discriminative power.
Suppose $E : I \to \{0, 1\}$ is the edge information
of image $I$, i.e., $E(p) = 1$ if $p$ is on an edge and $0$
otherwise. (Such information can be obtained using
various edge-detection algorithms.) Now, the
question is whether this useful information can be combined
with (auto)correlograms so as to improve
the retrieval quality even further. We outline one
scheme to do this. In this scheme, each of the $m$
color bins is refined to get $I'$ with $2m$ bins:
$$I'(p) \;=\; \begin{cases} c_i^{+} & \text{if } I(p) = c_i \text{ and } E(p) = 1, \\ c_i^{-} & \text{if } I(p) = c_i \text{ and } E(p) = 0. \end{cases}$$
It is easy to see that the definition of both correlograms
and autocorrelograms directly extend to
this case. The storage requirements become 4m 2 d
(resp. 2md) for correlograms (resp. autocorrelo-
grams). Note however that the number of $p$ such
that $E(p) = 1$ is usually very small. Since we
mostly deal with autocorrelograms, the statistical
importance of $\alpha^{(k)}_{c^{+}}$ becomes insignificant, thus
rendering the whole operation meaningless. A solution
to this problem is to define the edge autocorrelogram,
in which cross correlations between $c^{+}$ and $c^{-}$
are also included. The size of the edge autocorrelogram
is thus only 4md. We can further trim the
storage by the banding technique in Section 3.5.1.
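One possible encoding of the 2m refined bins, given an edge mask from any edge detector; representing c+ / c- as an integer offset is our choice, not the paper's. The resulting 2m-color image can be fed to the same (auto)correlogram routines as before.

```python
import numpy as np

def refine_colors_with_edges(img, edge_mask, m):
    # colors 0..m-1 encode c^- (non-edge pixels); m..2m-1 encode c^+ (edge pixels)
    return img + m * edge_mask.astype(img.dtype)
```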
4. Image Retrieval using Correlograms
The image retrieval problem is the following: let S
be an image database and Q be the query image.
Obtain a permutation of the images in $S$ based on
$Q$, i.e., assign $\mathrm{rank}(I) \in [|S|]$ for each $I \in S$, using
some notion of similarity to $Q$. This problem
is usually solved by sorting the images $I \in S$ according
to $|f(I) - f(Q)|_f$, where $f(\cdot)$ is a function
computing feature vectors of images and $|\cdot|_f$ is
some distance metric defined on feature vectors.
Performance Measure Let $\{Q_1, \ldots, Q_q\}$ be the
set of query images. For a query $Q_i$, let $I_i$ be
the unique correct answer. The following are two
obvious performance measures:
1. The r-measure of a method sums up, over all
queries, the rank of the correct answer, i.e.,
$\sum_{i=1}^{q} \mathrm{rank}(I_i)$. We also use the average r-measure,
which is the r-measure divided by the
number of queries $q$.
2. The $p_1$-measure of a method is given by
$\sum_{i=1}^{q} 1/\mathrm{rank}(I_i)$, the sum (over all
queries) of the precision at recall equal to 1.
The average $p_1$-measure is the $p_1$-measure divided
by $q$. Images ranked at the top contribute more to the
$p_1$-measure. Note that a method is good if it has
a low r-measure and a high $p_1$-measure.
3. Recall vs. Scope: Let $Q$ be a query and let
$Q'_1, \ldots, Q'_a$ be multiple "answers" to the query ($Q$
is then called a category query). Now, the recall $r$ is defined
for a scope $s > 0$ as $r = |\{\, Q'_j : \mathrm{rank}(Q'_j) \le s \,\}| / a$.
Since it is very hard to identify all relevant images
in a huge database like ours, using this measure
is much simpler than using the traditional recall
vs. precision. Note however that this measure still
evaluates the effectiveness of the retrieval [18, 40].
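A small sketch of these measures, assuming each query's correct answer(s) come with their rank(s) in the returned list (1 = best); the helper names are ours.

```python
def r_measure(ranks):
    # sum of the ranks of the unique correct answers over all queries (lower is better)
    return sum(ranks)

def p1_measure(ranks):
    # sum over queries of the precision at recall 1, i.e. 1/rank (higher is better)
    return sum(1.0 / r for r in ranks)

def recall_at_scope(answer_ranks, scope):
    # fraction of a category query's answer images ranked within the top `scope`
    return sum(r <= scope for r in answer_ranks) / len(answer_ranks)
```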
Organization Section 4.1 lists some efficiency
considerations we take into account while using
correlograms for image retrieval. Section 4.2 describes
our experimental setup and Section 4.3
provides the results of the experiments.
4.1. Efficiency Considerations
As image databases grow in size, retrieval systems
need to address efficiency issues in addition to the
issue of retrieval effectiveness. We investigate several
general methods to improve the efficiency of
indexing and searching, without compromising effectiveness
Parallelization The construction of a featurebase
for an image database is readily parallelizable. We
can divide the database into several parts, construct
featurebases for these parts simultaneously
on different processors, and finally combine them
into a single featurebase for the entire database.
Partial Correlograms In order to reduce space
and time requirements, we choose a small value
of d. This does not impair the quality of correlograms
or autocorrelograms very much because
in an image, local correlations between colors are
more significant than global correlations. Some-
times, it is also preferable to work with distance
sets, where a distance set D is a subset of [d]. We
can thus cut down storage requirements, while still
using a large d. Note that our algorithm can be
modified to handle the case when $D \subset [d]$.
Though in theory the size of a correlogram is
d) (and the size of an autocorrelogram is
O(md)), we observe that the feature vector is not
always dense. This sparsity could be exploited to
cut down storage and speed up computations.
Filtering There is typically a tradeoff between
the efficiency and effectiveness of search algo-
rithms: more sophisticated methods which are
computationally more expensive tend to yield better
retrieval results. Good results can be obtained
without sacrificing too much in terms of efficiency
by adopting a two-pass approach [14]. In the first
pass, we retrieve a set of N images in response to
a query image by using an inexpensive (and possibly
crude) search algorithm. Even though the
ranking of these images could be unsatisfactory,
we just need to guarantee that useful images are
contained in this set. We can then use a more
sophisticated matching technique to compare the
query image to these N images only (instead of the
entire database), and the best images are likely
to be highly ranked in the resulting ranked list.
It is important to choose an appropriate N in
this approach 4 - the initially retrieved set should
be good enough to contain the useful images and
should be small enough so that the total retrieval
time is reduced.
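A sketch of the two-pass filtering idea; cheap_dist and fine_dist stand for, e.g., a histogram distance and a correlogram distance, and the names are illustrative only.

```python
def two_pass_retrieval(query, database, cheap_dist, fine_dist, N):
    # first pass: keep the N best images under the inexpensive feature
    shortlist = sorted(database, key=lambda img: cheap_dist(query, img))[:N]
    # second pass: re-rank only the shortlist with the expensive feature
    return sorted(shortlist, key=lambda img: fine_dist(query, img))
```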
4.2. Experimental Setup
The image database consists of 14,554 color JPEG
images of size 232 \Theta 168. This includes 11,667 images
used in Chabot [30], 1,440 images used in
QBIC [8], and 1,005 images from Corel. It also
includes a few groups of images in PhotoCD format
and a number of MPEG video frames from
the web [31]. Our heterogeneous image database
is thus very realistic and helps us evaluate various
methods. It consists of images of animals, hu-
mans, landscapes, various objects like tanks, flags,
etc.
We consider the RGB colorspace with quantization
into 64 colors. To improve performance, we
first smooth the images by a small amount. We
use the distance set $D = \{1, 3, 5, 7\}$ for
computing the autocorrelograms. We use a single band
covering all of $D$ for the banded autocorrelogram. This results in a
feature vector that is as small as the histogram.
Our query set consists of 77 queries. Each of
these queries was manually picked and checked to
have a unique answer. Therefore they serve as
ground truth for us to compare different methods
in a fair manner. In addition, the queries are chosen
to represent various situations like different
views of the same scene, large changes in appear-
ance, small lighting changes, spatial translations,
etc. We also run 4 category queries, each with
multiple answer images (for example, one category query
consists of movie scenes, and Query 4 consists of moving car
images). The correct answers to the unique answer
queries are obtained by an exhaustive manual
search of the whole image database.
We use the L 1 norm for comparing feature
vectors. The feature vectors we use are histograms
coherent vectors with successive
refinement (ccv(s))[31], autocorrelograms
(auto), banded autocorrelograms (b-auto), edge
autocorrelograms (e-auto), and banded edge auto-
correlograms (be-auto). Examples of some queries
and answers (and the rankings according to various
methods) are shown in Figure 4. The query
response time for autocorrelograms is under 2 sec
on a Sparc-20 workstation (just by exhaustive linear
search).
4.3. Results
4.3.1. Unique Answer Queries Observe that all
the correlogram-related methods are on par in
terms of performance and significantly better than
histogram and CCV(s). On average, in the
autocorrelogram-based method, the correct answer
shows up second while for histograms and
CCV-based methods, the correct answer shows up
at about 80th and 40th places. The banded au-
tocorrelograms perform only slightly worse than
the original ones. With the same data size as the
histograms, the banded autocorrelograms retrieve
the correct answers at ranks that are, on average, more than 79 positions better than
histograms. Since the autocorrelograms achieve
really good retrieval results, the edge correlograms
do not generate too much improvement.
Also note that the banded edge autocorrelo-
grams have higher p 1 -measure than the edge au-
tocorrelograms. This is because most of the ranks
go higher while only a few go lower. Though the r-
measure becomes worse, the p 1 -measure becomes
better. It is remarkable that banded autocorrelogram
has the same amount of information as the
histogram, but seems lot more effective than the
latter.
hist: 496. ccv(s): 245. auto: 2.
b-auto: 2. e-auto: 2. be-auto: 2.
hist: 411. ccv(s): 56. auto: 1.
b-auto: 1. e-auto: 1. be-auto: 1.
hist: 367. ccv(s): 245. auto: 1.
b-auto: 1. e-auto: 8. be-auto: 9.
hist: 310. ccv(s): 160. auto: 5.
b-auto: 5. e-auto: 1. be-auto: 1.
Fig. 4. Sample queries and answers with ranks for various methods. (Lower ranks are better; continued below.)
hist: 198. ccv(s): 6. auto: 12.
b-auto: 13. e-auto: 5. be-auto: 4.
hist: 119. ccv(s): 25. auto: 2.
b-auto: 3. e-auto: 1. be-auto: 1.
hist: 19. ccv(s): 74. auto: 1.
b-auto: 1. e-auto: 1. be-auto: 1.
hist: 78. ccv(s): 7. auto: 2.
b-auto: 2. e-auto: 2. be-auto: 2.
Fig. 5. continued
Table
1. Comparison of various image retrieval methods.
Method hist ccv(s) auto b-auto e-auto be-auto
r-measure 6301 3272 172 196 144 157
avg. r-measure 81.8 42.5 2.2 2.5 1.9 2.0
p1-measure 21.25 31.60 58.06 55.77 60.26 60.88
avg. p1-measure 0.28 0.41 0.75 0.72 0.78 0.79
For 73 out of 77 queries, autocorrelograms perform
as well as or better than histograms. In
the cases where autocorrelograms perform better
than color histograms, the average improvement
in rank is 104 positions. In the four cases
where color histograms perform better, the average
improvement is just two positions. Autocor-
relograms, however, still rank the correct answers
within top six in these cases.
Statistical Significance Analysis We adopt the
approach used in [31] to analyze the statistical
significance of the improvements. We formulate
the null hypothesis H 0 which states that the autocorrelogram
method is as likely to cause a negative
change in rank as a non-negative one. Under
$H_0$, the expected number of negative changes
is $M = 77/2 = 38.5$, with a standard deviation
$\sigma = \sqrt{77}/2 \approx 4.39$. The actual number of negative
changes is 4, which is less than $M - 7\sigma$. We can reject
H 0 at more than 99:9% standard significance
level.
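The sign test used above reduces to a simple z-score; the following sketch (with this section's numbers plugged in as a check) is ours.

```python
import math

def sign_test_z(num_queries, num_negative_changes):
    # under H0, negative changes ~ Binomial(n, 1/2): mean n/2, std sqrt(n)/2
    mean = num_queries / 2.0
    std = math.sqrt(num_queries) / 2.0
    return (num_negative_changes - mean) / std

# 77 unique-answer queries with 4 negative changes: z = (4 - 38.5) / 4.39 ~ -7.9
```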
For 67 out of 77 queries, autocorrelograms perform
as well as or better than CCV(s). In the
cases where autocorrelograms perform better than
CCV(s), the average improvement in rank is 66
positions. In the ten cases where CCV(s) perform
better, the average improvement is two positions.
Autocorrelograms, however, still rank the correct
answers within top 12 in these cases. Again, statistical
analysis proves that autocorrelograms are
better than CCV(s).
From a usability point of view, we make the
following observation. Given a query, the user is
guaranteed to locate the correct answer by just
checking the top two search results (on average) in
the case of autocorrelogram. On the other hand,
the user needs to check at least the top 80 search
results (on average) to locate the correct answer
in the case of histogram (or top 40 search results
for the CCV(s)). In practice, this implies that the
former is a more "usable" image retrieval scheme
than the latter two.
4.3.2. Recall Comparison Table 2 shows the
performance of three features on our four category
queries. The L 1 distance metric is used. Once
again, autocorrelograms perform the best.
4.3.3. Relative distance metric Table 3 compares
the results obtained using d 1 and L 1 distance
measures on different features (64 colors).
Using d 1 distance measure is clearly superior. The
improvement is specially noticeable for histograms
and CCV(s) (for instance, for the owl images in
Figure
6).
A closer examination of the results shows, how-
ever, that there are instances where the d 1 distance
measure performs poorly compared to the
distance measure on histograms and CCV(s).
An example is shown in Figure 7.
It seems that the failure of the d 1 measure is related
to the large change of overall image brightness
(otherwise, the two images are almost identi-
cal). We need to examine such scenarios in greater
detail. Autocorrelograms, however, are not affected
by d 1 in this case. Nor does d 1 improve the
performance of autocorrelogram much. In other
words, autocorrelograms seem indifferent to the
Table
2. Scope vs. recall results for category queries. (Larger numbers indicate better performance.)
Recall
Query 1 Query 2
Scope hist ccv(s) auto hist ccv(s) auto
Recall
Query 3 Query 4
Scope hist ccv(s) auto hist ccv(s) auto
Table
3. Comparison of the L1 and d1 distance measures.
                 L1 distance measure      d1 distance measure
Method           hist    ccv(s)  auto     hist    ccv(s)  auto
r-measure        6301    3272    172      926     326     164
p1-measure       21.25   31.60   58.03    47.94   52.09   59.92
hist: 540. ccv(s): 165. auto:4. (L 1 )
hist: 5. ccv(s): 4. auto:4. (d 1 )
Fig. 6. A case where d 1 is much better than L 1 .
hist: 1. ccv(s): 1. auto: 1. (L 1 )
hist: 213. ccv(s): 40. auto: 1. (d 1 )
Fig. 7. A case where d 1 is worse than L 1 .
distance measure. This needs to be formally investigated.
4.3.4. Filtering Table 4 shows the results of applying
a histogram filter before using the autocorrelogram
(we use 64-color histograms and autocorrelograms).
Fig. 8. The query image, the image ranked one, and the image ranked two.
Fig. 9. The change of autocorrelation of yellow color with distance.
As we see, the quality of retrieval even improves
somewhat (because false positives are eliminated).
As anticipated, the query response time is less
since we consider the correlograms of only a small
filtered subset of the featurebase.
4.3.5. Discussion The results show that the autocorrelogram
tolerates large changes in appearance
of the same scene caused by changes in viewing
positions, changes in the background scene,
partial occlusions, camera zoom that causes radical
changes in shape, etc. Since we chose small
values {1, 3, 5, 7} for the distance set D, the
Table 4. Performance of auto(L1) with hist(d1) filter.
Method      unfiltered   filtered
r-measure   172          166
autocorrelogram distills the global distribution of local
color spatial correlations. In the case of camera
zoom (for example, the third pair of images on
the left column of Figure 4), though there are big
changes in object shapes, the local color spatial
correlations as well as the global distribution of
these correlations do not change much. We
illustrate this by looking at how the autocorrelation
of yellow color changes with distance in the
following three images (Figure 8). Notice that the
size of yellow circular and rectangular objects in
the query image and the image ranked one are dif-
ferent. Despite this, the correlation of yellow with
yellow for the local distance of the image ranked
one is closer to that of the query image than the
image ranked, say, two (Figure 9).
5. Image Subregion Querying Using Correlogram
The image subregion querying problem is the fol-
lowing: given as input a subregion query Q of an
image I and an image set S, retrieve from S those
images Q 0 in which the query Q appears according
to human perception (denoted Q ' Q 0 ). The
set of images might consist of a database of still
images, or videos, or some combination of both.
The problem is made even more difficult than image
retrieval by a wide variety of effects that cause
the same object to appear different (such as changing
viewpoint, camera noise and occlusion). The
image subregion querying problem arises in image
retrieval and in video browsing. For example, a
user might wish to find other pictures in which a
given object appears, or other scenes in a video
with a given appearance of a person.
Performance Measure We use the following measures
to evaluate the performance of various competing
image subregion querying algorithms. If $Q_1, \ldots, Q_q$
are the query images, and for the $i$-th
query $Q_i$, $I^{(i)}_1, \ldots, I^{(i)}_{a_i}$ are the only images
that "contain" $Q_i$ (i.e., $Q_i$ "appears in" each $I^{(i)}_j$), then,
due to the presence of false matches,
the image subregion querying algorithm may return
this set of "answers" with various ranks.
1. Average r-measure gives the mean rank of the
answer-images averaged over all queries. It is
given by either of the following expressions:
$$\frac{1}{q} \sum_{i=1}^{q} \frac{1}{a_i} \sum_{j=1}^{a_i} \mathrm{rank}\bigl(I^{(i)}_j\bigr) \qquad (19)$$
$$\frac{\sum_{i=1}^{q} \sum_{j=1}^{a_i} \mathrm{rank}\bigl(I^{(i)}_j\bigr)}{\sum_{i=1}^{q} a_i} \qquad (20)$$
The macroaveraged r-measure given by Equation
19 treats all queries with equal impor-
tance, whereas the microaveraged r-measure
defined by Equation 20 gives greater weightage
to queries that have a larger number of
answers. In both cases a lower value of the
r-measure indicates better performance.
2. Average precision for a query $Q_i$ is given by
$\frac{1}{a_i} \sum_{j=1}^{a_i} \frac{j}{\mathrm{rank}(I^{(i)}_j)}$, where $I^{(i)}_1, \ldots, I^{(i)}_{a_i}$
are the answers for query $Q_i$ in the order that
they were retrieved. This quantity gives the
average of the precision values over all recall
points (with 1:0 being perfect performance).
3. Recall/Precision vs. Scope: For a query $Q_i$
and a scope $s > 0$, the recall $r$ is defined as
$|\{\, I^{(i)}_j : \mathrm{rank}(I^{(i)}_j) \le s \,\}| / a_i$,
and the precision
$p$ is defined as $|\{\, I^{(i)}_j : \mathrm{rank}(I^{(i)}_j) \le s \,\}| / s$.
These measures are simpler than the traditional
average precision measure but still evaluate
the effectiveness of the subregion query
retrieval. For both measures, higher values
indicate better performance.
Organization Section 5.1 explains our approach
to the problem. Section 5.2 describes the experimental
setup and Section 5.3 presents the results.
5.1. Correlogram Intersection
The image subregion querying problem is a harder
problem than image retrieval based on whole image
matching. To avoid exhaustive searching sub-regions
in an image, one scheme is to define intersection
of color histograms [43]. The scheme can
be interpreted in the following manner. (This interpretation
helps us to generalize the method to
correlograms easily.)
Given the histograms for a query Q and an image
I, the intersection of these two histograms can
be considered as the histogram of an abstract entity
notated as the intersection $Q \cap I$ (which will
not be defined but serves as a conceptual and notational
convenience only). With the color count
of the intersection defined as
$$h_{c_i}(Q \cap I) \;\triangleq\; \min\bigl( h_{c_i}(Q),\, h_{c_i}(I) \bigr),$$
we can define the intersection of the histograms of
$Q$ and $I$ as the histogram with these counts.
Note that this definition is not symmetric in $Q$
and $I$. The distance $|Q - Q \cap I|_{h, L_1}$ is a measure
of the presence of $Q$ in $I$. When $Q$ is a subset of
$I$, all the color counts
in $Q$ are less than those in $I$, and the histogram
intersection simply gives back the histogram for
$Q$.
In an analogous manner, we define the intersection
correlogram as the correlogram of the intersection
$Q \cap I$ (again, merely an abstract entity).
With the counts
$$\Gamma^{(k)}_{c_i, c_j}(Q \cap I) \;\triangleq\; \min\bigl( \Gamma^{(k)}_{c_i, c_j}(Q),\, \Gamma^{(k)}_{c_i, c_j}(I) \bigr), \qquad H_{c_i}(Q \cap I) \;\triangleq\; \min\bigl( h_{c_i}(Q),\, h_{c_i}(I) \bigr),$$
we can define the intersection correlogram as
$$\gamma^{(k)}_{c_i, c_j}(Q \cap I) \;\triangleq\; \frac{\Gamma^{(k)}_{c_i, c_j}(Q \cap I)}{H_{c_i}(Q \cap I) \cdot 8k}.$$
Again we measure the presence of $Q$ in $I$ by the
distance $|Q - Q \cap I|_{\gamma, L_1}$, say, if
the $L_1$ metric were chosen. If $Q \subseteq I$, then the latter "should
have at least as many counts of correlating color
pairs" as the former. Thus the counts $\Gamma$ and $H$ for
$Q \cap I$ are again those of $Q$, and the correlogram
of $Q \cap I$ becomes exactly the correlogram of $Q$,
giving
$$|Q - Q \cap I|_{\gamma, L_1} \;=\; 0.$$
We see that the distance between $Q$ and $Q \cap I$
vanishes when Q is actually a subset of I; this
affirms the fact that both correlograms and histograms
are global features. Such a stable property
is not satisfied by all features - for instance,
spatial coherence [31] is not preserved under sub-set
operations. Therefore, methods for subregion
querying based on such unstable features are not
likely to perform as well as the histogram- or correlogram-
based methods.
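A sketch of the intersection distances, assuming histograms are stored as count vectors and correlograms as the pair counts Gamma of shape (d, m, m) together with the color counts h; the normalization by 8k mirrors Section 3.4 and the names are ours. When Q is a subset of I, both functions return 0, as noted above.

```python
import numpy as np

def hist_intersection_distance(hQ, hI):
    # |Q - Q ∩ I| under L1: intersect counts entrywise, then compare with h(Q)
    return np.abs(hQ - np.minimum(hQ, hI)).sum()

def corr_intersection_distance(GammaQ, hQ, GammaI, hI, d):
    # intersect the pair counts and the color counts, renormalize, compare with gamma(Q)
    ks = (8 * np.arange(1, d + 1))[:, None, None]
    gQ = GammaQ / (np.maximum(hQ, 1)[None, :, None] * ks)
    Hmin = np.maximum(np.minimum(hQ, hI), 1)[None, :, None]
    gQI = np.minimum(GammaQ, GammaI) / (Hmin * ks)
    return np.abs(gQ - gQI).sum()
```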
5.2. Experiments
The image database is the same as in Section 4.2.
We use 64 color bins for histograms and autocor-
relograms. The distance set for autocorrelograms
is $D = \{1, 3, 5, 7\}$. Our query set for this task
consists of subregion queries. Queries have 2 to 16 answer
images with the average number of answers
per query being nearly 5. The query set is constructed
by selecting "interesting" portions of images
from the image database. The answer images
contain the object depicted in the query, but often
with its appearance significantly changed due
to changes in viewpoint, or different lighting, etc.
Examples of some queries and answers (and the
rankings according to the histogram and autocorrelogram
intersection methods) are shown in Figure
10.
5.3. Results
The histogram and autocorrelogram intersection
methods for subregion querying are compared in
Tables
5 and 6. For each of the evaluation measures
proposed above, the autocorrelogram performs
better. The average rank of the answer
images improves markedly when the
autocorrelogram method is used, and the average
precision figure improves by an impressive
56% (see
Table
5). Table 6 shows the precision
and recall values for the two methods at various
scopes. Once again, autocorrelograms perform
consistently better than histograms at all scopes.
Doing a query-by-query analysis, we find that au-
tocorrelograms do better in terms of the average
r-measure on 23 of the queries. Similarly,
autocorrelograms yield better average precision on
26 of the queries. Thus, for a variety of performance
metrics, autocorrelograms yield better
results. This suggests that the autocorrelogram
is a significantly superior method for subregion
querying problem.
6. Other Applications of Correlograms
6.1. Localization Using Correlograms
The location problem is the following: given a
query image (also referred to as the target or
model) Q and an image I such that Q ' I, find
the "location" in I where Q is "present". It is
hard to define the notion of location mathematically
because the model is of some size. We use
the location of the center of the model for convenience
This problem arises in tasks such as real-time
object tracking or video searching, where it is necessary
to localize the position of an object in an
image. Given an algorithm that solves the location
problem, tracking an object Q in an image
frame sequence ~ I = I I t is equivalent to
finding the location of Q in each of the I i 's. Efficiency
is also required in this task because huge
amounts of data need to be processed.
To avoid exhaustive searching in the whole image
(template matching is of such kind), histogram
backprojection was proposed to handle
the location problem efficiently. In the following,
we study the histogram backprojection algorithm
first. Then we show how the correlogram can be
used to improve the performance.
The location problem can be viewed as a special
case of the image retrieval problem in the following
manner. Let $I|_p$ denote the subimage of $I$
that has the same size as $Q$ and is located at position $p$. (The assumption
about the size of the subimage is without loss of
generality.) The set of all subimages $I|_p$
present in $I$ constitutes the image database and
Query auto: 1, hist: 1 auto: 6, hist:
Query auto: 2, hist: 50 auto: 5, hist: 113 auto: 9, hist: 220
Query auto: 6, hist: 6 auto: 19, hist: 48 auto: 9, hist: 57
Query auto: 4, hist: 23 auto: 6, hist:
Query auto: 1, hist: 1 auto: 3, hist: 38 auto: 28, hist:
Query auto: 1, hist: 2 auto: 2, hist: 6 auto: 3, hist: 5
Fig. 10. Sample queries and answer sets with ranks for various methods. (Lower ranks are better.)
Table
5. Performance of Histogram and Autocorrelogram Intersection methods - I. (Smaller r-measure and larger precision
are better.)
Method Avg. r-measure (macro) Avg. r-measure (micro) Avg. precision
Hist 56.3 61.3 0.386
Auto 22.5 29.1 0.602
Table
6. Performance of Histogram and Autocorrelogram Intersection methods - II. (Larger values are better.)
Hist 0.273 0.223 0.133 0.311 0.460 0.681
Auto 0.493 0.347 0.165 0.541 0.718 0.850
Q is the query image. The solution Ij p to this
retrieval problem gives p, the location of Q in I.
The above interpretation is a template matching
process. One straightforward approach to the location
problem is template matching. Template
matching takes the query Q as a template and
moves this template over all possible locations in
the image I to find the best match. This method
is likely to yield good results, but is computationally
expensive. Attempts have been made to make
template matching more efficient [36, 28, 2]. The
histogram backprojection method is one such approach
to this problem. This method has some
serious drawbacks, however. In the following, we
explain the problem with the histogram backprojection
scheme.
The basic idea behind histogram backprojection
is (1) to compute a "goodness value" for each pixel
in I (the goodness of each pixel is the likelihood
that this pixel is in the target); and (2) obtain the
subimage (and hence the location) whose pixels
have the highest goodness values.
Formally, the method can be described as follows.
The ratio histogram is defined for a color $c$ as
$$\hat{h}_{c}(I \mid Q) \;\triangleq\; \min\!\left( \frac{h_c(Q)}{h_c(I)},\; 1 \right).$$
The goodness of a pixel $p \in I_c$ is defined to be
$\hat{h}_{c}(I \mid Q)$. The contribution of a subimage $I|_p$ is
given by the sum of the goodness values of its pixels,
$$G_h(I|_p) \;\triangleq\; \sum_{p' \in I|_p} \hat{h}_{I(p')}(I \mid Q).$$
Then, the location of the model is given by
$$\arg\max_{p \in I} \; G_h(I|_p).$$
The above method generally works well in prac-
tice, and is insensitive to changes of image resolution
or histogram resolution [43]. Note that
backprojecting the ratio-histogram gives the same
goodness value to all pixels of the same color. It
emphasizes colors that appear frequently in the
query but not too frequently in the image. This
could result in overemphasizing certain colors in Q.
Fig. 11. False match of histogram backprojection: model image and incorrect answer.
A color $c$ is said to be dominant in $Q$ if
$\hat{h}_{c}(I \mid Q)$ is maximum over all colors. If $I$ has
a subimage Ij p (which may be totally unrelated
to Q) that has many pixels of color c, then this
method tends to identify Q with Ij p , thus causing
an error in some cases.
Figure
11 shows a simple example illustrating
this problem. Suppose Q has 6 black pixels and 4
white pixels, and image I has 100 black pixels and
100 white pixels. Then $\hat{h}_{\mathrm{black}}(I \mid Q) = 6/100 = 0.06$ and $\hat{h}_{\mathrm{white}}(I \mid Q) = 4/100 = 0.04$, so black dominates.
The location of the model according to the back-projection
method is in an entirely black patch,
which is clearly wrong.
Another problem with histogram backprojection
is inherited from histograms which have no
spatial information. Pixels of the same color have
the same goodness value irrespective of their posi-
tion. Thus, false matches occur easily when there
are multiple similarly colored objects, as shown in
the examples of red roses and zebras in Figure 12.
Performance Measure Let an indicator variable
loc(Q; I) be 1 if the location returned by a method
is within reasonable tolerance of the actual location
of Q in I. Then, given a series of queries $Q_1, \ldots, Q_q$ and
corresponding images $I_1, \ldots, I_q$,
the success ratio of the method is given by
$$\frac{1}{q} \sum_{i=1}^{q} \mathrm{loc}(Q_i, I_i).$$
For tracking an object $Q$ in a sequence of frames
$\tilde{I} = I_1, \ldots, I_t$, the success ratio is therefore
$$\frac{1}{t} \sum_{i=1}^{t} \mathrm{loc}(Q, I_i).$$
Organization Section 6.1.1 introduces the correlogram
correction for the location problem. Section
6.1.2 contains the experiments and results.
6.1.1. Correlogram Correction To alleviate the
problems with histogram backprojection, we incorporate
local spatial correlation information by
using a correlogram correction factor in Equation
26. The idea is to integrate discriminating local
characteristics while avoiding local color template
matching [5]. We define a local correlogram
contribution based on the autocorrelogram of the
subimage Ij p so that the goodness of a pixel depends
on its position in addition to its color.
The autocorrelogram value $\alpha^{(k)}_{c}(Q)$ is considered to be the average contribution
of a pixel of color $c$ in $Q$ (for each distance $k$).
For each pixel $p \in I$, the local autocorrelogram
$\alpha^{(k)}_{I(p)}(\{p\})$ is computed for each distance $k \in D$
($D$ should contain only small values so that $\alpha^{(k)}$
captures local information for each $p$),
where $\{p\}$ represents the pixel $p$ along with its
neighbors considered as an image. Now, the correlogram
contribution of $p$ is defined as
$$\mathrm{corr}(p) \;\triangleq\; \sum_{k \in D} \Bigl| \alpha^{(k)}_{I(p)}(\{p\}) - \alpha^{(k)}_{I(p)}(Q) \Bigr|.$$
In words, the contribution of $p$ is the $L_1$-distance
between the local autocorrelogram at $p$ and the
part of the autocorrelogram for $Q$ that corresponds
to the color of $p$.
Combining this contribution with Equation 26,
the final goodness value of a subimage $I|_p$ is given
by a weighted combination of the histogram backprojection
values and the (negated) correlogram contributions of its pixels.
It turns out that the correlogram contribution
by itself is also sensitive and occasionally overemphasizes
less dominant colors. Suppose c is a less
dominant color (say, background color) that has
a high autocorrelation. If I has a subimage Ij p
(which may be totally irrelevant to Q) that has
many pixels of color c with high autocorrelations,
then correlogram backprojection has a tendency
to identify Q with Ij p , thus potentially causing
an error. Since the problems with histograms and
correlograms are in some sense complementary to
each other, the best results are obtained when the
goodness of a pixel is given by a weighted linear
combination of the histogram and correlogram
backprojection contributions - adding the local
correlogram contribution to histogram backprojection
remedies the problem that histograms do
not take into account any local information; the
histogram contribution ensures that background
colors are not overemphasized. We call this correlogram
correction.
This can also be understood by drawing an analogy
between this approach and the Taylor expan-
sion. The goodness value obtained from histogram
backprojection is like the average constant value in
the Taylor expansion; the local correlogram contribution
is like the first order term in the approx-
imation. Therefore, the best results are obtained
when the goodness value of a pixel is a weighted
linear combination of the histogram backprojection
value and the correlogram contribution.
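A sketch of one way to combine the two terms, assuming a precomputed ratio histogram and a per-pixel correlogram contribution map. The linear weighting (lam) and the use of SciPy's uniform_filter to score windows of the model's size are our assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def locate_with_correction(img, ratio_hist, corr_contrib, q_shape, lam=0.5):
    """img: quantized image; ratio_hist[c] = min(h_c(Q)/h_c(I), 1);
    corr_contrib[p]: L1 distance between the local autocorrelogram at p and
    alpha(Q) for p's color (a distance, so it enters with a negative sign)."""
    goodness = lam * ratio_hist[img] - (1.0 - lam) * corr_contrib
    # mean goodness over each window of the model's size (argmax = argmax of the sum)
    window_scores = uniform_filter(goodness, size=q_shape, mode='constant')
    return np.unravel_index(np.argmax(window_scores), window_scores.shape)
```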
6.1.2. Experiments and Results We use the
same database to perform the location experi-
ments. A model image and an image that contains
the model are chosen. For the location prob-
lem, 66 query images and 52 images that contain
these models are chosen and tested. Both the histogram
backprojection and autocorrelogram correction
are tried.
the parameters.
For the tracking problem, we choose three
videos: bus (133 frames), clapton (44 frames), and skydive
(85 frames). We use
0.8 for this problem.
For the location problem, Table 7 shows the results
for 66 queries, and Figure 12 shows some
examples.
For the tracking problem, Table 8 shows the result
of histogram backprojection and correlogram
correction for the three test videos. These results
clearly show that correlogram correction alleviates
many of the problems associated with simple histogram
backprojection.
Figure
13 shows sample outputs.
6.2. Cut Detection Using Correlograms
The increasing amount of video data requires automated
video analysis. The first step to the automated
video content analysis is to segment a
video into camera shots (also known as key frame
extraction). A camera shot $\tilde{I} = I_1, \ldots, I_t$ is an
unbroken sequence of frames from one camera. If
$\tilde{J}$ denotes the sequence of cuts, then a cut $J_j$ occurs
when two consecutive frames $\langle I_j, I_{j+1} \rangle$ are
from different shots.
Cut detection algorithms assume that consecutive
frames in a same shot are somewhat more
similar than frames in a different shot (other gradual
transition, such as fade and dissolve, are not
studied here because certain mathematical models
can be used to treat these chromatic editing
effects). Different cut detectors use different features
to compare the similarity between two consecutive
frames, such as pixel difference, statistical
differences, histogram comparisons, edge dif-
ferences, etc. [1]. One way to detect cuts using
a feature $f$ is by ranking the consecutive pairs $\langle I_i, I_{i+1} \rangle$ according to
$|I_i - I_{i+1}|_f$. Let $\mathrm{cuts}(\tilde{I})$ be the number of actual
cuts in $\tilde{I}$ and $\mathrm{rank}(J_i)$ be the rank of the cut $J_i$
according to this ranking.
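A sketch of this ranking step; frame_features would be, e.g., banded autocorrelograms of the frames, and dist one of the distances from Section 3.3 (the names are ours).

```python
import numpy as np

def rank_candidate_cuts(frame_features, dist):
    # score every adjacent frame pair; the highest-scoring pairs are the likeliest cuts
    scores = [dist(frame_features[i], frame_features[i + 1])
              for i in range(len(frame_features) - 1)]
    return list(np.argsort(scores)[::-1])   # pair indices, most dissimilar first
```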
Histograms are the most commonly used image
features to detect cuts because they are efficient to
Table
7. Results for location problem (66 queries).
Method hist auto
Success Ratio 0.78 0.96
Table
8. Results (success ratios) for the tracking problem.
Method hist auto
bus 0.93 0.99
clapton 0.44 0.78
skydive 0.96 0.96
Fig. 12. Location problem: histogram output, query image, and correlogram output.
compute and insensitive to camera motions. His-
tograms, however, are not robust to local changes
in images, so false positives easily occur in this
case (see Figure 14). Since correlograms have been
shown to be robust to large appearance changes
for image retrieval, we use correlograms for cut
detection.
Performance Measure Recall and precision are
usually used to compare the performance of cut
detection. However, it is difficult to measure the
performance of different algorithms based on recall
vs. precision curves [1]. Therefore we look
at recall and precision values separately. In order
to avoid using "optimal" threshold values, we
use precision vs. scope to measure false positives
and recall vs. scope to measure false negatives
(misses). We choose scope values to be the exact
cut number $\mathrm{cuts}(\tilde{I})$ and $2\,\mathrm{cuts}(\tilde{I})$. We also use the
excessive rank value, which is defined by
$$\frac{1}{\mathrm{cuts}(\tilde{I})} \sum_{i} \bigl( \mathrm{rank}(J_i) - i \bigr),$$
and the average precision value, which is defined by
$$\frac{1}{\mathrm{cuts}(\tilde{I})} \sum_{i} \frac{i}{\mathrm{rank}(J_i)}.$$
Note that a smaller excessive rank value and a
larger average precision value indicate better results
(perfect performance would have values 0 and 1, respectively).
6.2.1. Experiments and Results We use 64 colors
for histograms, and banded autocorrelograms
which have the same size as histograms. We use
5 video clips from television, movies, and com-
mercials. The clips are diverse enough to capture
different kinds of common scenarios that occur in
practice. The results are shown in Table 9 and
Table
10.
The results of our experiments show that
banded autocorrelograms are more effective than
histograms while the two have the same amount
of information. It is certainly more efficient than
dividing an image into 16 subimages [15]. Thus
the autocorrelogram is a promising tool for cut
detection.
Fig. 13. Tracking problem: histogram output, query image, and correlogram output.
Fig. 14. Cut detection: False cuts detected by histogram but not by correlogram.
Table
9. Recall vs. Scope for cut detection. (Smaller values are better.)
hist banded-auto
I) ex. rank value
Table
10. Precision vs. Scope for cut detection. (Larger values are better.)
hist banded-auto
Avg. Prec. Value cuts( ~ I) 2 cuts( ~
I) Avg. Prec. Value
7. Conclusions
In this paper, we introduced the color correlogram
- a new image feature - for solving several problems
that arise in content-based image retrieval
and video browsing. The novelty in this feature
is the characterization of images in terms of the
spatial correlation of colors instead of merely the
colors per se. Experimental evidence suggests that
this information discriminates between "different"
images and identifies "similar" images very well.
We show that correlograms can be computed, pro-
cessed, and stored at almost no extra cost compared
to competing methods, thereby justifying
their use instead of many other features to obtain
better image retrieval quality.
The most important application of correlograms
is to content-based image retrieval (CBIR) sys-
tems. Viewed in this context, a correlogram
is neither a region-based nor a histogram-based
method. Unlike purely local properties, such as
pixel position, and gradient direction, or purely
global properties, such as color distribution, a correlogram
takes into account the local color spatial
correlation as well as the global distribution of
this spatial correlation. While any scheme that is
based on purely local properties is likely to be sensitive
to large appearance changes, correlograms
are stable enough to tolerate these changes; and while
any scheme that is based on purely global properties
is susceptible to false positive matches, correlograms
seem to scale well for CBIR. This is corroborated
by our extensive experiments on large
image collections, where we demonstrate that correlograms
are very promising for CBIR.
One issue that still needs to be resolved satisfactorily
is the following: in general, illumination
changes are very hard to handle in color-based
CBIR systems [12, 45, 10, 39]. During our experi-
ments, we encountered this problem occasionally.
Though the correlogram method performs better
on a relative scale, its absolute performance is not
fully satisfactory. The question is, can correlo-
grams, with some additional embellishments, be
made to address this specific problem?
On a related note, it also remains to be seen
if correlograms, in conjunction with other fea-
tures, can enhance retrieval performance. For in-
stance, how will the correlogram perform if shape
information is used additionally? This brings up
the question of object-level retrieval using correlo-
grams. More work needs to be done in this regard
as to finding a better representation for objects.
Further applications of color correlograms are
image subregion querying and localization, which
are indispensable features of any image management
system. Our notions of correlogram intersection
and correlogram correction seem to perform
well in practice. There is room for improvement
of course, and these need to be investigated in
greater detail. We also apply correlograms to the
problem of detecting cuts in video sequences. An
interesting question that arises here is, can this
operation be done in the compressed domain [48]?
This would cut down the computation time drastically
and make real-time processing feasible.
Another major challenge in this context is:
what distance metric for comparing images is close
to the human perception of similarity? Does a
measure need to be a metric [25]? We also plan
to use supervised learning to improve the results
of image retrieval and the subregion querying task
(we have some initial results in [19]).
In general, the algorithms we propose for various
problems are not only very simple and inexpensive
but are especially easy to incorporate into
a CBIR system if the underlying indexing scheme
is correlogram-based. It pays off more in general
if there is a uniform feature vector that is universally
applicable to providing various functionalities
expected of a CBIR system (like histograms
advocated in [43]). It is unreasonable to expect
any CBIR system to be absolutely fool-proof; fur-
thermore, it is needless to state that the correlogram
is not the panacea. The goal, however, is to
build relatively better CBIR systems. Based on
various experiments, we feel that there is a compelling
reason to use correlograms as one of the
basic building blocks in such systems.
Acknowledgements
Jing Huang and Ramin Zabih were supported
by the DARPA grant DAAL 01-97-K-0104. S.
Ravi Kumar was supported by the ONR Young
award N00014-93-1-0590, the NSF
grant DMI-91157199, and the career grant CCR-
9624552. Mandar Mitra was supported by the
NSF grant IRI 96-24639. Wei-Jing Zhu was supported
by the DOE grant DEFG02-89ER45405.
Notes
1. In our database of 14,554 images, the right image is
considered the 353-rd most similar with respect to the
left image by color histogram.
2. The term "correlogram" is adapted from spatial data
analysis: "correlograms are graphs (or tables) that show
how spatial autocorrelation changes with distance."
3. Interestingly, histogram or CCV may not be able to
distinguish between these two images.
4. Equivalently, we could select some threshold image
score.
References
"A comparison of video shot boundary detection techniques,"
"Using color templates for target identification and tracking,"
"Content-based image retrieval systems,"
"PicHunter: Bayesian relevance feed-back for image retrieval,"
"Finding waldo, or focus of attention using local color information,"
"Query analysis in a visual information retrieval context,"
"Finding naked people,"
"Query by image and video con- tent: The QBIC system,"
"Finding pictures of objects in large collections of images,"
"Color constant color in- dexing,"
"An image database system with content capturing and fast image indexing abilities,"
"Intelligent Image Databases: Towards Advanced Image Retrieval,"
"Localising overlapping parts by searching the interpretation tree,"
"Efficient color histogram indexing for quadratic form distance functions,"
"Digital video indexing in multimedia systems,"
"Statistical and structural approaches to texture,"
"Decision theoretic generalization of the PAC model for neural net and other learning appli- cations,"
"An integrated color-spatial approach to content-based image retrieval,"
"Combin- ing supervised learning with color correlograms for content-based image retrieval,"
"Spatial color indexing and applications,"
"Image indexing using color correlograms,"
"Comparing images using the Hausdorff distance,"
"Object recognition using subspace methods,"
"Object recognition using alignment,"
"Con- densing image databases when retrieval is based on non-metric distances,"
"Representation and recognition of the spatial organization of three-dimensional shapes,"
"On representation and matching of multi-colored objects,"
"Focused color intersection with efficient searching for object detection and image retrieval,"
"Visual learning and recognition of 3-D objects from appearance,"
"Chabot: Retrieval from a relational database of images,"
"Histogram refinement for content-based image retrieval,"
"Comparing images using color coherence vectors,"
"Photobook: Content-based manipulation of image databases,"
"Object indexing using an iconic sparse distributed memory,"
"Content-based image retrieval using color tuple histograms,"
"Using probabilistic domain knowledge to reduce the expected computational cost of template matching,"
"Machine perception of three-dimensional solids,"
"Robust regression and outlier detection,"
"Combining color and geometric information for the illumination invariant recognition of 3-D objects,"
"Tools and techniques for color image retrieval,"
"Color indexing with weak spatial constraints,"
"The capacity of color histogram indexing,"
"Color indexing,"
"Data and model-driven selection using color regions,"
"Indexing colored surfaces in images,"
"Spatial data analysis by example. Vol I.,"
"Color distribution analysis and quantization for image retrieval,"
"Rapid scene analysis on compressed videos,"
340331 | Green''s Functions for Multiply Connected Domains via Conformal Mapping. | A method is described for the computation of the Green's function in the complex plane corresponding to a set of K symmetrically placed polygons along the real axis. An important special case is a set of K real intervals. The method is based on a Schwarz-Christoffel conformal map of the part of the upper half-plane exterior to the problem domain onto a semi-infinite strip whose end contains K-1 slits. From the Green's function one can obtain a great deal of information about polynomial approximations, with applications in digital filters and matrix iterations. By making the end of the strip jagged, the method can be generalized to weighted Green's functions and weighted approximations. | Introduction
Green's functions in the complex plane are basic tools for the
analysis of real and complex polynomial approximations [10,21,24,30,32], which are
of central importance in the fields of digital signal processing [16,17,19] and matrix
iterations [5,6,11,20,28]. The aim of this article is to show that when the domain of
approximation is a collection of real intervals, or more generally symmetric polygons
along the real axis, the Green's function can be computed to high accuracy by Schwarz-
Christoffel conformal mapping. The computation of Schwarz-Christoffel maps has
become routine in recent years with the introduction of Driscoll's Matlab Schwarz-
Christoffel Toolbox [4], a descendant of the second author's Fortran package SCPACK
[26].
The Green's function for a single interval can be obtained by a Joukowsky conformal
map, and related polynomial approximation problems were solved by Chebyshev
in the 1850s [3]. For two disjoint intervals, the Green's function can be expressed using
elliptic functions, and approximation problems were investigated by Akhiezer in the
1930s [2]. For K ? 2 intervals, the Green's function can be derived from a more general
Schwarz-Christoffel conformal map, and the formulas that result were stated in
a landmark article by Widom in 1969 [32]. Polynomial approximations can be readily
computed in this case by the Remes algorithm, which was adapted for digital filtering
by Parks and McClellan [3,18].
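For the single-interval case just mentioned, the Joukowsky map gives the Green's function in closed form, which provides a convenient numerical sanity check for the quantities discussed later. The following minimal Python sketch (not part of the papers cited above) evaluates $g(z) = \log|z + \sqrt{z^2-1}|$ for $E = [-1,1]$ and recovers the capacity $C = 1/2$:

```python
import numpy as np

def green_interval(z):
    # Green's function of the exterior of [-1,1] with pole at infinity:
    # g(z) = log|z + sqrt(z^2-1)|, with the branch chosen so the modulus is >= 1
    # (the inverse Joukowsky map sends the exterior of [-1,1] to the exterior
    # of the unit disk).
    z = np.asarray(z, dtype=complex)
    r = np.sqrt(z * z - 1.0)
    w = np.where(np.abs(z + r) >= np.abs(z - r), z + r, z - r)
    return np.log(np.abs(w))

# g is (numerically) zero on the slit ...
x = np.linspace(-0.99, 0.99, 5)
print(green_interval(x + 1e-12j))
# ... and g(z) - log|z| tends to log 2 = -log C with C = 1/2 (capacity of [-1,1])
for R in [10.0, 100.0, 1000.0]:
    print(R, float(green_interval(R) - np.log(R)))
```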
By a second conformal map, these ideas for intervals can be transplanted to the
more general problem of the Green's function for the region exterior to a string of
symmetric domains along the real axis ([32], p. 230). The conformal maps in question
can usually not be determined analytically, however, and even for the case of intervals
on the real axis, the formula for the Green's function requires numerical integration.
Here, for the case in which the domains are polygonal and thus can be reduced to
intervals by a Schwarz-Christoffel map, we carry out the computations to put these
ideas into practice.
This article originated from discussions with Steve Mitchell of Cornell University,
who is writing a dissertation on applications of these ideas to the design of multirate
filters [15], and we are grateful to him for many suggestions. The contributions of
Jianhong Shen and Gilbert Strang at MIT were also a crucial help to us. Shen and
Strang have studied the accuracy of lowpass digital filters [22,23], and their asymptotic
formulas are directly connected to these Schwarz-Christoffel methods. In addition we
thank Toby Driscoll for his advice and assistance.
Our algorithm makes possible the computational realization of results in approximation
theory going back to Faber, Szeg-o, Walsh, Widom, and Fuchs, among others.
In particular, Walsh, Russell, and Fuchs obtained theorems concerning simultaneous
approximation of distinct entire functions on disjoint sets in the complex plane [8,9,30],
which we illustrate here in Section 6. Wolfgang Fuchs was for many years a leading
figure at Cornell University until his unfortunate death in 1997.
2. Description of the algorithm. Let E be a compact subset of the complex
plane consisting of $K$ disjoint polygons $P_1, \ldots, P_K$, numbered from left to right, with
each polygon symmetric with respect to the real axis. Degenerate cases are permitted
in which a portion of a polygon, or all of it, reduces to a line segment (but not to a
point). The Green's function problem for $E$ is defined as follows:
Green's Function Problem. Find a real function $g$ defined in the
region of the complex plane exterior to $E$ satisfying
$$\Delta g(z) = 0 \ \text{in the exterior of } E, \quad (1a) \qquad g(z) = 0 \ \text{on the boundary of } E, \quad (1b)$$
$$g(z) \sim \log|z| \ \text{for } z \to \infty. \quad (1c)$$
In (1a), \Delta denotes the Laplacian operator, and thus g is harmonic throughout the
complex plane exterior to the polygons P j . Standard results of potential theory ensure
that there exists a unique function g satisfying these conditions [12,13,29,32].
The solution to (1) can be constructed by conformal mapping. What makes this
possible is that the problem is symmetric with respect to the real axis, so it is enough
to find g(z) for the part of the upper half-plane Imz - 0 exterior to E; the solution in
the lower half-plane is then obtained by reflection (the Schwarz reflection principle).
This half-planar region is bounded by the upper halves of the polygons P j and by
the intervals along the real axis that separate the polygons, where the appropriate
boundary condition for g, by symmetry, is the Neumann condition
Restricting the map to the upper half-plane makes the domain simply-connected,
suggesting the following conformal mapping problem.
Conformal Mapping Problem. Find an analytic function $f$ that
maps the portion of the upper half-plane exterior to $E$ (Fig. 1a) conformally
onto a semi-infinite slit strip (Fig. 1c). Only the vertices $f(s_1) = \pi i$ and
$f(t_K) = 0$ are prescribed. The remaining vertices,
and hence the lengths and heights of the slits, are not specified.
Once this mapping problem is solved, the function $g$ defined by $g(z) = \mathrm{Re}\, f(z)$ (2)
Figure 1. Determination of the Green's function $g(z)$ by a composition of two
conformal maps, $f(z) = f_2(f_1(z))$. (a) The problem domain is restricted
to the part of the upper half-plane exterior to the polygons $P_j$. (b) The first
Schwarz-Christoffel map $f_1$ takes this problem domain onto the upper half-plane itself.
(c) The second Schwarz-Christoffel map $f_2$ takes the upper half-plane to a slit semi-infinite
strip. The interval $[s_j, t_j]$ maps to a vertical segment of the end of the strip, at real part $0$.
The gaps along the real axis between the intervals $[s_j, t_j]$ map to horizontal
slits, and the semi-infinite intervals $(-\infty, s_1]$ and $[t_K, \infty)$ map to semi-infinite
horizontal lines with imaginary parts $\pi$ and $0$, respectively. Only the real parts of the
left endpoints of the slits are prescribed; the imaginary parts and the right endpoints,
as well as their pre-images $a_j$, are determined as part of the calculation.
is the Green's function (1) for values of z in the upper half-plane. To see this we
note that g satisfies (1a) because the real part of an analytic function is harmonic,
it satisfies (1b) because of the form of the slit strip, and it satisfies (1c) because the
half-strip has height -. The existence and uniqueness of a solution to the Conformal
Mapping Problem can be derived from standard theory of conformal mapping [12] or
as a consequence of the corresponding facts for the Green's Function Problem.
Figure 2. Composition of a third conformal map, the complex exponential, transplants
the slit strip to the exterior of a disk with radial spikes in the upper half-plane.
Reflection in the real axis completes the map of the problem domain of Fig. 1a, yielding
a function $\Phi(z) = e^{f(z)}$ defined throughout the exterior of $E$.
The function f(z) is a conformal map from one polygon to another, and as such,
it can be represented by Schwarz-Christoffel formulas, an idea going back to Schwarz
and independently Christoffel around 1869. Figure 1 shows how f can be constructed
as the composition of two Schwarz-Christoffel maps. The first one maps the problem
domain in the upper half-plane to the upper half-plane, with the upper half of the
boundary of the polygon P j going to the interval [s This mapping problem is a
standard one, for which a parameter problem must be solved to determine accessory
parameters in the Schwarz-Christoffel formula; see [4,12,26]. By the second Schwarz-
Christoffel map, the upper half-plane is then mapped to the slit strip. This is a
Schwarz-Christoffel problem in the reverse, more trivial direction, with only a linear
parameter problem to be solved to impose the condition that the upper and lower
sides of each slit have equal length. Details can be found in [23] and [32]. A related
linear Schwarz-Christoffel problem involving slits in the complex plane is implicit in
[14].
By composing a third conformal map with the first two, we obtain a picture that
is even more revealing than Fig. 1. Figure 2 depicts the image of the slit strip under
the complex exponential: $w = \Phi(z) = e^{f(z)} = e^{f_2(f_1(z))}$. The vertical segments now
map onto arcs of the upper half of the unit circle, the slits map onto radial spikes
protruding from that circle, and the infinite horizontal lines map to the portion of the
real axis exterior to the circle. The real axis is shown dashed, because we immediately
reflect across it to get a complete picture.
By the composition \Phi(z) of three conformal maps, we have transplanted the
K-connected exterior of the region E of Fig. 1a to the simply-connected exterior of
the spiked unit disk of Fig. 2. (These connectivities are defined with respect to the
Riemann sphere or the extended complex plane $\mathbf{C} \cup \{\infty\}$.) The Green's function for
$E$ is given by the extraordinarily simple formula $g(z) = \log|\Phi(z)|$. (3)
Have we really mapped a K-connected region conformally onto a simply-connected
region? No, this is is not possible, and to resolve what looks like a contradiction
we must think more carefully about reflections. Suppose in Fig. 1a we think of
the finite dashed intervals as branch cuts not to be crossed, and reflect only across
the semi-infinite dashed intervals at the ends. Then the complement of E becomes
simply-connected, and we have indeed constructed a conformal map onto the simply-connected
region of Fig. 2. However, the Schwarz reflection principle permits reflection
across arbitrary straight lines or circular arcs. There is no reason why one should exclude
the finite intervals in Fig. 1a as candidates, which would correspond in the
w-plane to reflection in the protruding spikes of Fig. 2. When such reflections are
allowed, \Phi(z) becomes a multi-valued function whose values depend on paths in the
complex plane-or equivalently, a single-valued conformal map of Riemann surfaces.
Even under arbitrary reflections with arbitrary multi-valuedness, fortunately, equation
(3) remains valid, since all reflections preserve the absolute value j\Phi(z)j and g(z)
depends only on this absolute value. Therefore, for the purpose of calculating Green's
functions, we escape the topological subtleties of the conformal mapping problem.
The phenomenon of multivaluedness is a familiar one in complex analysis. An
analysis of the multivalued function \Phi(z) is the basis of Widom's approximation theoretic
results in [32], and earlier discussions of the same function can be found, for
example, in [30] and [31].
3. Computed example; electrostatic interpretation. Our first computed
example is presented in detail to illustrate our methods. The region E of Fig. 3(a)
has $K = 2$ polygons, a red hexagon and a green square. (The hexagon is defined by the
coordinates $-6.5$, $-5 \pm 1.5i$, $-5.75 \pm 2.25i$, $-8$, and the square by the coordinates $9.5$,
$8.75 \pm 0.75i$, $8$.) In Fig. 3(b), three subsets of the real axis have been introduced, blue
and turquoise and magenta, to complete the boundary of the half-planar region. Plots
(c) and (d) show the conformal images of this region as a slit strip and the exterior
of the disk with a spike. The color coding is maintained to indicate which boundary
segments map to which.
All of these computations, like those in our later examples, have been carried out
with the high accuracy that comes cheaply in Schwarz-Christoffel mapping [26]. Thus
our figures can be regarded as exact for plotting purposes. For the sake of those who
may wish to duplicate some of these computations, in the sections below we report
occasional numbers, which are believed in each case to be correct to all digits listed.
Green's functions have a physical interpretation in terms of two-dimensional electric
charge distributions, that is, cross-sections of infinite parallel line charge distributions
in three dimensions. In Fig. 3(d), the equilibrium distribution of one (negative)
unit of charge along the unit circle is the uniform distribution, which generates the
associated potential log jwj. By conformal transplantation under the map
maps to a non-uniform distribution along the boundaries of the
polygons P j in the z-plane. This nonuniform charge distribution on the polygons P j
is precisely the minimal-energy, equilibrium charge distribution for these sets. It is
the charge distribution that would be achieved if each polygon were an electrical conductor
connected to the other polygons by wires in another dimension so as to put
them all at the same voltage. Mathematically, the charge distribution is distinguished
by the special property that it generates the potential g(z) with constant value on the
boundaries of the polygons.
4. Asymptotic convergence factor, harmonic measure, and capacity.
Every geometrical detail of Fig. 3 has a mathematical interpretation for the Green's
function problem, which becomes a physical interpretation if we think in terms of
equilibrium charge distributions. We now describe several items that are particularly
important.
(a) Problem domain, showing the computed
critical level curve c as well as one
lower and one higher level curve.
z-plane
(b) To obtain these results, first the real axis
is drawn in as an artificial boundary. Heavy
lines mark the boundary of the new simply-connected
problem domain.
(c) The half-planar region is then transplanted
by a composition of two Schwarz-Christoffel
maps to a slit semi-infinite strip. The real interval
between the polygons (turquoise) maps
to a horizontal slit whose coordinates are determined
as part of the solution. Vertical lines
in the strip correspond to level curves of the
Green's function of the original problem.
(d) Finally, the exponential function maps the
strip to the upper half of the exterior of the
unit disk. The slit becomes a protruding turquoise
spike. Here the Green's function is
log jwj, with concentric circles as level curves.
Reflection extends the circles to the lower half-
plane, and following the maps in reverse produces
the curves of (a).
w-plane
Figure 3. Color-coded computed illustration of our algorithm for an example with
$K = 2$ polygons. The blue, red, turquoise, green, and magenta boundary segments
in the various domains correspond under conformal maps. Fainter lines distinguish
function values obtained by reflection.
Critical point, potential, and level curve. For sufficiently small $\epsilon > 0$, the region
of $\mathbf{C} \setminus E$ where $g(z) < \epsilon$ consists of $K$ disjoint open sets surrounding the polygons $P_j$.
At some value $g_c$, two of these sets first coalesce at a point $z_c \in \mathbf{R}$, which will be
a saddle point of $g(z)$, i.e., a point where the gradient of $g(z)$ and also the complex
derivative $\Phi'(z)$ are zero [30]. We call $z_c$ the critical point, $g_c$ the critical potential,
and the curve $g(z) = g_c$ the critical level curve. (We speak as if $z_c$ is a single point
and just two sets coalesce there, which is the generic situation, but in special cases
there may be more than one critical point and more than two coalescing regions, as
in Fig. 7 below.)
These critical quantities can be immediately obtained from the geometry of our
conformal mapping problem. Let $w_c$ denote the endpoint of the shortest protruding
spike as in Figs. 2 or 3(d). Then $z_c = f_1^{-1}(a_j)$ (where $j$ is the index of
the critical point $a_j$ as in Figure 1), $g_c = \log|w_c|$, and the critical level curve is the
pre-image under $\Phi$ of the circle $|w| = |w_c|$. For the example of Fig. 3,
$g_c = 0.634942$, and the critical level curve is plotted in Fig. 3(a).
Asymptotic convergence factor. In applications to polynomial approximation, as
described in Section 6, the absolute value of the end of the shortest spike is of particular
interest. With the same notation as above, we define the asymptotic convergence factor
associated with $g(z)$ by $\rho = e^{-g_c} = 1/|w_c|$.
For the example of Fig. 3, $\rho = 0.529966$.
Note that g c and ae depend on the shape of the domain E, but not on its scale.
Doubling the sizes of the polygons and the distances between them, for example, does
not change these quantities. They are also invariant with respect to translation of the
set E in the complex plane.
Harmonic measure. Another scale-independent quantity is the proportion $\mu_j$ of
the total charge on each polygon $P_j$, which is known as the harmonic measure of $P_j$
(with respect to the point $z = \infty$). This quantity is equal to $\pi^{-1}$ times the
distance between the appropriate two slits in the strip domain (or a slit and one of
the semi-infinite boundary lines), or equivalently to $\pi^{-1}$ times the angle between two
spikes (or a spike and the real axis) in the w-plane.
For the example of Fig. 3, the slit is at height $\mathrm{Im}\,\sigma_2 = 1.290334$, and dividing by $\pi$
shows that the proportion of charge on the green square is $0.410726$. The density
of charge at particular points along the boundary is equal to $(2\pi)^{-1}|\Phi'(z)|$,
a number that is easy to evaluate since the Schwarz-Christoffel formula expresses $\Phi(z)$
in terms of integrals. (This density can be used to define the harmonic measure of
arbitrary measurable subsets of the boundary of $E$, not just of the boundary of $P_j$.)
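As a quick cross-check of the numbers just quoted for Fig. 3, using only values reported in the text together with the relations $\rho = e^{-g_c}$ and harmonic measure $= \pi^{-1}\times$ slit height as read above:

```python
import math

g_c = 0.634942          # critical potential reported for Fig. 3
rho = 0.529966          # asymptotic convergence factor reported for Fig. 3
slit_height = 1.290334  # Im sigma_2, height of the turquoise slit

print(math.exp(-g_c))          # ~0.529966, matching rho
print(slit_height / math.pi)   # ~0.410726, the harmonic measure of the green square
```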
Capacity. The capacity C (= logarithmic capacity, also called the transfinite
diameter) of a compact set $E \subset \mathbf{C}$ is a standard notion in complex analysis and approximation
theory [1,13]. This scale-dependent number can be defined informally as
the average distance between charges, in the geometric-mean sense, for an equilibrium
charge density distribution on the boundary of $E$. Familiar special cases are $C = R$
for a disk of radius $R$ and $C = L/4$ for an interval of length $L$. For a general domain
$E$, $C$ is equal to the derivative $dz/dw$ evaluated at $z = \infty$, that is, $C = 1/\Phi'(\infty)$.
(Normally one would have absolute values, but for our problem $\Phi'(\infty)$ is real and
positive.)
One way to compute $C$ is to note that $\Phi(z)$ is the composition of $f_1(z)$ and
$\exp(f_2(z))$, in the notation of Figs. 1 and 2, and $f_1'(\infty)$ is just the multiplicative
constant of the first of our two Schwarz-Christoffel maps. Thus the crucial quantity
to determine is the limit of $z/\exp(f_2(z))$ as $z \to \infty$, whose logarithm is given by
$$\lim_{z\to\infty}\,[\log z - f_2(z)] \;=\; \log t_K + \lim_{z\to\infty}\int_{t_K}^{z}\Big(\frac{1}{\zeta} - f_2'(\zeta)\Big)\,d\zeta,$$
since $f_2(t_K) = 0$. This is a convergent integral of Schwarz-Christoffel type that can
be evaluated accurately by numerical methods related to those of SCPACK and the
Toolbox.
Alternatively, we have found that sufficient accuracy can be achieved without the
explicit manipulation of integrals. Using the Schwarz-Christoffel maps, we calculate
the quantities $C(z) = z/\Phi(z)$ for a collection of values of $z$ such as $z = 15$. The
function $C(z)$ is analytic at $z = \infty$, and the capacity $C = C(\infty)$ can be obtained in a
standard manner by Richardson extrapolation. For the example of Fig. 3, the capacity
was computed in this way.
The ideas of this section can be spelled out more fully in formulas, generally
integrals or double integrals involving the charge density distribution; see [13,21,29].
We omit these details here.
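To make the extrapolation idea concrete, here is a small Python sketch (not from the paper) applied to the one case where $\Phi$ is known exactly, the single interval $E = [-1,1]$ with $\Phi(z) = z + \sqrt{z^2-1}$; the sampled quantity is taken to be $C(z) = z/\Phi(z)$ as described above, and the extrapolated values converge to the capacity $1/2$:

```python
import numpy as np

def Phi_interval(z):
    # Exterior map for E = [-1,1] (inverse Joukowsky); branch chosen so |Phi| >= 1.
    r = np.sqrt(complex(z) * complex(z) - 1.0)
    return z + r if abs(z + r) >= abs(z - r) else z - r

# Sample C(z) = z / Phi(z) on a geometric sequence and apply one step of
# Richardson extrapolation (for this map the error behaves like 1/z^2).
zs = [2.0**k for k in range(3, 9)]                  # 8, 16, ..., 256
c = [abs(z / Phi_interval(z)) for z in zs]
extrap = [(4.0 * c[i + 1] - c[i]) / 3.0 for i in range(len(c) - 1)]
print(c[-1], extrap[-1])                            # both approach 0.5
```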
5. Further examples. Figures 4-7 present computed examples with
$K = 2$, $3$, $4$, and $5$ polygons. In each case, the critical level curve of $g(z)$ has been plotted together
with three level curves outside the critical one. In the case of Fig. 6, a fifth level curve
has also been plotted that corresponds to the highest of the three saddle points of
g(z) for that problem. If the small square on the right in that figure were not present,
then by symmetry, there would be two saddle points between the long quadrilaterals
at the same value of g(z). The square, however, breaks the symmetry, moving those
saddle points to slightly distinct levels (not
shown).
Figure 7 may puzzle the reader. Why does the critical level curve self-intersect at
four points, indicating four saddles at exactly the same level, even though there is no
left-right symmetry in the figure? The answer is that the coordinates of the squares in
this example have been adjusted to make this happen. The widths of the squares are
1, 2, 3, 4, and 5, with the left-hand edges of the first two held fixed.
This gave us a system of three nonlinear equations in three unknowns to solve for the
locations of the remaining three left-hand edges that would achieve the uniform critical
value. (This is an example of a generalized Schwarz-Christoffel parameter problem, in
which geometric constraints from various domains are mixed [27].) The locations that
satisfy the conditions are 10.948290, 20.326250, and 31.191359, the critical potential
value is 0.0698122, and the capacity is 10.292969.
6. Applications to polynomial approximation. Many uses of Green's functions
pertain to problems of polynomial approximation. The basis of this connection
is an elementary fact: if $p(z) = \prod_{j=1}^{n}(z - z_j)$, then
$\log|p(z)| = \sum_{j=1}^{n} \log|z - z_j|$, and thus the
size of a polynomial $p(z)$ is essentially the same as the value of the potential generated
Figure
4. Green's function for a region defined by two polygons. This computation
is identical in structure to that of Fig. 3.
Figure
5. Green's function for a region defined by three degenerate polygons with
empty interior. As it is exteriors that are conformally mapped, the degeneracy has no
effect on the mathematical problem or the method of solution.
by "point charges" with potentials log located at its roots fz j g. In the limit
as the number of roots and charges goes to 1, one obtains a continuous problem such
as (1). Generally speaking, the properties of optimal degree-n polynomials for various
approximation problems can typically be determined to leading order as n !1 from
the Green's function in the sense that we get the exponential factors right but not the
algebraic ones. Numerous results in this vein are set forth in the treatise of Walsh
Figure 6. Green's function for a region defined by four polygons. The square on the
side breaks the symmetry.
Figure 7. Green's function for a region defined by five polygons. The spacing of the
squares has been adjusted to make all the critical points lie at the same value $g_c$.
Perhaps the simplest approximation topic one might consider is the Chebyshev
polynomials $\{T_n\}$ associated with a compact set $E \subset \mathbf{C}$. For each $n$, $T_n$ is defined
as the monic polynomial of degree $n$ that minimizes $\|T_n\|_E = \max_{z \in E} |T_n(z)|$. The
following result indicates one of the connections between $T_n$ and the Green's function
for $E$.
Theorem 1. Let $E \subset \mathbf{C}$ be a compact set with capacity $C$. Then a unique
Chebyshev polynomial $T_n$ exists for each $n \ge 0$, and
$$\lim_{n\to\infty} \|T_n\|_E^{1/n} = C.$$
It follows from this theorem that the numerical methods of this paper enable us to
determine the leading order behavior of Chebyshev polynomials for polygons symmetrically
located on the real axis. For example, the nth Chebyshev polynomial of
the five-square region of Fig. 7 has size approximately $(10.292969)^n$. Other related
matters, such as generalized Faber polynomials [32], can also be pursued.
Theorem 1 is due to Szegő [25], who extended earlier work of Fekete; a proof can
be found for example in [29]. For the case in which $E$ is a smooth Jordan domain,
Faber showed that in fact $\|T_n\|_E/C^n \to 1$; when $E$ consists of two intervals,
Akhiezer showed that $\|T_n\|_E/C^n$ oscillates between two constants, and the starting
point of the paper of Widom [32] is the generalization of this result to a broad class
of sets $E$ with multiple components.
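As a quick illustration of Theorem 1 in the one classical case where everything is explicit, $E = [-1,1]$: the monic Chebyshev polynomials have uniform norm $2^{1-n}$, so $\|T_n\|^{1/n}$ tends to the capacity $1/2$ (a minimal check, independent of the paper's examples):

```python
# Theorem 1 for E = [-1,1]: the monic Chebyshev polynomial 2^(1-n) cos(n arccos x)
# has uniform norm 2^(1-n), so ||T_n||^(1/n) -> 1/2, the capacity of [-1,1].
for n in [5, 10, 20, 40, 80]:
    print(n, (2.0**(1 - n))**(1.0 / n))
```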
Instead of discussing Chebyshev polynomials further, we shall consider a different,
related approximation problem investigated by Walsh, Russell, and Fuchs, among
others [8,9,30,31]. Let $h_1, \ldots, h_K$ be entire functions, i.e., each $h_j$ is analytic
throughout the complex plane, and to keep the formulations simple, assume that
these functions are distinct. The following is a special case of the general complex
Chebyshev approximation problem:
Polynomial Approximation Problem. Given $n$, find a polynomial
$p_n$ of degree $n$ that minimizes the quantity
$$E_n = \max_{1 \le j \le K}\; \max_{z \in P_j} |h_j(z) - p_n(z)|. \qquad (7)$$
Note that we are concerned here with simultaneous approximation of distinct functions
on disjoint sets by a single polynomial. The approximations are measured only
on the polygons nothing is required in the "don't care'' space in-between. For
digital filtering, the polygons would typically be intervals corresponding to pass and
stop bands, and for matrix iterations, they would be regions approximately enclosing
various components of the spectrum or pseudospectra of the matrix.
According to results of approximation theory going back to Chebyshev, there
exists a polynomial pn that minimizes (7), and it is unique [2,3,30]. What is interesting
is how much about pn can be inferred from the Green's function. We summarize two
of the known facts about this problem as follows:
Theorem 2. Let fpng and fEng be the optimal polynomials and corresponding
errors for the Polynomial Approximation Problem, let g be the Green's function, and
let the critical level curve and the asymptotic convergence factor ae be defined as in
Section 4. Then
(a) $\limsup_{n\to\infty} E_n^{1/n} = \rho$.
(b) ("Overconvergence") pn (z) ! h j (z) as n !1, not only for z 2 P j , but for
all z in the region enclosed by the component of the critical level curve enclosing P j ,
with uniform convergence on compact subsets. Conversely, pn (z) does not converge
uniformly to h j (z) in any neighborhood of any point on the critical level curve.
These results are due in important measure to Walsh, and are proved in his
treatise [30]; see Theorems 4.5-4.7 and 4.11 and the discussions surrounding them.
Some of this material was presented earlier in a 1934 paper by Walsh and Helen G.
Russell [31], which attributes previous related work to Faber, Bernstein, M. Riesz,
Fej'er, and Szeg-o. The formulations as we have stated them are not very sharp. The
original results of Walsh are more quantitative, and they were sharpened further by
Fuchs, especially for the case in which E is a collection of intervals [8,9].
Theorem 2 concerns the exact optimal polynomials for the Polynomial Approximation
Problem, which are usually unknown and difficult to compute. Walsh showed
that the same conclusions apply more generally, however, to any sequence of polynomials
that is maximally convergent, which means, any sequence fpng whose errors
fEng as defined by (7) satisfy condition (a) of Theorem 2. Now then, how can we
construct maximally convergent sequences? Further results of Walsh establish that
this can be done via interpolation in suitably distributed points:
Theorem 3. Consider a sequence of sets of $n+1$ interpolation points, $n = 0, 1, 2, \ldots$, either
lying in $E$ or converging uniformly to $E$ as $n \to \infty$, and suppose that the potential they
generate in the sense of Section 4 converges uniformly to the Green's function $g(z)$ on
all compact subsets disjoint from $E$. Let $\{p_n\}$ be the sequence of polynomials of degrees
$n$ generated by interpolation in these points of a function $h(z)$ defined in
$\mathbf{C}$ with $h(z) = h_j(z)$ in a neighborhood of each $P_j$. This sequence of polynomials is
maximally convergent for the Polynomial Approximation Problem.
Theorem 4. The overconvergence result of Theorem 2(b) applies to any sequence
fpng of maximally convergent polynomials for the Polynomial Approximation Problem.
For proofs see Theorems 4.11 and 7.2 of [30] and the discussions nearby.
Theorem 3 implies that once the Green's function g(z) is known, it can be used to
construct maximally convergent polynomials by a variety of methods. The simplest
approach is to take $p_n$ to be the polynomial defined by interpolation of $h_j$ in the
pre-images under $\Phi$, along the boundary of $P_j$, of roots of unity in the w-plane; this
choice of points is expressed by (8) and (9).
Alternatively, and perhaps slightly more effective in practice, we may adjust the points
along the boundary of each polygon $P_j$. Given $n$, we determine by (8) and (9) the
number $n_j$ of interpolation points that will lie on the boundary of $P_j$. If $\theta_j$ and $\theta_j'$ are
the lower and upper edge angles along the unit circle in the w-plane corresponding
to $P_j$ (in the notation of Fig. 1), then we define the actual
interpolation points along the boundary of $P_j$ by (8) and (10).
Both of the choices (9) and (10) lead to maximal convergence as in Theorem 3.
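A minimal Python sketch of the interpolation step only (not of the conformal map): given interpolation points $z_k$ and target values, form $p_n$ and measure its error near the two sets. The points and the split $n_1, n_2$ below are placeholders on two real intervals, standing in for the conformally generated points of (8)-(10), which require $\Phi$:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def interpolate(points, values):
    # Coefficients (increasing powers) of the interpolating polynomial via a
    # Vandermonde solve; adequate for the small degrees used in this sketch.
    V = np.vander(points, increasing=True)
    return np.linalg.solve(V, values)

n1, n2 = 8, 12   # placeholder allocation of points to the two sets
pts = np.concatenate([
    np.cos(np.pi * (np.arange(n1) + 0.5) / n1) * 0.3 - 0.7,    # points in [-1, -0.4]
    np.cos(np.pi * (np.arange(n2) + 0.5) / n2) * 0.65 + 0.35,  # points in [-0.3, 1]
])
vals = np.concatenate([np.zeros(n1), np.ones(n2)])   # approximate 0 and 1 respectively
coef = interpolate(pts, vals)

xs = np.linspace(-1.0, -0.4, 200)
xp = np.linspace(-0.3, 1.0, 200)
print("max |p|   on stop band:", np.max(np.abs(P.polyval(xs, coef))))
print("max |1-p| on pass band:", np.max(np.abs(1 - P.polyval(xp, coef))))
```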
Figure 8 illustrates the ideas of Theorems 2 and 3, especially the phenomenon of
overconvergence. Here we continue with the same geometry as in Fig. 3 and construct
near-best approximations $p_n(z)$ by interpolation of the constants $-1$ on the hexagon
and $+1$ on the square in the points described by (8) and (9). These two constants
represent distinct entire functions, so the polynomials $\{p_n(z)\}$ cannot converge
globally. They converge on regions much larger than the polygons themselves, however, as
the figure vividly demonstrates: all the way out to the critical "figure-8" level curve,
in keeping with Theorem 2. The colors correspond to just the real part of pn (z), but
the imaginary part (not shown) looks similar, taking values close to zero inside the
figure-8 and growing approximately exponentially outside.
Our final example, motivated by the work of Mitchell, Shen and Strang on digital
filters, takes a special case in which E consists of two real intervals. Consider the
approximation problem defined by a "stop band" $P_1 = [-1, -0.4]$
and a "pass band" $P_2 = [-0.3, 1]$. That is, the problem is to find
polynomials $p_n$ of degree $n$ that minimize
$$E_n = \max\Big\{ \max_{x \in P_1} |p_n(x)|,\; \max_{x \in P_2} |1 - p_n(x)| \Big\}. \qquad (11)$$
Figure 8. Illustration of the overconvergence phenomenon of Theorem 2(b) and
Theorem 4. On the same two-polygon region as in Fig. 3, a polynomial $p(z)$ is sought
that approximates the values $-1$ on the hexagon and $+1$ on the square. For this
figure, $p$ is taken as the degree-29 near-best approximation defined by interpolation in
pre-images of roots of unity in the unit circle under the conformal map $\Phi$
(eqs. (8) and (9)); a similar plot for the exactly optimal polynomial would not look
much different. The figure shows $\mathrm{Re}\,p(z)$ by a blue-red color scale together with the
polygons, the interpolation points, and the figure-8 shaped critical level curve of the
Green's function. Not just on the polygons themselves, but throughout the two lobes
of the figure-8, $\mathrm{Re}\,p(z)$ comes close to the constant values $-1$ and $+1$. Outside, it
grows very fast.
Our Schwarz-Christoffel computations (elementary, since the more difficult first map
$f_1$ of Fig. 1 is the identity in this case) show that the asymptotic convergence factor
is $\rho = 0.947963$, the capacity is $0.499287$, the critical point and level are $z_c =
-0.350500$ and $g_c = 0.053440$, and the harmonic measures $\mu_1$ and $\mu_2$ are obtained as well.
For $n = 19$, Fig. 9 plots the near-best polynomial $p_n$ defined by interpolation
in the points defined by (8) and (10). The polynomial has approximately equiripple
form, suggesting that it is close to optimal. The horizontal dashed lines suggest the
error in this approximation, but it is clear they do not exactly touch the maximal-
error points of the curve. In fact, these dashed lines are drawn at distances $\pm\rho^n/\sqrt{n}$
from the line to be approximated, where $\rho$ is the asymptotic convergence factor; the
adjustment by $\sqrt{n}$ is suggested by the theorems of Fuchs [8]. In other words, these
lines mark a predicted error based on the Green's function, not the actual error of the
polynomial approximation obtained from it.
Figure 10 shows the actual optimal polynomial for this approximation problem,
with equiripple behavior. Something looks wrong here-the errors seem bigger than
Figure 9. The near-best polynomial $p_{19}(x)$ obtained from the Green's function by
interpolation in the 20 points (8), (10) of 0 in the stop band $[-1, -0.4]$ and 1 in the
pass band $[-0.3, 1]$. The polynomial is not optimal, but it is close.
Figure 10. Same as Fig. 9, but for the optimal polynomial $p_{19}$ computed by the
Remes algorithm. At first glance, the approximation looks worse. In fact, it is better,
since there are large errors in Fig. 9 at the inner edges of the stop and pass bands.
Figure 11. Comparison of Green's function predictions (solid curves) with exact
equiripple approximations (dots) for the example (11); the three panels show the
error, the proportion of interpolation points in the stop band, and the position of the
critical point. Details in the text.
in Fig. 9, not smaller! In fact, Fig. 9 is not as good as it looks. At the right edge of
the stop band and at the left edge of the pass band, for $x \approx -0.4$ and $x \approx -0.3$, there
are large errors. The numerical results line up as follows:
Optimal error $E_n$: 0.1176
$\rho^n/\sqrt{n}$ estimated from Green's function: 0.0831
Error in polynomial obtained from Green's function: 0.2030
In some engineering applications, of course, Fig. 9 might represent a better filter than
Fig. 10 after all.
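The prediction in the table can be reproduced directly from the numbers reported above, taking the dashed-line level to be $\rho^n/\sqrt{n}$ with $\rho = 0.947963$ and $n = 19$:

```python
import math

rho, n = 0.947963, 19                           # values reported for this example
print(rho**n / math.sqrt(n))                    # ~0.0831, the Green's-function estimate
print(math.exp(-n * 0.053440) / math.sqrt(n))   # same value via g_c, since rho = exp(-g_c)
```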
Figure 11 presents three comparisons between properties of the exactly optimal
polynomials $p_n(x)$ for this problem (solid dots) and predictions based on the Green's
function (curves). Plot (a) compares the error $E_n$ with the prediction $\rho^n/\sqrt{n}$ (the
distances between the horizontal dashed lines in Figs. 9 and 10). Evidently these
quantities differ by a factor of less than 2. Plot (b) compares the proportion of the
interpolation points that lie in the stop band with the harmonic measure $\mu_1$. The
agreement is as good as one could hope for. Finally, plot (c) compares the point $x$ in
$[-1, 1]$ at which the optimal polynomial crosses the halfway value (the vertical dashed line
of Fig. 10) with the critical point $z_c$ (the vertical dashed line of Fig. 9). Evidently
the Green's function makes a good prediction of this transition point for finite $n$ and
exactly the right prediction as $n \to \infty$, as it must by Theorem 2(b).
7. Weighted Green's functions for weighted approximation. In signal
processing applications, rather than a uniform approximation, one commonly wants
an approximation corresponding to errors weighted by different constants $W_j$ in the
different regions $P_j$. In closing we note that the techniques we have described can be
generalized to this case by considering a weighted Green's function in which (1b) is
replaced by the condition $g(z) = -n^{-1}\log W_j$ for $z$ on the boundary of $P_j$,
which depends on $n$. The function $g$ can now be determined by a conformal map
onto a semi-infinite strip whose end is jagged, with the K segments lying at real parts
\Gamman \Gamma1 log W j . Numerical experiments show that this method is effective, and very
general theoretical developments along these lines are described in the treatise of Saff
and Totik [21].
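As a small illustration of the jagged-end construction, for made-up weights $W_j$ and several degrees $n$ the $K$ end segments of the strip sit at real parts $-n^{-1}\log W_j$:

```python
import numpy as np

# Illustrative weights (made up); the real parts of the K end segments of the
# strip are -log(W_j)/n, so that g = -log(W_j)/n on the corresponding P_j.
W = np.array([1.0, 10.0, 100.0])
for n in [10, 20, 40]:
    print(n, -np.log(W) / n)
```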
--R
Topics in Geometric Function Theory
Theory of Approximation
Introduction to Approximation Theory
Algorithm 756: A MATLAB toolbox for Schwarz-Christoffel mapping
From potential theory to matrix iterations in six steps
Polynomial Based Iteration Methods for Symmetric Linear Systems
Topics in the Theory of Functions of One Complex Variable
On the degree of Chebyshev approximation on sets with several components
On Chebyshev approximation on several disjoint intervals
Lectures on Complex Approximation
Iterative Methods for Solving Linear Systems
Applied and Computational Complex Analysis
Analytic Function Theory
The small dispersion limit of the Korteweg-de Vries equation
of Elect.
Digital Filter Design
Chebyshev approximation for nonrecursive digital filters with linear phase
Theory and Applications of Digital Signal Processing
Iterative Methods for Sparse Linear Systems
Logarithmic Potentials with External Fields
The asymptotics of optimal (equiripple) filters
The potential theory of several intervals and its applications
Cambridge U.
Bermerkungen zu einer Arbeit von Herrn M.
Numerical computation of the Schwarz-Christoffel transformation
Analysis and design of polygonal resistors by conformal mapping
Potential Theory in Modern Function Theory
Interpolation and Approximation by Rational Functions in the Complex Domain
On the convergence and overconvergence of sequences of polynomials of best simultaneous approximation to several functions analytic in distinct regions
Extremal polynomials associated with a system of curves in the complex plane
--TR
--CTR
Tobin A. Driscoll, Algorithm 843: Improvements to the Schwarz--Christoffel toolbox for MATLAB, ACM Transactions on Mathematical Software (TOMS), v.31 n.2, p.239-251, June 2005 | krylov subspace iteration;chebyshev polynomial;potential theory;conformal mapping;polynomial approximation;schwarz-christoffel formula;digital filter;green's function |
341024 | Bounds on the Extreme Eigenvalues of Real Symmetric Toeplitz Matrices. | We derive upper and lower bounds on the smallest and largest eigenvalues, respectively, of real symmetric Toeplitz matrices. The bounds are first obtained for positive-definite matrices and then extended to the general real symmetric case. They are computed as the roots of rational and polynomial approximations to spectral, or secular, equations for the symmetric and antisymmetric parts of the spectrum; this leads to separate bounds on the even and odd eigenvalues. We also present numerical results. | Introduction
The study of eigenvalues of Toeplitz matrices continues to be of interest, due to the
occurrence of these matrices in a host of applications (see [4] for a good overview) including
linear prediction, a well-known problem in digital signal processing.
In this work we present improved bounds on the extreme eigenvalues of real symmetric
positive-definite Toeplitz matrices and describe their extension to matrices that are not
positive-definite. The computation of the smallest eigenvalue of such matrices was considered
in, e.g., [8], [16] and [19], whereas bounds were studied in [10], [14] and [22]. Among
the latter, the best bounds were obtained in [10]. Our approach is similar to the one used
in [8], [10], [19] and in [23], where it serves as a basis for computing other eigenvalues
as well. In this approach, the eigenvalues of the matrix are computed as the roots of
a one-dimensional rational function. The extreme eigenvalues can then be bounded by
computing bounds on the roots of the aforementioned equation, often called a spectral
equation, or secular equation (see [12]).
In [10] the bounds are obtained by using a Taylor series expansion for the secular
equation. We propose to improve this in two ways, first of all by considering "better"
secular equations (of a similar rational nature) and, secondly, by considering rational approximations
to the secular function, rather than a Taylor series, which is an inappropriate
approximation for a rational function. As an added advantage of our different equations,
we obtain separate bounds on the even and odd eigenvalues.
To put matters in perspective, we note that these "better" equations are hinted at in
[10] without being explicitly stated and they also appear in [9] in an equivalent form that
is less suitable for computation. No applications of these equations were considered in
either paper. In [16], an equation such as one of ours is derived in a different way which
does not take into account the spectral structure of the submatrices of the matrix, thereby
obscuring key properties of the equation. It is used there to compute the smallest even
eigenvalue and it too uses polynomial approximations.
The idea of a rational approximation for secular equations is not new. In a different
context, it was already used in, e.g., [5] and many other references, the most relevant to
this work being [19]. However, apparently because of the somewhat complicated nature of
their analysis, it seems that these rational approximations are rarely considered beyond
the first order. We consider a different approach that enables us to consider higher order
rational approximations, which we prove to be better than polynomial ones. To our knowl-
edge, the equations for the even and odd spectra have not been combined with rational
approximations to compute bounds and the resulting improvement is quite significant.
The paper is organized as follows. Section 1 contains definitions, a brief overview of
the properties of Toeplitz matrices and basic results for a class of rational functions. In
Section 2 we develop spectral equations and in Section 3 we consider the approximations
which lead, in Section 4, to the bounds on the extreme eigenvalues. Finally, we present
numerical results in Section 5. In Sections 2 and 3, we have included a summary of parts
of [21], to improve readability and to make the paper as self-contained as possible.
The identity matrix is denoted by I throughout this paper, without specifically indicating
its dimension, which is assumed to be clear from the context.
2 Preliminaries
A symmetric matrix $T \in \mathbf{R}^{n \times n}$ is said to be Toeplitz if its elements are constant along
each diagonal, i.e., $T_{ij}$ depends only on $|i - j|$, so that $T$ is determined by a single generating
vector in $\mathbf{R}^n$. Many early results about such matrices can be
found in, e.g., [3], [6] and [9].
Toeplitz matrices are persymmetric, i.e., they are symmetric about their southwest-
northeast diagonal. For such a matrix $T$, this is the same as requiring that $JT^TJ = T$,
where $J$ is the matrix with ones on its southwest-northeast diagonal and zeros everywhere
else (the $n \times n$ exchange matrix). It is easy to see that the inverse of a persymmetric
matrix is also persymmetric. A matrix that is both symmetric and persymmetric is called
doubly symmetric.
A symmetric vector $v$ is defined as a vector satisfying $Jv = v$, and an antisymmetric
vector $w$ as one that satisfies $Jw = -w$. If these vectors are eigenvectors, then their
associated eigenvalues are called even and odd, respectively. It was shown in [6] that,
given a real symmetric Toeplitz matrix $T$ of order $n$, there exists an orthonormal basis
for $\mathbf{R}^n$ composed of $n - \lfloor n/2 \rfloor$ symmetric and $\lfloor n/2 \rfloor$ antisymmetric eigenvectors of $T$,
where $\lfloor \alpha \rfloor$ denotes the integral part of $\alpha$. In the case of simple eigenvalues, this is easy to
see from the fact that, if $Tu = \lambda u$, then $T(Ju) = J(Tu) = \lambda(Ju)$, since $TJ = JT$.
Therefore $u$ and $Ju$ must be proportional, and therefore $u$ must be an eigenvector of $J$,
which means that either $Ju = u$ or $Ju = -u$. Finally, we note that for $\alpha \in \mathbf{R}$, the matrix
$(T - \alpha I)$ is symmetric and Toeplitz whenever $T$ is.
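A small Python check (not from the paper) of the doubly symmetric structure and of the count of symmetric and antisymmetric eigenvectors, for an arbitrary generating vector:

```python
import numpy as np
from scipy.linalg import toeplitz, eigh

n = 7
t = np.array([5.0, 2.0, 1.0, 0.5, 0.3, 0.2, 0.1])   # example generating vector
T = toeplitz(t)
J = np.fliplr(np.eye(n))                             # the exchange matrix
print(np.allclose(J @ T @ J, T))                     # double symmetry: J T J = T

w, V = eigh(T)
sym = sum(np.allclose(J @ V[:, k], V[:, k]) for k in range(n))
anti = sum(np.allclose(J @ V[:, k], -V[:, k]) for k in range(n))
print(sym, anti)   # n - n//2 symmetric and n//2 antisymmetric (eigenvalues simple here)
```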
We now state two lemmas, the proofs of which can be found in [20]. They concern
the relation between a class of rational functions, which will be considered later, and their
approximations.
Lemma 2.1 Let $g(\lambda)$ be a strictly positive and twice continuously differentiable real function,
defined on some interval $K \subset \mathbf{R}$. With $\gamma$ a nonzero integer, consider the real function
of $\lambda$: $a(b - \lambda)^{\gamma}$, where the parameters $a$ and $b$ are such that it interpolates $g$ up to first
order at a point $\hat\lambda \in K$. If $g(\lambda)g''(\lambda) - \frac{\gamma-1}{\gamma}\,(g'(\lambda))^2$
is positive (negative) for all $\lambda \in K$, then for all $\lambda$ such that $a(b - \lambda)^{\gamma} \ge 0$, the interpolant
lies below (above) the function $g(\lambda)$.
Lemma 2.2 The function $f(\lambda) = \sum_{j=1}^{m} \alpha_j (\beta_j - \lambda)^{\gamma}$,
with $m$ a positive and $\gamma$ a nonzero integer and the $\alpha_j$'s nonnegative, satisfies
$f(\lambda)f''(\lambda) - \frac{\gamma-1}{\gamma}\,(f'(\lambda))^2 \ge 0$ for all $\lambda$ such that $\beta_j - \lambda > 0$ for all $j$.
3 Spectral equations
In this section we derive various spectral, or "secular", equations for the eigenvalues of a
real symmetric Toeplitz matrix. Several of these results are not new, even though some
were not explicitly stated elsewhere or appear in a form less suitable for computation.
Let us consider the following partition of a symmetric Toeplitz matrix $T$:
$$T = \begin{pmatrix} t_{11} & t^T \\ t & Q \end{pmatrix},$$
where $t_{11}$ is the diagonal element of $T$, $Q$ is its trailing $(n-1)\times(n-1)$ Toeplitz submatrix,
and $t$ is the vector formed by the off-diagonal part of the first column. Then the following well-known
theorem (see, e.g., [8]) holds:
Theorem 3.1 The eigenvalues of $T$ that are not shared with those eigenvalues of $Q$, whose
associated eigenspaces are not entirely contained in $\{t\}^{\perp}$, are given by the solutions of the
equation
$$\lambda - t_{11} + t^T(Q - \lambda I)^{-1}t = 0. \qquad (1)$$
We define the function $\phi(\lambda)$ by
$$\phi(\lambda) = \lambda - t_{11} + t^T(Q - \lambda I)^{-1}t. \qquad (2)$$
Equation (1) is equivalent to
$$\lambda - t_{11} + \sum_{i=1}^{p} \frac{c_i^2}{\omega_i - \lambda} = 0, \qquad (3)$$
where $\omega_1 < \omega_2 < \cdots < \omega_p$ are the $p$ eigenvalues of $Q$ for which the associated eigenspace $U_{\omega_i}$ is not
entirely contained in the subspace $\{t\}^{\perp}$. Denote the orthonormal vectors which form a basis for $U_{\omega_i}$
by $\{u^{(i)}_k\}$, $k = 1, \ldots, \nu_i$, where $\nu_i$ is the dimension of $U_{\omega_i}$. Then the scalars $c_i^2$
are given by $c_i^2 = \sum_{k=1}^{\nu_i} (t^T u^{(i)}_k)^2$. The rational
function in (1), or (3), has $p$ simple poles, dividing the real axis into $p+1$ intervals, on
each of which it is monotonely increasing from $-\infty$ to $+\infty$. The solutions $\{\lambda_j\}_{j=1}^{p+1}$
of equation (3) therefore satisfy
$$\lambda_1 < \omega_1 < \lambda_2 < \omega_2 < \cdots < \omega_p < \lambda_{p+1},$$
i.e., the eigenvalues $\omega_i$ strictly interlace the eigenvalues $\lambda_j$, which is known as Cauchy's
interlacing theorem. These results are well-known and we refer to, e.g., [8], [10] and [23].
A positive-definite matrix $T$ will therefore certainly have an eigenvalue in the interval
$(0, \omega_1)$. An upper bound can then be found by approximating the function in (1) at $\lambda = 0$
in such a way that the approximation always lies below that function, and by subsequently
computing the root of this approximation. This is the approach used in [10], where the
approximations are the Taylor polynomials. However, such polynomials are inadequate
for rational functions and we shall return to this matter after deriving additional spectral
equations.
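The interlacing property can be verified numerically in a few lines; the generating vector below is an arbitrary illustration, not an example from the paper:

```python
import numpy as np
from scipy.linalg import toeplitz, eigvalsh

# Cauchy interlacing for the partition above: the eigenvalues of the trailing
# (n-1) x (n-1) submatrix Q interlace those of T.
t = np.array([4.0, 1.2, 0.7, 0.4, 0.2, 0.1])
T = toeplitz(t)
Q = T[1:, 1:]                      # also Toeplitz, with the same diagonals
lam = eigvalsh(T)                  # eigenvalues of T in increasing order
om = eigvalsh(Q)                   # eigenvalues of Q in increasing order
print(np.all(lam[:-1] <= om) and np.all(om <= lam[1:]))   # True
```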
It would appear that our previous partition of T is inappropriate, given the persym-
metry of Toeplitz matrices. We therefore consider the following, more natural, partition
for a matrix that is both symmetric and persymmetric: the first and last rows and columns
of $T$ are split off from its central part, so that $G$ is the central $(n - 2) \times (n - 2)$ symmetric Toeplitz
submatrix of $T$, generated by the first $n-2$ elements of the generating vector of $T$, and $\tilde t$ denotes
the vector formed by the entries of the first column of $T$ strictly between its first and last
elements. This partition is also used in Theorem 4 of [10], but no use
was made of this partition in the computation or bounding of eigenvalues in any of the
aforementioned references. In what follows, we denote the even and odd eigenvalues of
T by - e
, and the even and odd eigenvalues of G by - i and - i , respectively. We
then have the following theorem, which yields two equations: one for even and one for odd
eigenvalues of T .
Theorem 3.2 The even eigenvalues of $T$ that are not shared with those even eigenvalues
of $G$, whose associated eigenspaces are not entirely contained in $\{\tilde t\}^{\perp}$, are the solutions of
equation (4), whereas the odd eigenvalues of $T$ that are not shared with those odd eigenvalues of $G$,
whose associated eigenspaces are not entirely contained in $\{\tilde t\}^{\perp}$, are the solutions of equation (5).
Proof. The proof is based on finding the conditions under which has a
nontrivial solution for x. These conditions take the form of a factorable equation, which
then leads directly to equations (4) and (5). For more details, we refer to [21]. 2
To gain a better understanding of equations (4) and (5), let us assume for a moment
that all eigenvalues of G are simple (the general case does not differ substantially) and
denote an orthonormal basis of IR n\Gamma2 , composed of orthonormal eigenvalues of G, by
are symmetric
eigenvector - even eigenvalue pairs and (w
eigenvalue pairs. With ~
means that
r
a
s
r
a
s
Once more exploiting the orthonormality of the eigenvectors yields
r
a 2
Analogously we obtain
s
Equations (4) and (5) now become
r
a 2
s
which shows that the rational functions in each of equations (4) and (5) are of the same
form as the function in (1). It is also clear that T will certainly have an even eigenvalue
on (0; - 1 ) and an odd one on (0; - 1 ). These equations were also hinted at in [10] without
however deriving or stating them in an explicit way. The meaning of Theorem 3.2 is
therefore that those even and odd eigenvalues of G, whose associated eigenspaces are not
completely contained in f ~ t g ? , interlace, respectively, the even and odd eigenvalues of T
that are not shared with those eigenvalues of G. This result was obtained in [9] in a
different way, along with equivalent forms of equations (6) and (7). However, the use of
determinants there makes them less suitable for applications.
Finally, because of the orthonormality of the eigenvectors, equations (4) and (5) can
be written in a more symmetric way, as shown in the following two equations, which at
the same time define the functions $\phi_e(\lambda)$ and $\phi_o(\lambda)$ (equations (8) and (9)).
We note that equation (8) was also obtained in [16], where it was used to compute the
smallest eigenvalue which was known in advance to be even. However, the derivation of
the equation is quite different, concentrating exclusively on the smallest eigenvalue and
disregarding the spectral structure of the submatrices of T , which obscures important
properties of that equation.
To evaluate the functions $\phi(\lambda)$, $\phi_e(\lambda)$ and $\phi_o(\lambda)$ and their derivatives, as we will need
to do later on, we need to compute expressions of the form $s^T S^{-k} s$ for a positive integer
$k$, where $S$ is a real symmetric Toeplitz matrix and $s$ an associated vector.
In this work, we will use the Levinson-Durbin algorithm, abbreviated as
LDA. The original references for this algorithm are [11] and [18], but an excellent overview
of this and other Toeplitz-related algorithms can be found in [13]. Let us start with $k = 1$.
In this case we have to solve $Sw = -s$, where the minus sign in the right-hand side is by
convention. This system of linear equations is called the Yule-Walker (YW) system and
the LDA solves this problem recursively in $2n^2$ flops, where we define one flop as in [13],
namely a multiplication/division or an addition/subtraction. Because of the persymmetry
of $S$, once the Yule-Walker equations are solved, the solution of $Sx = -Js$ is simply $x = Jw$.
After solving the YW system, we have obtained $s^T S^{-1} s = -s^T w$; the quantity $s^T S^{-2} s$
can then be evaluated as $\|S^{-1}s\|_2^2 = \|w\|_2^2$ in $O(n)$ flops. To compute higher order derivatives, we
use a decomposition of S, supplied by the LDA itself in the process of solving the YW
system. Denoting by w (') the solution to the '-th dimensional YW subsystem, obtained
in the course of the LDA algorithm, this decomposition is given by U T
is a diagonal matrix and U is the upper triangular n \Theta n matrix, whose '-th column is
given by (Jw its diagonal elements are equal to one. This result is due
to [7]. We calculate s T S \Gamma3 s as follows:
s
This computation costs flops. To evaluate s T S \Gamma4 s, we compute first S \Gamma2 s as
once again, n 2 +O(n) flops, and then compute kS \Gamma2 sk 2
2 with
an additional O(n) flops. Roughly speaking, we can say that, for each increase of k by
one, we need to execute an additional n 2 flops.
Of course, there are other algorithms such as the fast Toeplitz solvers (see, e.g., [1]
and [2]), and these could be substituted for the LDA. However, this influences only the
complexity of computing our bounds and not the bounds themselves, which are the focus
of this work.
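A minimal sketch of the Levinson-Durbin recursion for a Yule-Walker system, written for the standard setup $S = \mathrm{toeplitz}(r_0,\ldots,r_{n-1})$ with right-hand side $-(r_1,\ldots,r_n)^T$ (this indexing convention is an assumption of the sketch), and validated against a direct solve:

```python
import numpy as np
from scipy.linalg import toeplitz

def durbin_yule_walker(r):
    """Solve toeplitz(r[:-1]) w = -r[1:] by the Levinson-Durbin recursion
    in O(n^2) flops; r = (r_0, ..., r_n) with toeplitz(r[:-1]) positive definite."""
    r = np.asarray(r, dtype=float)
    n = len(r) - 1
    w = np.zeros(n)
    w[0] = -r[1] / r[0]
    e = r[0] * (1.0 - w[0]**2)          # prediction error at order 1
    for k in range(1, n):
        kappa = -(r[k + 1] + np.dot(w[:k], r[k:0:-1])) / e   # reflection coefficient
        w[:k] = w[:k] + kappa * w[k - 1::-1]
        w[k] = kappa
        e = e * (1.0 - kappa**2)
    return w

r = np.array([5.0, 2.0, 1.0, 0.5, 0.3, 0.2])   # illustrative generating sequence
S = toeplitz(r[:-1])
w = durbin_yule_walker(r)
print(np.allclose(S @ w, -r[1:]))              # True
```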
4 Approximations
As we mentioned before, our bounds will be obtained by the roots of approximations to
the secular equations. In the case of the smallest eigenvalue, those approximations will be
shown to be dominated by the spectral function, so that their root will provide an upper
bound on the smallest eigenvalue. Bounds for the largest eigenvalue will be based on the
bounds for the smallest eigenvalue of a different matrix. In the derivation of the bounds,
we will assume that the matrices are positive-definite, even though a slight modification
suffices to extend our results to general symmetric matrices. All this will be explained in
Section 4.
We shall now construct approximations to our spectral equations. These will be of
two types: rational and polynomial, each of which will be of three kinds: first, second and
third order.
Throughout this section we will consider approximations, obtained by interpolation at
$\lambda = 0$, to a function $g$ of the form
$$g(\lambda) = \sum_{j=1}^{m} \frac{\alpha_j}{\beta_j - \lambda}, \qquad \alpha_j \ge 0,\quad 0 < \beta_1 < \beta_2 < \cdots < \beta_m.$$
Functions of this form occur in
all the equations considered in this paper. We note that $g$ has simple poles at the $\beta_j$'s
and is a positive, monotonely increasing convex function on the interval $(-\infty, \beta_1)$.
Our results are applicable for interpolation at a point $\bar\lambda$ different
from zero, simply by translating the origin to that point.
As mentioned before, we consider both rational and polynomial approximations of first,
second and third order and we will denote them, respectively, ae 1 , ae 2 , ae 3 for the rational
ones and - 1 , - 2 , - 3 for the polynomial ones. The polynomial approximations are nothing
but the Taylor polynomials of degree 1,2 and 3. We now define these approximations,
while cautioning that some of the parameters are defined "locally", i.e., the same letter
may have different meanings in different contexts, when no confusion is possible.
(1) First order rational. A function $\rho_1(\lambda) \triangleq a/(b - \lambda)$ such that $\rho_1(0) = g(0)$ and
$\rho_1'(0) = g'(0)$. The coefficients $a$ and $b$ are easily computed to be $a = g^2(0)/g'(0)$ and
$b = g(0)/g'(0)$. It is not difficult to see that $b$ is a weighted average of the $\beta_j$'s. We
therefore have that $a > 0$ and $b \ge \beta_1$, and therefore that $\rho_1$ is a positive, monotonely
increasing convex function on $(-\infty, \beta_1)$.
(2) Second order rational. A function $\rho_2(\lambda) \triangleq a + b/(c - \lambda)$ such that $\rho_2(0) = g(0)$,
$\rho_2'(0) = g'(0)$ and $\rho_2''(0) = g''(0)$. The coefficients $a$, $b$ and $c$ are then given by
$a = g(0) - 2g'^2(0)/g''(0)$, $b = 4g'^3(0)/g''^2(0)$ and $c = 2g'(0)/g''(0)$. It is clear immediately that
$b, c > 0$ and, similarly to what we had before, $c \ge \beta_1$. From Lemma 2.2 with $\gamma = -1$
we have that $a > 0$ as well. This same lemma also shows that the pole of $\rho_2$ lies
closer to $\beta_1$ than the pole of $\rho_1$. The approximation $\rho_2$ is therefore positive, monotonely
increasing and convex on $(-\infty, \beta_1)$.
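A numerical check of the first- and second-order rational interpolants, using the coefficient formulas stated above and a made-up function $g$ of the assumed form $\sum_j \alpha_j/(\beta_j - \lambda)$; by the inequalities of Theorem 4.1 below, $\rho_1 \le \rho_2 \le g$ on $(0, \beta_1)$:

```python
import numpy as np
from math import factorial

# Made-up data: g(lam) = sum_j alpha_j / (beta_j - lam)
alpha = np.array([0.5, 1.0, 2.0])
beta = np.array([1.0, 3.0, 7.0])

def g(lam, d=0):
    # d-th derivative of g at lam
    return float(np.sum(factorial(d) * alpha / (beta - lam)**(d + 1)))

g0, g1, g2 = g(0.0, 0), g(0.0, 1), g(0.0, 2)
rho1 = lambda lam: (g0**2 / g1) / (g0 / g1 - lam)                        # a/(b - lam)
rho2 = lambda lam: (g0 - 2*g1**2/g2) + (4*g1**3/g2**2) / (2*g1/g2 - lam)

lams = np.linspace(0.0, 0.99 * beta[0], 50)
print(all(rho1(l) - 1e-12 <= rho2(l) <= g(l) + 1e-12 for l in lams))     # True
```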
(3) Third order rational. A function $\rho_3(\lambda) \triangleq a/(b-\lambda) + c/(d-\lambda)$ such that
$\rho_3^{(i)}(0) = g^{(i)}(0)$, $i = 0, 1, 2, 3$. For convenience, let us set $v = 1/b$, $w = 1/d$ and
temporarily leave out the argument of $g$ and its derivatives, i.e., $g \equiv g(0)$, $g' \equiv g'(0)$, etc.
To compute the coefficients of $\rho_3$, we then have to solve the following system of equations
in $a$, $c$, $v$ and $w$:
$av + cw = g$, $\quad av^2 + cw^2 = g'$, $\quad av^3 + cw^3 = g''/2$, $\quad av^4 + cw^4 = g'''/6$. (15)
From Cramer's rule, we have explicit expressions for $a$ and $c$ in terms of $v$ and $w$.
The first equation in (15) yields $a$; the remaining equations in (15) then
give a relation (16) between $v$, $w$ and the derivatives of $g$.
By considering $c$ instead of $a$, the analog of (15) yields a second relation (17).
Equations (16) and (17) give, after some algebra, expressions (18) and (19) for $v+w$ and $vw$ in terms of $g, g', g'', g'''$.
This means that $v$ and $w$ are the solutions for $x$ of the quadratic equation $x^2 - sx + q = 0$, with $s = v+w$ and $q = vw$ given by (18) and (19).
Lemma 2.2 then gives $q = vw > 0$, which in turn means that $v, w > 0$. As a direct consequence of equations (16),
(17), (18) and (19), we then have that either $w < g'/g$ and $v > g'''/3g''$, or vice versa.
This is the same as saying that $d > g/g'$ and $b < 3g''/g'''$, or vice versa. In both cases,
using these inequalities in the expressions for $a$ and $c$ shows that $a, c > 0$. All the above
put together means that $\rho_3$ is a positive, monotonely increasing convex function on the
interval $(-\infty, \min\{b, d, \beta_1\})$. The minimum is, in fact, $\beta_1$, but this will be shown in the
next theorem. We note therefore that the smallest pole of $\rho_3$ lies between $\beta_1$ and the pole
of $\rho_2$.
(4) First-order polynomial. A function $\pi_1(\lambda) \triangleq a + b\lambda$ such that $\pi_1(0) = g(0)$ and
$\pi_1'(0) = g'(0)$. The coefficients $a$ and $b$ are easily computed to be $a = g(0)$ and
$b = g'(0)$, which are all positive. The function $\pi_1$ is therefore a linear and increasing function
everywhere.
(5) Second-order polynomial. A function $\pi_2(\lambda) \triangleq a + b\lambda + c\lambda^2$ such that
$\pi_2^{(i)}(0) = g^{(i)}(0)$, $i = 0, 1, 2$. The coefficients $a$, $b$ and $c$ are given by $a = g(0)$, $b = g'(0)$
and $c = g''(0)/2$, and they are all positive. The function $\pi_2$ is therefore an increasing and
convex function for $\lambda \ge 0$.
(6) Third-order polynomial. A function $\pi_3(\lambda) \triangleq a + b\lambda + c\lambda^2 + d\lambda^3$ such that
$\pi_3^{(i)}(0) = g^{(i)}(0)$, $i = 0, 1, 2, 3$. The coefficients $a$, $b$, $c$ and $d$ are
$a = g(0)$, $b = g'(0)$, $c = g''(0)/2$ and $d = g'''(0)/6$ and they are all positive. The function
$\pi_3$ is therefore an increasing and convex function for $\lambda \ge 0$.
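To make the first order rational construction concrete, the following small Python sketch builds $\rho_1$ from $g(0)$ and $g'(0)$ for a function $g$ of the pole-fraction form assumed in this section. The particular weights and poles are made-up illustrative values, and the coefficient formulas $a = g^2(0)/g'(0)$, $b = g(0)/g'(0)$ follow from the two interpolation conditions.

```python
import numpy as np
from math import factorial

# Hypothetical example: g(lam) = sum_j w_j / (beta_j - lam), with poles beta_j > 0.
beta = np.array([1.0, 2.5, 4.0])
w = np.array([0.3, 0.8, 0.5])

def g(lam, order=0):
    """Value (order=0) or derivative of the given order of g at lam < beta.min()."""
    return float(np.sum(factorial(order) * w / (beta - lam) ** (order + 1)))

# First order rational approximant rho1(lam) = a / (b - lam),
# determined by rho1(0) = g(0) and rho1'(0) = g'(0).
a = g(0.0) ** 2 / g(0.0, 1)
b = g(0.0) / g(0.0, 1)

def rho1(lam):
    return a / (b - lam)

# Check the interpolation condition and the domination g >= rho1 on (0, beta_1).
assert np.isclose(rho1(0.0), g(0.0))
grid = np.linspace(0.0, 0.99 * beta.min(), 50)
assert all(g(t) >= rho1(t) - 1e-12 for t in grid)
print("pole of rho1:", b, ">= beta_1 =", beta.min())
```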
Theorem 4.1 The following inequalities hold on the interval $(0, \beta_1)$:
$\pi_i(\lambda) \le \rho_i(\lambda) \le g(\lambda)$, $i = 1, 2, 3$, (22)
$\rho_1(\lambda) \le \rho_2(\lambda) \le \rho_3(\lambda)$, (23)
$\pi_1(\lambda) \le \pi_2(\lambda) \le \pi_3(\lambda)$. (24)
Proof. We first remark that some inequalities will be proved on the larger interval
$(-\infty, \beta_1)$. Let us begin with inequalities (22). The function $\rho_1(\lambda)$ is a first order rational
approximation to $g(\lambda)$ at $\lambda = 0$. It is then immediate from Lemmas 2.1 and 2.2 that
$\rho_1(\lambda) \le g(\lambda)$. The linear approximation $\pi_1(\lambda)$ to $g(\lambda)$ at $\lambda = 0$ is also the
linear approximation to $\rho_1(\lambda)$ at that same point. Since $\rho_1(\lambda)$ is a convex function on
$(-\infty,\beta_1)$, $\pi_1$ must lie below it on the same interval. This concludes the proof for $i = 1$.
For $i = 2$ we have that $\rho_2(\lambda) \equiv a + b/(c-\lambda)$ approximates $g(\lambda)$ at $\lambda = 0$ up to second order.
This is the same as saying that $b/(c-\lambda)^2$ approximates $g'(\lambda)$ up to first order. Lemmas
2.1 and 2.2 yield that $b/(c-\lambda)^2 \le g'(\lambda)$, from which it then follows,
with $0 \le \sigma \le \lambda$, that
$\int_0^{\lambda} b/(c-\sigma)^2 \, d\sigma \le \int_0^{\lambda} g'(\sigma) \, d\sigma$.
Integrating, we obtain $b/(c-\lambda) - b/c \le g(\lambda) - g(0)$. Adding and subtracting $a$ in the
left-hand side and using the function value interpolation condition concludes the proof for
$i = 2$. We note that $\pi_2(\lambda)$ interpolates both $g(\lambda)$ and $\rho_2(\lambda)$ at $\lambda = 0$ up to second order.
This means that $\pi_2'(\lambda) = b + 2c\lambda$ approximates
$\rho_2'(\lambda)$ up to first order at $\lambda = 0$, and
$\rho_2'$ is convex on that interval. We then have that $b + 2c\lambda \le \rho_2'(\lambda)$ on that same
interval and we can integrate back to obtain the desired inequality for $\pi_2$ on $(0, \beta_1)$.
For us first consider the difference
Using equations (16)-(19), it is not hard to show that b and d cannot be equal to fi 1 or
which we excluded. The function h(-) must therefore have m+1 roots.
roots, so that both b and d must lie inside the interval (fi 1
the number of roots to balance out (so that b and d can "destroy" existing roots of g on
that interval). This means that h cannot have other roots on the interval (0; fi 1 ) and must
therefore have the same sign throughout that interval. Since h(-) ! +1 as
1 , we
obtain that ae 3 (- g(-) on (0; fi 1 ). Turning now to - 3 (-), we have, similarly to the case
to third order at which is equivalent
to 2c
3 up to first order at
ae 00
3 (-) is convex on that same interval, which means that on that interval 2c+6d- ae 00
3 (-).
Integrating back twice, we obtain once again our inequality for - 2 (0; fi 1 ).
Let us now consider inequalities (23), starting with the inequality between $\rho_1$ and $\rho_2$.
We first note that $\rho_1$ approximates $\rho_2$ up to first order. We also have the explicit
expression (25) for the difference between the two approximations.
Taking into account that $a$ and $b$ are positive and that $c > 0$, one can easily see that the
last term in the right-hand side of (25) is positive on $(-\infty,\beta_1)$, whereas the sum of the
first two terms is positive because of Lemma 2.2. This
then shows that $\rho_1(\lambda) \le \rho_2(\lambda)$ on $(-\infty,\beta_1)$.
For the inequality between $\rho_2$ and $\rho_3$, it suffices to note that $\rho_2$ approximates $\rho_3$ up to
second order and that $\rho_3$ is a function of the same form and with the same properties as
$g$. An argument analogous to the one used to prove that $\rho_2(\lambda) \le g(\lambda)$ then also yields that $\rho_2(\lambda) \le \rho_3(\lambda)$.
Inequalities (24) all follow by an analogous argument to the one used to prove the
inequalities between the $\rho_i(\lambda)$, after observing that $\pi_1$ is the first-order
approximation to $\pi_2$, which itself is the second-order approximation to $\pi_3$. 2
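As a quick numerical illustration of Theorem 4.1, the sketch below evaluates $\pi_1, \pi_2, \pi_3$, $\rho_1$ and $\rho_2$ at sample points of $(0, \beta_1)$ for a hypothetical $g$ of the pole-fraction form; the weights, poles and sampling grid are arbitrary choices, so this is a sanity check rather than a proof.

```python
import numpy as np
from math import factorial

beta = np.array([1.5, 2.0, 5.0])   # hypothetical poles
w = np.array([0.4, 0.7, 1.1])      # hypothetical positive weights

def dg(order, lam=0.0):
    return float(np.sum(factorial(order) * w / (beta - lam) ** (order + 1)))

g0, g1, g2, g3 = (dg(k) for k in range(4))

def g(lam): return float(np.sum(w / (beta - lam)))
def pi1(lam): return g0 + g1 * lam
def pi2(lam): return pi1(lam) + 0.5 * g2 * lam ** 2
def pi3(lam): return pi2(lam) + g3 * lam ** 3 / 6.0
def rho1(lam): return (g0 ** 2 / g1) / (g0 / g1 - lam)
def rho2(lam):
    c = 2.0 * g1 / g2
    b = 4.0 * g1 ** 3 / g2 ** 2
    a = g0 - 2.0 * g1 ** 2 / g2
    return a + b / (c - lam)

for lam in np.linspace(0.05, 0.95 * beta.min(), 20):
    vals = [pi1(lam), pi2(lam), pi3(lam), g(lam)]
    assert all(x <= y + 1e-10 for x, y in zip(vals, vals[1:]))   # (24) and pi3 <= g
    assert pi1(lam) <= rho1(lam) <= rho2(lam) <= g(lam) + 1e-10  # (22), (23)
print("orderings of Theorem 4.1 hold on the sampled grid")
```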
5 Bounds
We now finally derive our bounds on the extreme eigenvalues and we start by considering
the smallest eigenvalue. We first consider matrices that are positive-definite. Upper
bounds are then obtained by computing the roots of the various approximations at $\lambda = 0$
to the secular equations $\phi(\lambda)$, $\phi_e(\lambda)$ and $\phi_o(\lambda)$, which were defined in (2), (8) and (9). As
we have shown before, all these equations are of the same form,
involving a function $g(\lambda)$ of the form defined in (10) and a constant $\alpha \in \mathbb{R}$. Their approximations are obtained
by replacing g(-) with the various approximations that were described in the previous
section.
We first define the following:
where all quantities are as previously defined. Once again, all these functions are of the
same form as the function g, defined in (10).
The bounds obtained by replacing $f(\lambda)$ in $\phi(\lambda)$ with $\rho_1$, $\rho_2$ and $\rho_3$ will be denoted $r_1$,
$r_2$ and $r_3$, respectively. Those bounds obtained by replacing $f(\lambda)$ with $\pi_1$, $\pi_2$ and $\pi_3$ will
be denoted $p_1$, $p_2$ and $p_3$, respectively. As an example, this means that $r_1$ is the root of
the equation obtained from $\phi(\lambda) = 0$ by replacing $f(\lambda)$ with $\rho_1(\lambda)$,
which is obtained by solving a quadratic equation. To compute $r_3$ and $p_3$, a cubic equation
needs to be solved, which can either be accomplished in closed form, or by an iterative
method such as Newton's method (at negligible cost, compared to the computation of
$g(0)$).
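For illustration only, the following sketch computes a first order rational bound under the assumption, not stated explicitly in this excerpt, that the secular equation has the common Toeplitz form $\phi(\lambda) = \alpha - \lambda - g(\lambda) = 0$; under that assumption, replacing $g$ by $\rho_1$ indeed reduces the computation to a quadratic, as remarked above. The numerical values of $\alpha$, $g(0)$ and $g'(0)$ are hypothetical.

```python
import numpy as np

def rational1_bound(alpha, g0, g1):
    """Smallest root of (alpha - lam) * (b - lam) - a = 0, where
    rho1(lam) = a / (b - lam) with a = g0**2 / g1 and b = g0 / g1.
    Assumes the (hypothetical) secular form phi(lam) = alpha - lam - g(lam)."""
    a, b = g0 ** 2 / g1, g0 / g1
    # (alpha - lam)(b - lam) - a = lam^2 - (alpha + b) lam + (alpha * b - a)
    coeffs = [1.0, -(alpha + b), alpha * b - a]
    roots = np.roots(coeffs)       # real roots: discriminant = (alpha - b)^2 + 4a > 0
    return float(np.min(roots.real))

# Hypothetical data: alpha = 3, g(0) = 1.25, g'(0) = 1.0625.
print("r1 =", rational1_bound(3.0, 1.25, 1.0625))
```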
This general bound-naming procedure is now applied to $\phi_e(\lambda)$ and $\phi_o(\lambda)$:
the bounds on the smallest even eigenvalue are obtained by approximating $f_e(\lambda)$ in $\phi_e(\lambda)$
and they will be denoted by $r^e_1$, $r^e_2$, $r^e_3$ for the rational approximations and $p^e_1$,
$p^e_2$ and $p^e_3$ for the polynomial approximations. For the odd eigenvalues, approximating $f_o(\lambda)$ in $\phi_o(\lambda)$
yields the bounds $r^o_1$, $r^o_2$, $r^o_3$ and $p^o_1$, $p^o_2$, $p^o_3$ for the rational and polynomial approximations,
respectively. If one uses the LDA for the computation of the spectral equations and their
derivatives as was discussed in Section 3, then the computational cost for first, second and
third order bounds is 2n 2 +O(n), 3n 2 +O(n) and 4n 2 +O(n) flops, respectively.
One of the advantages of the rational approximations is that, contrary to polynomial
approximations, they always generate bounds that are guaranteed not to exceed the largest
pole of $f$, $f_e$ or $f_o$. This applies, as is obvious from their properties, regardless of
how badly behaved the matrix is. In addition to providing separate bounds on the even
and odd eigenvalues, the approximations to the functions OE e (-) and OE should be more
accurate than those for the function OE(-) since now only roughly half of the terms appear
in the function to be interpolated. There is also the additional benefit that both the
smallest and the largest roots are now farther removed from the nearest singularity in the
equation so that once again an improved approximation can be expected. All this is borne
out by our numerical experiments.
Better upper bounds can be obtained if a positive lower bound is known on the smallest
eigenvalue. The only difference in that case is that the approximations are performed at
that lower bound, rather than at As was mentioned before, all our results can be
applied in this case to the same spectral equations, but with the origin translated to the
lower bound.
Before presenting numerical comparisons in the next section, we will first establish a
theoretical result. We denote the smallest even and odd eigenvalues of $T$ by $\lambda^e_{\min}$ and $\lambda^o_{\min}$,
respectively, and its smallest eigenvalue by $\lambda_{\min}$. We note that $\lambda_{\min} = \min\{\lambda^e_{\min}, \lambda^o_{\min}\}$.
The theoretical comparisons between the various bounds are then given in the following
theorem:
Theorem 5.1 The upper bounds on the smallest eigenvalue of T satisfy:
$\lambda_{\min} \le r_3 \le r_2 \le r_1$, $\quad r_i \le p_i$ for $i = 1, 2, 3$, $\quad p_3 \le p_2 \le p_1$.
Proof. The proof follows immediately from the properties of the approximations that
define the bounds, which were proved in Theorem 4.1. 2
This theorem shows that the bounds, obtained by rational approximations, are always
superior to those obtained by polynomial approximations, which should not be surprising,
as the functions they approximate are themselves rational. It also confirms the intuitive
result that, as the order of the approximations increases, then so does the accuracy of the
bounds. This also means that the bounds obtained in [10] (the best currently available),
which are all based on polynomial approximations and correspond to our
are inferior to those produced by rational approximations.
Let us now consider lower bounds on the largest eigenvalue. The largest eigenvalue of
T can be bounded from below, given an upper bound ffi on it. This can be accomplished by
translating the origin in the spectral equations to ffi, replacing the resulting new variable
by its opposite and multiplying the equation by \Gamma1, thus obtaining the exact same type
of spectral equation for the matrix ffiI \Gamma T , which is always positive definite. Computing
an upper bound on the smallest eigenvalue of this new matrix then leads to the desired
bound on the largest eigenvalue, since $\lambda_{\min}(\delta I - T) = \delta - \lambda_{\max}(T)$. A possible value
for $\delta$ is the Frobenius norm of $T$, defined as (see [13]):
$\|T\|_F = \big( \sum_i \sum_j |t_{ij}|^2 \big)^{1/2}$,
which for a Toeplitz matrix can be computed in O(n) flops. An exact analog of Theorem
5.1 holds for the maximal eigenvalues.
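The following sketch shows the O(n) computation of the Frobenius norm of a symmetric Toeplitz matrix from its first row and the translation trick $\lambda_{\max}(T) = \delta - \lambda_{\min}(\delta I - T)$; the dense eigenvalue calls are only there to verify the identities on a small made-up example, since in practice one would use the upper bounds on $\lambda_{\min}(\delta I - T)$ described above.

```python
import numpy as np
from scipy.linalg import toeplitz

def frob_from_first_row(t):
    """||T||_F for the symmetric Toeplitz matrix with first row t, in O(n)."""
    n = len(t)
    k = np.arange(1, n)
    return np.sqrt(n * t[0] ** 2 + np.sum(2.0 * (n - k) * t[1:] ** 2))

t = np.array([4.0, 1.0, 0.5, 0.25, 0.1])   # hypothetical generating row
T = toeplitz(t)
delta = frob_from_first_row(t)

lam_max = np.linalg.eigvalsh(T)[-1]
lam_min_shifted = np.linalg.eigvalsh(delta * np.eye(len(t)) - T)[0]
assert np.isclose(delta, np.linalg.norm(T, "fro"))
assert np.isclose(lam_max, delta - lam_min_shifted)
print("delta =", delta, ">= lam_max =", lam_max)
```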
To conclude this section we briefly consider the fact that the same procedure for
obtaining bounds for real symmetric positive-definite matrices can be used for general real
symmetric matrices as well, provided that a lower bound on the smallest eigenvalue is
available. Any known lower bound can be used (see, e.g., [10] or [14]), or one could be
obtained by a process where a trial value is iteratively lowered until it falls below the
smallest eigenvalue. Calling such a trial value ffl, Sylvester's law of inertia can then be
applied to the decomposition of (T \Gamma fflI ), which was used in Section 3, to determine its
position relative to the smallest eigenvalue of T . Such a procedure is extensively described
and used in, e.g., [8], [15] and [23], and we refer to these papers for further details.
6 Numerical results
In this section we will test our methods on four classes of positive semi-definite matrices.
For each class and for each of the dimensions we have run 200 experiments
to examine the quality of the bounds on the smallest eigenvalue and 200 separate
experiments doing the same for the maximal eigenvalue. The tables report the average
values (with their standard deviations in parentheses) of the bound to eigenvalue ratio
for the smallest eigenvalue and eigenvalue to bound ratio for the largest eigenvalue. The
closer this ratio is to one, the better the bound. Each entry in the table has a left and
right part, separated by a slash. The left part pertains to the use of OE e and OE o (i.e., the
bound is obtained by taking the minimum of the bounds on the even and odd extreme
eigenvalues), whereas the right part represents the use of OE. The figures represent the
distribution of the ratios among the 200 experiments, with the total range of the ratios
divided into five "bins". The frequency associated with those bins is then graphed versus
their midpoints. We note that the x-axis is scaled differently for each figure to accommodate
the entire range of ratios. The solid line represents the bounds obtained by using OE e and
the dashed line represents the use of OE. The dimension is indicated by n. The
polynomial approximation-based bounds are denoted by "Taylor", followed by the order
of the approximation. Let us now list the four classes of matrices.
(1) CVL matrices. These are matrices defined in [8] (whence their name) as
where n is the dimension of T , - is such that T
These matrices are positive semi-definite of rank two. We generated random matrices of
this kind by taking the value of ' to be uniformly distributed on [0; 1].
(2) KMS matrices. These are the Kac-Murdock-Szegő matrices (see [17]), defined as
$T_{ij} = \nu^{|i-j|}$, $0 < \nu < 1$, where $n$ is the dimension of the matrix. These matrices
are positive definite and are characterized by the fact that their even and odd eigenvalues
lie extremely close together. Random matrices of this kind were generated by taking the
value of $\nu$ to be uniformly distributed on $[0, 1]$.
(3) UNF matrices. We define UNF matrices by first defining a random vector $v$ of length $n$
whose components are uniformly distributed on $[-10, 10]$. We then modify that vector
by adding to its first component 1.1 times the absolute value of the smallest eigenvalue of
the Toeplitz matrix generated by v. Finally, the vector v is normalized by dividing it by
its first component, provided that it is different from zero. The Toeplitz matrix generated
by this normalized vector is then called an UNF matrix. From their construction, these
matrices are positive semi-definite.
matrices. We define NRM matrices exactly like UNF matrices, the only
difference being that the random vector v now has its components normally distributed
with mean and standard deviation equal to 0 and 10, respectively. As in the uniform case,
these matrices are positive semi-definite.
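The KMS and UNF constructions are simple enough to state as code; the sketch below follows the descriptions given here (using the KMS entry formula $T_{ij} = \nu^{|i-j|}$) and only checks positive (semi-)definiteness numerically on small made-up instances.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)

def kms_matrix(n, nu):
    """Kac-Murdock-Szego matrix T_ij = nu**|i-j|."""
    return toeplitz(nu ** np.arange(n))

def unf_matrix(n):
    """UNF matrix as described: uniform generating vector, shifted to be positive
    semi-definite, then normalized by its first component."""
    v = rng.uniform(-10.0, 10.0, n)
    lam_min = np.linalg.eigvalsh(toeplitz(v))[0]
    v[0] += 1.1 * abs(lam_min)
    if v[0] != 0.0:
        v = v / v[0]
    return toeplitz(v)

for A in (kms_matrix(8, rng.uniform()), unf_matrix(8)):
    print("lam_min =", np.linalg.eigvalsh(A)[0])   # nonnegative (up to rounding)
```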
Theoretically, some of the matrices generated in the experiments might be singular,
although we never encountered this situation in practice. A typical distribution of the
spectra (even on top, odd at the bottom) for these four classes of matrices is shown in
Figure 1.
Figure 1: Typical distribution of the even and odd spectra for the four classes of test matrices (panels: CVL, KMS, UNF, NRM; n = 200).
The experiments clearly show that exploiting the even and odd spectra yields better
bounds. The magnitude of the improvement diminishes the closer the even and odd
eigenvalues are lying together, as is obviously true for the KMS matrices. The superiority
of rational bounds is also clearly demonstrated, both in their smaller average ratios and
smaller standard deviations. They may yield a ratio of up to three times smaller than
polynomial ones and in many cases, lower-order rational bounds are better than higher-order
polynomial ones. This is especially true for larger matrix dimensions. These results
also confirm our previous remark that the bounds obtained in [10] are inferior to rational
approximation-based bounds.
Although we did not report results on bounds for the even and odd eigenvalues sep-
arately, we did verify that they are virtually identical to those obtained for the smallest
eigenvalue proper.
All experiments were run in MATLAB on a Pentium II 233MHz machine.
We conclude that computing separate, rational approximation-based, bounds on the
even and odd spectra leads to a significant improvement over existing bounds.
Figure 2: Distribution of bound/eigenvalue ratio for the minimal eigenvalue of CVL matrices with dimension n=100,200,400.
Table 1: Bound to eigenvalue ratio for the minimal eigenvalue of CVL matrices (by Method and Dimension 100, 200, 400).
Figure 3: Distribution of eigenvalue/bound ratio for the maximal eigenvalue of CVL matrices with dimension n=100,200,400.
Table 2: Eigenvalue to bound ratio for the maximal eigenvalue of CVL matrices (by Method and Dimension 100, 200, 400).
Figure 4: Distribution of bound/eigenvalue ratio for the minimal eigenvalue of KMS matrices with dimension n=100,200,400.
Table 3: Bound to eigenvalue ratio for the minimal eigenvalue of KMS matrices (by Method and Dimension 100, 200, 400).
Figure 5: Distribution of eigenvalue/bound ratio for the maximal eigenvalue of KMS matrices with dimension n=100,200,400.
Table 4: Eigenvalue to bound ratio for the maximal eigenvalue of KMS matrices (by Method and Dimension 100, 200, 400).
Figure 6: Distribution of bound/eigenvalue ratio for the minimal eigenvalue of UNF matrices with dimension n=100,200,400.
Table 5: Bound to eigenvalue ratio for the minimal eigenvalue of UNF matrices (by Method and Dimension 100, 200, 400).
Figure 7: Distribution of eigenvalue/bound ratio for the maximal eigenvalue of UNF matrices with dimension n=100,200,400.
Table 6: Eigenvalue to bound ratio for the maximal eigenvalue of UNF matrices (by Method and Dimension 100, 200, 400).
Figure 8: Distribution of bound/eigenvalue ratio for the minimal eigenvalue of NRM matrices with dimension n=100,200,400.
Table 7: Bound to eigenvalue ratio for the minimal eigenvalue of NRM matrices (by Method and Dimension 100, 200, 400).
Figure 9: Distribution of eigenvalue/bound ratio for the maximal eigenvalue of NRM matrices with dimension n=100,200,400.
Table 8: Eigenvalue to bound ratio for the maximal eigenvalue of NRM matrices (by Method and Dimension 100, 200, 400).
--R
The generalized Schur algorithm for the superfast solution of Toeplitz systems.
Numerical experience with a superfast Toeplitz solver.
Eigenvectors of certain matrices.
Stability of methods for solving Toeplitz systems of equations.
Eigenvalues and eigenvectors of symmetric centrosymmetric matrices.
The numerical stability of the Levinson-Durbin algorithm for Toeplitz systems of equations.
Computing the minimum eigenvalue of a symmetric positive definite Toeplitz matrix.
Spectral properties of finite Toeplitz matrices.
Bounds on the extreme eigenvalues of positive definite Toeplitz matrices.
The fitting of time series model.
Some modified matrix eigenvalue problems.
Matrix Computations.
Simple bounds on the extreme eigenvalues of Toeplitz matrices.
Toeplitz eigensystem solver.
Symmetric solutions and eigenvalue problems of Toeplitz systems.
On the eigenvalues of certain Hermitian forms.
The Wiener RMS (root mean square) error criterion in filter design and prediction.
The minimum eigenvalue of a symmetric positive-definite Toeplitz matrix and rational Hermitian interpolation
A unifying convergence analysis of second-order methods for secular equations
Spectral functions for real symmetric Toeplitz matrices.
A note on the eigenvalues of Hermitian matrices.
Numerical solution of the eigenvalue problem for Hermitian Toeplitz matrices.
--TR | toeplitz matrix;spectral equation;bounds;rational approximation;secular equation;eigenvalues |
341067 | Analysis of Iterative Line Spline Collocation Methods for Elliptic Partial Differential Equations. | In this paper we present the convergence analysis of iterative schemes for solving linear systems resulting from discretizing multidimensional linear second-order elliptic partial differential equations (PDEs) defined in a hyperparallelepiped $\Omega$ and subject to Dirichlet boundary conditions on some faces of $\Omega$ and Neumann on the others, using line cubic spline collocation (LCSC) methods. Specifically, we derive analytic expressions or obtain sharp bounds for the spectral radius of the corresponding Jacobi iteration matrix and from this we determine the convergence ranges and compute the optimal parameters for the extrapolated Jacobi and successive overrelaxation (SOR) methods. Experimental results are also presented. | Introduction
.
We consider the following second order linear elliptic PDE
$Lu \equiv \sum_{i=1}^{k} \alpha_i \frac{\partial^2 u}{\partial x_i^2} + \sum_{i=1}^{k} \beta_i \frac{\partial u}{\partial x_i} + \gamma u = f$ in $\Omega$, (1a)
subject to Dirichlet and/or Neumann boundary conditions
$Bu = g$ on the faces of $\Omega$, (2a)
where $Bu$ is $u$ or $\partial u / \partial n$, $\Omega = \prod_{i=1}^{k}[a_i, b_i]$ is a rectangular domain in $R^k$ (the space of $k$
real variables) and $\alpha_i\,(<0)$, $\beta_i$, $\gamma\,(\ge 0)$, $f$ and $g$ are functions of $k$ variables.
If Line Cubic Spline Collocation (LCSC) methods are used to solve (1a),(2a), then
the differential operator is discretized along lines in each direction independently and
then the line discretization stencils are combined into one large linear system. In
Section 2 we briefly describe such discretization procedures. We also present and
discuss the resulting coefficient linear system of collocation equations.
In [5], we were able to formulate and analyze iterative schemes for solving the
LCSC linear systems in the case of Helmholtz problems, with Dirichlet boundary
conditions and constant coefficients, that is
$\sum_{i=1}^{k} \alpha_i \frac{\partial^2 u}{\partial x_i^2} + \gamma u = f$ in $\Omega$, (1b)
$u = g$ on $\partial\Omega$. (2b)
Work supported in part by NSF awards CCR 9202536 and CDA 9123502, AFOSR award
F 49620-J-0069 and ARPA-ARO award DAAH04-94-6-0010.
y Purdue University, Computer Science Department, West Lafayette, IN 47907
z Authors permanent address: University of Crete, Mathematics Department, 714 09 Heraklion,
GREECE and IACM, FORTH, 711 10 Heraklion, GREECE.
$D^2_x u$ denotes $\partial^2 u / \partial x^2$. Unfortunately, the convergence analysis presented in
[5], as it stands, can not be applied in the case of the presence of Neumann boundary
conditions on some of the faces of \Omega\Gamma In Section 3 we present a convergence analysis
of block Jacobi, Extrapolated Jacobi (EJ) and Successive Overrelaxation (SOR)
schemes for the iterative solution of the collocation equations that arise from discretizing
elliptic problems (1b) subject to Neumann (N) boundary conditions on one
or more (but not on all if
of\Omega and Dirichlet (D) ones on the others.
More specifically analytic expressions or sharp bounds for the spectral radius of the
corresponding block Jacobi iteration matrix are derived and from these we determine
the convergence ranges and compute the optimal parameters for the Extrapolated
Jacobi and SOR methods. Furthermore, based on our analysis, certain conclusions of
significant practical importance are drawn.
Finally, in Section 4 we present the results of various numerical experiments designed
for the verification of the theoretical behavior of the iterative LCSC solvers.
The experiments show very good agreement with the theoretical predictions as regards
the convergence of the iterative method used. Although the theoretical results
presented here hold for the model problem (1b), (2b), our experimental results indicate
that the behavior of the iterative LCSC solvers on the general problem (1a), (2a)
is similar.
2. The Line Cubic Spline Collocation (LCSC) Method.
In this section we briefly describe the LCSC discretization method and introduce
some notation to be used later. We start by introducing one extra point beyond
each end of the intervals [a i ,b i ]. Each of these enlarged intervals is then discretized
uniformly with step size h i by
ae
oe
The tensor product of these discretized lines,
provides an extended
uniform partition of $\Omega$. A collocation approximation $u_\Delta$ of $u$ in the space $S_{3,\Delta}$ of
cubic splines in k dimensions is defined by requiring that it satisfies the equation (1)
at all the mesh points of \Delta and the equation (2) on the boundary mesh points. In the
sequel the interior mesh points are denoted by (- 1
nk ), for all n 2
In the line cubic spline collocation methods we consider in this paper the collocation
approximation is made on each set L i of lines parallel to the x i axis. More
specifically, this collocation approximation u i
on each line in L i is represented
as follows
are the B-spline basis defined on \Delta i ,
and (3) is the sum of one dimensional splines in
the $x_i$ variable whose coefficients $U_i$ are functions of the other $k-1$ variables. Furthermore,
this approximation is redundant in that there are $k$ choices for representing $u_\Delta$,
one for each coordinate direction.
2.1. The Second Order Line Cubic Spline Collocation Method.
From the assumed representation (3) of u i
\Delta and the nature of the B-spline basis
functions we conclude easily (see also [8] and [5]) that
and
at the mesh (collocation) points P i
on each line of L i . These
points are represented by the vectors
nk
indexes the L i lines.
First, we observe that the collocation equations obtained from the boundary conditions
and the differential equation at the end points of each line can be explicitly
determined as follows
Dirichlet (D) boundary conditions on P iU i
Neumann (N) boundary conditions on P iU i
Dirichlet (D) boundary conditions on P i
Neumann (N) boundary conditions on P i
where for simplicity we have assumed that all boundary conditions are homogeneous.
The rest of the unknowns are determined by solving the so called interior collocation
equations, that is, on those lines of L i not in @
The equations (11) away (2 - from the boundary are given by
\Theta U i
while for lines next to the boundary, they have similar form with appropriately modified
right sides ([5]). The full matrix form of these equations is given in Section 3.
After eliminating the predetermined (from equations (7)-(10)) boundary unknowns,
each collocation approximation u i
coefficients
' . This redundancy is handled by requiring that all these approximations agree on
the mesh points, that is
\Delta on the mesh \Delta
It has been shown ([5], [6], [8]) that the above described method leads to a second
order collocation approximation of u.
2.2. The Fourth Order Line Cubic Spline Collocation Method.
To derive the fourth order LCSC method we use high order approximations of the
derivatives D j
defined as appropriate linear combinations of the spline
interpolant and its derivatives at the mesh points [6]. Specifically, we approximate
the second derivatives in the PDE operator by the difference scheme
The collocation equations corresponding to the mesh points on a line L i away (2 -
from the boundary are written at the point P i
' as
\Theta U i
The redundancy on the coefficients is handled in the same way as in the second order
case, with the only basic difference being that the stencils now have 5, rather than 3
points along each coordinate direction.
3. Iterative Solution of the Interior Line Cubic Spline Collocation
Equations.
The LCSC equations can be written in the form
where the coefficient matrices are defined by
Y
where I denotes a unit matrix of appropriate order and
The matrix H i depends both on the discretization scheme applied and the nature
(Dirichlet (D) or Neumann (N)) of the boundary conditions on the left and right
end-points of the line direction with which the matrix H
Specifically,
ff;fi for the second order discretization and H
ff;fi for the
fourth order one. The matrices T i
are given by
where
(\Gamma2; \Gamma2) in case of (D) conditions on both ends
(\Gamma1; \Gamma2) in case of (N) conditions on the left and
(D) ones on the right end
(\Gamma2; \Gamma1) in case of (D) conditions on the left and
(N) ones on the right end
(\Gamma1; \Gamma1) in case of (N) conditions on both ends
NOTE: Since in the analysis which will follow the existence of $A_1^{-1}$ is a necessary
requirement, we assume, without loss of generality, that on at least one of the faces
of $\Omega$ perpendicular to the $x_1$-direction (D) boundary conditions have been imposed,
so that the invertibility of $A_1$ is guaranteed even if $\gamma = 0$.
Equation (16), coupled with the conditions (13) leads us to
the order kK system of linear equations which, if we order according to the ordering
of unknowns U 1 (n), is given
\GammaB 3 D 3
where for we have that
Y
The matrices in (18) and (22), may be linear or quadratic functions of the matrix
\Gamma2;\Gamma2 if all the boundary conditions are Dirichlet (D). With some Neumann (N)
conditions present, this is no longer the case. Besides, in the case of the fourth order
discretization scheme, the product T i
ff;fi is not a symmetric matrix in the case where
(ff; fi) 6= (\Gamma2; \Gamma2). So, the analysis presented in Section 3 of [5] does not always apply
to the present case. We present a unified analysis for all cases based on the following
three lemmas.
Lemma 3.1. If A and B are Hermitian positive definite matrices of the same order
then (i) AB (and BA) possesses a complete set of linearly independent eigenvectors.
(ii) Further, let X 2 C n;n and oe(X) ae R, where oe(:) denotes the spectrum of a matrix.
Let also -X , -X;m and -X;M denote any, the smallest and the largest eigenvalue of
X, respectively. Then for the eigenvalues of the matrix AB there holds:
$0 < \lambda_{A,m}\,\lambda_{B,m} \le \lambda_{AB,m} \le \lambda_{AB} \le \lambda_{AB,M} \le \lambda_{A,M}\,\lambda_{B,M}$. (23)
Proof. Let A 1=2 and B 1=2 be the unique square roots of A and B (see Theorem
2.7, page 22 in [10]). The matrices AB and A 1=2 B A
are similar, with (\Delta) H denoting the complex conjugate transpose of a given matrix.
Obviously, the matrix (B 1=2 A 1=2 ) H (B 1=2 A 1=2 ) is Hermitian and positive definite.
From the similarity property and the fact that A 1=2 BA 1=2 is Hermitian the validity
of the assertion in (i) follows. Now, it is easily seen that
$\lambda_{AB,M} = \lambda_{A^{1/2}BA^{1/2},M} \le \lambda_{A,M}\,\lambda_{B,M}$,
which proves the two rightmost inequalities in (23). On the other hand, since
$(AB)^{-1} = B^{-1}A^{-1}$ exists and
$1/\lambda_{AB,m} = \lambda_{(AB)^{-1},M} \le \lambda_{B^{-1},M}\,\lambda_{A^{-1},M} = 1/(\lambda_{B,m}\,\lambda_{A,m})$,
which proves that the three leftmost inequalities in (23) hold. 2
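A quick numerical sanity check of the eigenvalue bounds (23) for products of Hermitian positive definite matrices; the matrices below are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # symmetric positive definite
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # symmetric positive definite

lam_A = np.linalg.eigvalsh(A)
lam_B = np.linalg.eigvalsh(B)
lam_AB = np.sort(np.linalg.eigvals(A @ B).real)   # real and positive by Lemma 3.1

assert lam_A[0] * lam_B[0] <= lam_AB[0] + 1e-9
assert lam_AB[-1] <= lam_A[-1] * lam_B[-1] + 1e-9
print("0 <", lam_A[0] * lam_B[0], "<=", lam_AB[0], "<=", lam_AB[-1],
      "<=", lam_A[-1] * lam_B[-1])
```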
NOTE: The assertions of Lemma 3.1 hold even if one of A or B is nonnegative definite.
The only difference is that the two leftmost inequalities in (23) become equalities, that
is $0 = \lambda_{A,m}\,\lambda_{B,m} = \lambda_{AB,m}$. Indeed, the proof for the assertion in (i) is the same if
B is singular, while if A is singular one uses the similarity of the matrices AB and
$B^{1/2}AB^{1/2}$. The proof for the rightmost inequalities in (23) is exactly the same as
before. For the leftmost ones we simply observe that AB is similar to a singular nonnegative definite matrix, so that $\lambda_{AB,m} = 0 = \lambda_{A,m}\,\lambda_{B,m}$.
Lemma 3.2. Consider the real symmetric tridiagonal matrix $T_{\alpha,\beta}$ of order m
given by $T_{\alpha,\beta} = \mathrm{tridiag}(1, -2, 1)$ with its $(1,1)$ entry replaced by $\alpha$ and its $(m,m)$ entry replaced by $\beta$,
where $(\alpha, \beta) \in \{(-2,-2), (-1,-2), (-2,-1), (-1,-1)\}$. Its eigenvalues $\lambda_{T_{\alpha,\beta}}$ are given
by the expressions
(i) If $(\alpha,\beta) = (-2,-2)$: $\lambda_j = -4\sin^2\big(j\pi / (2(m+1))\big)$, $j = 1, \ldots, m$. (25i)
(ii) If $(\alpha,\beta) = (-1,-2)$ or $(-2,-1)$: $\lambda_j = -4\sin^2\big((2j-1)\pi / (2(2m+1))\big)$, $j = 1, \ldots, m$. (25ii)
(iii) If $(\alpha,\beta) = (-1,-1)$: $\lambda_j = -4\sin^2\big((j-1)\pi / (2m)\big)$, $j = 1, \ldots, m$. (25iii)
Proof. The results (25i) and (25iii) are well-known in the literature but we give
here a unified way of obtaining all three of them simultaneously. First, we note that
it can be proved that all these matrices are negative definite, with the exception of
$T_{-1,-1}$ which is non-positive definite. We have $\rho(T_{\alpha,\beta}) \le \|T_{\alpha,\beta}\|_\infty = 4$ and since it
can be checked that $-4 \notin \sigma(T_{\alpha,\beta})$ we conclude that $\sigma(T_{\alpha,\beta}) \subset (-4, 0]$. To determine
all the non-zero eigenvalues $\lambda$ of $T_{\alpha,\beta}$ let $z = (z_1, \ldots, z_m)^T$ denote the associated
eigenvectors. From $T_{\alpha,\beta} z = \lambda z$, one obtains
z
The set of the above equations can be written as
z
provided one sets
The characteristic equation of (26) is
and, since we are looking for - 6= 0; \Gamma4, the two zeros of the quadratic in (28) such
that r 1 6= r 2 satisfy
Consequently the solution to (26) is given by
If we arbitrarily put z use the restriction on z 0 from (27i), the coefficients
c 1 and c 2 can be determined. Next, using the restrictions on z m+1 from (27ii), and
(29ii), one arrives at
r 2m
So, we consider the three cases of the lemma:
m+1 and
from (29ii) one obtains the m distinct expressions for -(6= 0; \Gamma4) given in
(25i).
cos
and again from (29ii) we have the m distinct
values for -(6= 0; \Gamma4) in (25ii).
Working in the same way we obtain the
expressions
\Gamma4). If we incorporate the
as well we have the expressions in (25iii). 2
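The closed forms (25i)-(25iii) can be checked directly against a dense eigenvalue solver; the short sketch below builds three boundary variants of $T_{\alpha,\beta}$ for a small, arbitrarily chosen order $m$ and compares.

```python
import numpy as np

def T_ab(m, alpha, beta):
    """Tridiagonal matrix with off-diagonals 1, diagonal -2, corner entries alpha, beta."""
    T = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    T[0, 0], T[-1, -1] = alpha, beta
    return T

m = 7
j = np.arange(1, m + 1)
closed = {
    (-2, -2): -4 * np.sin(j * np.pi / (2 * (m + 1))) ** 2,                 # (25i)
    (-1, -2): -4 * np.sin((2 * j - 1) * np.pi / (2 * (2 * m + 1))) ** 2,   # (25ii)
    (-1, -1): -4 * np.sin((j - 1) * np.pi / (2 * m)) ** 2,                 # (25iii)
}
for (a, b), lam in closed.items():
    num = np.linalg.eigvalsh(T_ab(m, a, b))
    assert np.allclose(np.sort(lam), num), (a, b)
print("closed-form eigenvalues match the numerical spectra for all three cases")
```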
Lemma 3.3. Let the $n \times n$ matrices $A_i$, $i = 1, \ldots, k$, possess complete sets
of linearly independent eigenvectors $y^{(i,j)}$, with corresponding eigenvalues $\lambda^{(j)}_i$
($j = 1, \ldots, n$; $i = 1, \ldots, k$). Then the matrix $A \equiv A_1 \otimes A_2 \otimes \cdots \otimes A_k$ possesses the $n^k$
linearly independent eigenvectors $y^{(j)} \equiv y^{(1,j_1)} \otimes y^{(2,j_2)} \otimes \cdots \otimes y^{(k,j_k)}$, $j_i \in \{1, \ldots, n\}$, with
corresponding eigenvalues $\lambda^{(j)} = \prod_{i=1}^{k} \lambda^{(j_i)}_i$.
Proof. First, using tensor product properties we can easily verify that Ay
In order to prove the linear independence of the y (j) 's we construct the
whose columns are the n k eigenvectors of A in the following order
(1;1)\Omega y
y
(1;1)\Omega y
(1;1)\Omega y
y
(1;2)\Omega y
y
(1;2)\Omega y
(1;2)\Omega y
(1;n)\Omega y
\Theta y (i;1) y
. From the assumed linear independence
of y (i;j) , we conclude that Y \Gamma1
k exist. This implies the linear independence of the
eigenvectors and concludes the proof of the lemma. 2
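A short check of the tensor product eigenstructure of Lemma 3.3, using random symmetric factors so that complete eigenvector sets are guaranteed; the sizes and entries are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
mats = [rng.standard_normal((3, 3)) for _ in range(3)]
mats = [0.5 * (M + M.T) for M in mats]   # symmetric => complete eigenvector sets

A = mats[0]
for M in mats[1:]:
    A = np.kron(A, M)

eig_A = np.sort(np.linalg.eigvalsh(A))
eig_prod = np.array([1.0])
for M in mats:
    eig_prod = np.outer(eig_prod, np.linalg.eigvalsh(M)).ravel()
assert np.allclose(eig_A, np.sort(eig_prod))
print("eigenvalues of A1 (x) A2 (x) A3 are the products of the factors' eigenvalues")
```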
Having developed the background material required we are able to go on with the
analysis of the three methods associated with the linear system (21): the block Jacobi
(J), the block Extrapolated Jacobi (EJ), and the block Successive Overrelaxation (SOR)
iterative methods, all based on the splitting $A = D - L - M$,
where $D$ is the block diagonal part
and $-L$ and $-M$ are the strictly lower and the strictly upper triangular parts of the
matrix coefficient in (21).
The block Jacobi iteration matrix J associated with the
matrix coefficient in (21), can be described as
Let H and G denote the matrices
\Theta G T
of dimensions $K \times (k-1)K$ and $(k-1)K \times K$, respectively. Consider then the matrix $J^2$
which, apparently, is a block diagonal matrix with diagonal blocks HG and GH
of orders $K$ and $(k-1)K$, respectively. Therefore we have (see Theorem 1.12, page
in [10]) that
However, we have
and each term in the above sum can be found by using (32), (17), (22) and simple
tensor product properties to be
For the second order LCSC, we introduce the matrices S j and find their value to be
6 I
For the fourth order LCSC, we have similarly
\Gamma2;\Gamma2
\Gamma2;\Gamma2
6 I
Due to the presence of the unit matrix factors in the tensor product form (36), all
possess a linearly independent set of K common eigenvectors if and only if
each H
possess complete sets of linearly independent
eigenvectors. For the matrices in (38) this is obvious because T j
real symmetric negative (or non-positive) definite, T 1
ff;fi is real symmetric negative
definite and, in addition, oe(T j
As one can readily see, each of the matrices S j in (39) is
the product of the two real symmetric positive definite matrices 1
\Gamma2;\Gamma2 ) and
(The second matrix factor might also be nonnegative definite.)
Therefore Lemma 3.1 (or its Note) applies. For the matrix A \Gamma1
4 in (40) first observe
that the first matrix term in the brackets is similar to12 (12I
\Gamma2;\Gamma2
which, in turn, is of exactly the same form as the matrices S j in (39), except that
the second matrix factor considered previously is now always positive definite. Con-
sequently, in all possible cases of the LCSC matrices, all terms H j G j in (36) possess
a linearly independent set of K common eigenvectors. By virtue of this result and in
view of Lemma 3.3, it is implied from (36) that
where -X is used to denote any eigenvalue of the matrix X. However, from the
previous discussion it follows that - S i - 0,
-HG - 0. So, from (34) it is implied that J 2 possesses non-positive eigenvalues and
hence the block Jacobi matrix J has a purely imaginary spectrum. From the analysis
so far it becomes clear that the eigenvalues of J 2 and therefore those of J can be given
analytically in the following cases:
(1) In all the cases of the second order LCSC we are dealing with when Dirichlet
and/or Neumann boundary conditions are imposed on the faces of @
(2) In the fourth order one when only Dirichlet boundary conditions are imposed on
the faces of @
This is an immediate consequence of the fact that each matrix S j ,
(37)-(40) is a simple real rational matrix function of the matrix T j
case (1) and of the matrix T j
\Gamma2;\Gamma2 in case (2), respectively.
Having in mind the various conclusions we have arrived at in the analysis so
far, one can state the following theorem which gives analytic expressions for the
eigenvalues of the block Jacobi matrix J in (32).
Theorem 3.4. The eigenvalues - of the block Jacobi iteration matrix
defined in (32) are as follows. We have
For the second order collocation scheme the others are pure imaginary
For the fourth order scheme, where (2a) is assumed to be subjected to Dirichlet boundary
conditions only, the others are pure imaginary
\Gamma2;\Gamma2
\Gamma2;\Gamma2
\Gamma1=2
In (42) and (43) - j
are the eigenvalues of the matrix T j
obtained by the
application of Lemma 3.2. 2
NOTE: It should be pointed out that analytic expressions for the eigenvalues of
J can not be derived, in terms of the matrices T j
ff;fi involved, in the case of the fourth
order LCSC where on at least one face of
@\Omega has Neumann (N) boundary conditions
imposed. This is due to the non-commutativity of the matrices T j
\Gamma2;\Gamma2 and T j
ff;fi for
(ff; fi) 6= (\Gamma2; \Gamma2). However, one can trivially give analytic expressions in terms of the
eigenvalues of the matrices (12I +T j
also, by
virtue of Lemma 3.1, lower and upper bounds can be given for the eigenvalues of H j
in terms of the extreme eigenvalues of T j
\Gamma2;\Gamma2 , and T j
of Lemma 3.2.
To derive the spectral radii of the Jacobi matrices of Theorem 3.4 and also upper
bounds in the case of the previous Note, we introduce some notations first. Let, then,
ff;fi denote an eigenvalue of the matrices T j
as in Theorem 3.4. To
simplify the notation we omit using another index on the generic - j
ff;fi and we omit
the index j from the pair of subscripts (ff; fi). Let also
denote the smallest and the largest eigenvalues in - j
Finally, define the functions
\Gamma2;\Gamma2 )y j
in terms of the eigenvalues - j
ff;fi of the matrices T j
k. Then we have:
Theorem 3.5. (i) The spectral radii of the block Jacobi matrices corresponding
to the collocation schemes considered in Theorem 3.4 are given by the following
expressions:
Second order.
Fourth order.
(ii) Moreover the expression (48) is also a strict upper bound for the square of the
spectral radius of the Jacobi matrix in the case of the fourth order scheme corresponding
to an elliptic problem (1b) where on at least one of the faces of
@\Omega has Neumann
(N) boundary conditions imposed.
Proof: We notice that in (42) and (43) ff
and Hence the functions y
in (45) are nonnegative
and independent of each other. So are the expressions z j := z j (- j
z(-) in (46). To determine the spectral radius of the block Jacobi matrix of the
second order collocation scheme we examine the extreme values of y
k. By differentiation we have
@-
and, consequently,
For the corresponding quantity of the fourth order scheme of Theorem 3.4, working
in a similar way, we obtain
@-
which implies that
It is obvious now using (42) - (46) and (49), (51), that the results (47) and (48) follow.
This concludes the proof for part (i) of the theorem.
For part (ii) we use the analysis preceding the statement of Theorem 3.4 and refer
to the matrices in (39) and (40), especially for those indices
(ff; fi) 6= (\Gamma2; \Gamma2), and also recall the Note immediately following Theorem 3.4. It
follows from this and Lemma 3.1, its Note and Lemma 3.2, that the lower and upper
bounds for the eigenvalues of the matrices in (39) and (40) depend directly on the
extreme eigenvalues of the two
\Gamma2;\Gamma2 ) and ff j
These are readily seen to be12 (12
for the minimum and the maximum eigenvalues for the former matrix and y j
for the latter one, where c j , s j and y j are the expressions in (44) and (45).
From Lemma 3.1, its Note, and using the expressions (46), the bound for the spectral
radius of the corresponding block Jacobi matrix is readily shown to be the expression
in the right hand side of (48).2
REMARK: The analysis so far has been made on the assumption that the x 1 -
direction is somehow predetermined. However, since there are k possible choices for
the x 1 -direction in any particular case, the choice should be made in such a way as
to give the smallest possible values in (47) and (48). Consider then the quantities
min
iB @
taken over all which they are well-defined and also the quantities
min iB @
z
z
C A
considered in the same way. The indices i in (51) and (52) for which the corresponding
minima take place should be interchanged with the index 1 and therefore the x i -
direction should be taken as the x 1 -direction. If, in either case (51) or (52), more
than one index i gives the same minimum value then any such i will do. It should
be noted that after having chosen the x 1 -direction this way, the expressions in (47)
and (48) give then the smallest possible values for the spectral radius or for an upper
bound on it, as was explained, for the particular problem at hand.2
From this point on, the analysis is almost identical with that in [5] and the
interested reader is referred to it. For completeness, we simply mention some of
the theoretical results obtained in [5], which are based on the corresponding theory
developed in [10], [11] and in [1], [2], [3], [4], [9].
(i) The block Extrapolated Jacobi (EJ) method and the block SOR method corresponding
to the block Jacobi method of this paper converge for values of their
parameters varying in some open intervals whose left end is 0 and the right one is a
function or ae(J). However, the optimum SOR is always superior to the optimum EJ
and the corresponding optimal parameters for the SOR method are given by
(ii) For the efficiency of both the serial and the parallel iterative solution of the
linear system (21), a cyclic natural ordering of the unknowns U i ,
adopted according to which
The equations in (16) are reordered according to the ordering of U 1 and each block
of the auxiliary conditions is reordered according to the ordering of the unknowns
U i . It is then proved that the new coefficient matrix A is obtained by a permutation
similarity transformation of the matrix coefficient A in (21) having the same k \Theta k
block structure and, therefore, the associated block Jacobi matrix J is similar to the
previous one J . Consequently, the convergence results are identically the same so that
all the formulas in connection with eigenvalues, spectral radii, etc. of the Jacobi, the
Extrapolated Jacobi and the SOR method studied in this section remain unchanged
when these reorderings are made.
(iii) The new structure of the collocation coefficient matrix A, for the second and
fourth order scheme in 2-dimensions, is given in [5].
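The following generic sketch illustrates the kind of computation behind the parameters discussed above: for a matrix with a chosen block splitting $A = D - L - M$ it forms the block Jacobi iteration matrix, computes its spectral radius, and locates the best SOR relaxation factor by a brute-force sweep over $\omega$. The model matrix is a simple 2-D five-point Laplacian with line blocks, not the actual LCSC collocation matrix, so the numbers are only indicative.

```python
import numpy as np

def block_diag_part(A, nb):
    D = np.zeros_like(A)
    for s in range(0, A.shape[0], nb):
        D[s:s + nb, s:s + nb] = A[s:s + nb, s:s + nb]
    return D

def sor_radius(A, nb, omega):
    D = block_diag_part(A, nb)
    L = -np.tril(A - D)                     # A = D - L - M
    M = -np.triu(A - D)
    G = np.linalg.solve(D - omega * L, (1.0 - omega) * D + omega * M)
    return max(abs(np.linalg.eigvals(G)))

# Model problem: 2-D 5-point Laplacian on a 6 x 6 grid, blocks = grid lines.
n = 6
T = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.kron(np.eye(n), T) - np.kron(np.eye(n, k=1) + np.eye(n, k=-1), np.eye(n))

D = block_diag_part(A, n)
J = np.linalg.solve(D, D - A)               # block Jacobi iteration matrix
rho_J = max(abs(np.linalg.eigvals(J)))

omegas = np.linspace(0.05, 1.95, 77)
best = min(omegas, key=lambda w: sor_radius(A, n, w))
print("rho(J) =", rho_J, " empirically best omega ~", best)
```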
4. Numerical Experiments.
In this section we summarize the results of some numerical experiments that verify
the convergence properties of the iterative solution methods analyzed in Section 3. We
mention that although we present numerical data for only the O(h 2 ), 3-dimensional
case, these are very representative of problems with different dimensionality or with
$O(h^4)$ discretization schemes. For experimental data on the convergence properties
of the LCSC method and the iterative solvers in the case of only Dirichlet boundary
conditions the reader is referred to [5] and [6]. The parallel implementation details
and the performance of the iterative LCSC schemes, for 2-dimensional problems, on
several multiprocessing systems can be found in [7].
We have applied the LCSC discretization techniques with uniform (NGRID by
NGRID by NGRID) meshes to approximate the known solution of the following set
of PDEs.
x
z
PDE 2: 4D 2
y
z
PDE 3: D 2
9
z
Each is defined on the unit cube and subject to one of the following types of boundary
conditions.
1: Dirichlet conditions on all faces of $\Omega$.
2: Neumann conditions on some of the faces of $\Omega$ and Dirichlet ones on the rest.
3: Neumann conditions on one face of $\Omega$ and Dirichlet on the rest.
These PDEs and boundary conditions are combined to give 9 problems; our theory
is applicable only to 6 of these (those excluding PDE 3). The right hand side f is
selected so that the true solution is always
The linear systems from the LCSC discretization were solved by the proposed SOR
iteration method with the termination criterion being that $\|U^{(m+1)} - U^{(m)}\|$ lies in
the interval $(0, 10^{-7})$.
Table 1: The required number of SOR iterations to solve the LCSC equations for PDE 2 and various boundary conditions.
All experiments were performed, in double precision, on a
workstation.
In Figure 1 we present the theoretically estimated (using the material developed in
Section 3 when applicable) and the experimentally determined (by systematic search)
values of the optimum SOR relaxation parameter $\omega_{opt}$. Specifically, the points we plot
are the experimentally observed optimal values of $\omega$ for various values of NGRID.
The lines we plot show the relation (determined using (53) and the bound (48)) between
$\omega_{opt}$ and the discretization parameter NGRID. Since our theory cannot be
directly used to determine such a relation in the case of PDE 3, we plot only the experimentally
determined $\omega_{opt}$.
Our first observation is that there is a good agreement between our theory and the
experiments in the six cases where it applies. The theoretical values for ! opt are close
to (though always larger than) the measured ones and exhibit the same dependence
on the discretization and PDE problem parameters. Besides confirming our theory,
these experiments also show that for PDEs where our analysis is not applicable the
proposed SOR scheme still converges. Furthermore relaxing with ! opt determined by
our theory leads us to comparable rates of convergence.
Another interesting observation is that ! opt seems to go fast and asymptotically to
a number in the interval [:03; :08]. It is therefore expected that the rate of convergence
will not decrease rapidly as NGRID increases beyond 30. This is confirmed in Table
1 where we observe that increasing NGRID from 24 to 32 increases the number of
iterations only by at most 30%.
Table
1 presents the SOR iterations required to solve the discretized equations
using the optimal value for the relaxation parameter ! and a 10 \Gamma7 stopping criterion
for the problems defined by PDE 2 and various boundary conditions. We also note here
that the measured discretization errors for all the experiments confirm the expected
second order of convergence of the collocation discretization scheme. Specifically, the
measured order in all cases is in the interval [1:8; 2:1].
--R
Iterative methods with k-part splittings
The optimal solution of the extrapolation problem of a first order scheme
Iterative line cubic spline collocation methods for elliptic partial differential equations in several dimensions
Spline collocation methods for elliptic partial differential equations
Convergence of O(h 4
On complex successive overrelaxation
Matrix Iterative Analysis
Iterative Solution of Large Linear Systems
--TR | SOR iterative method;collocation methods;elliptic partial differential equations |
342586 | A Class of Highly Scalable Optical Crossbar-Connected Interconnection Networks (SOCNs) for Parallel Computing Systems. | AbstractA class of highly scalable interconnect topologies called the Scalable Optical Crossbar-Connected Interconnection Networks (SOCNs) is proposed. This proposed class of networks combines the use of tunable Vertical Cavity Surface Emitting Lasers (VCSEL's), Wavelength Division Multiplexing (WDM) and a scalable, hierarchical network architecture to implement large-scale optical crossbar based networks. A free-space and optical waveguide-based crossbar interconnect utilizing tunable VCSEL arrays is proposed for interconnecting processor elements within a local cluster. A similar WDM optical crossbar using optical fibers is proposed for implementing intercluster crossbar links. The combination of the two technologies produces large-scale optical fan-out switches that could be used to implement relatively low cost, large scale, high bandwidth, low latency, fully connected crossbar clusters supporting up to hundreds of processors. An extension of the crossbar network architecture is also proposed that implements a hybrid network architecture that is much more scalable. This could be used to connect thousands of processors in a multiprocessor configuration while maintaining a low latency and high bandwidth. Such an architecture could be very suitable for constructing relatively inexpensive, highly scalable, high bandwidth, and fault-tolerant interconnects for large-scale, massively parallel computer systems. This paper presents a thorough analysis of two example topologies, including a comparison of the two topologies to other popular networks. In addition, an overview of a proposed optical implementation and power budget is presented, along with analysis of proposed media access control protocols and corresponding optical implementation. | as opposed to the L N2 log2Nlinks required for a
standard binary hypercube. This multiplexing greatly
reduces the link complexity of the entire network, reducing
implementation costs proportionately.
4.2 Network Diameter
The diameter of a network is defined as the minimum
distance between the two most distant processors in the
network. Since each processor in an OHC2N cluster can
communicate directly with every processor in each directly
connected cluster, the diameter of a OHC2N containing
$N_H = n \cdot 2^d$ processors is:
$K_H = d = \log_2(N_H / n)$, (9)
which is dependent only on the degree of the hypercube
(the diameter and the degree of a hypercube network are
the same).
4.3 Bisection Width
The bisection width of a network is defined as the minimum
number of links in the network that must be broken to
partition the network into two equal sized halves. The
bisection width of a d-dimensional binary hypercube is
$2^{d-1}$, since that many links are connected between two
$(d-1)$-dimensional hypercubes to form a d-dimensional
hypercube. Since each link in an OHC2N contains
n channels, the bisection width of the OHC2N is:
$B_H = n \cdot 2^{d-1} = N_H / 2$, (10)
which increases linearly with the number of processors.
A major benefit of such a topology is that a very large
number of processors can be connected with a relatively
small diameter and relatively fewer intercluster connec-
tions. For example, with n processors per cluster and
fiber links per cluster, 1,024 processors can be
connected with a high degree of connectivity and a high
bandwidth. The diameter of such a network is 6, which
implies a low network latency for such a large system, and
only 192 bidirectional intercluster links are required. If a
system containing the same number of processors is
constructed using a pure binary hypercube topology, it
would require a network diameter of 10, and 5,120
interprocessor links.
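A small helper based on the relations in this section reproduces the 1,024-processor comparison quoted above (diameter 6 and 192 bidirectional intercluster links for the OHC2N with n = 16, c = 64, versus diameter 10 and 5,120 links for a pure binary hypercube). The link-count expressions are the ones implied by these example figures, so treat the code as a sketch.

```python
import math

def ohc2n_stats(n, c):
    """n processors per cluster, c clusters connected as a binary hypercube."""
    d = int(math.log2(c))
    return {"processors": n * c,
            "diameter": d,
            "intercluster_links": c * d // 2}

def hypercube_stats(N):
    d = int(math.log2(N))
    return {"processors": N, "diameter": d, "links": N * d // 2}

print(ohc2n_stats(n=16, c=64))   # {'processors': 1024, 'diameter': 6, 'intercluster_links': 192}
print(hypercube_stats(1024))     # {'processors': 1024, 'diameter': 10, 'links': 5120}
```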
4.4 Average Message Distance cluster in the network 1. This would increase
The average message distance for a network is defined as the size of the network by c:
the average number of links that a message should traverse
through the network. This is a slightly better measure of
network latency than the diameter, because it aggregates and would not effect the cluster node degree of the
distances over the entire network rather than just looking at network. This is very similar to the fixed-c case for the
the maximum distance. The average message distance l can OC2N configuration, and the granularity of size scaling in
be calculated as [27]: this case is also the number of clusters c. Again, this is the
easiest method for scaling because it does not require the
addition of any network hardware, and it more fully utilizes
l iNi; 11
the inherently high bandwidth of the WDM optical links.
The two topologies presented in this paper are by no
where Ni represents the number of processors at a distance i means the only two topologies that could be utilized to
from the reference processor, N is the total number of construct an SOCN class network. As an example, assume
processors in the network, and K is the diameter of the that an SOCN network exists that is configured in a torus
network. configuration, and the addition of some number of
Since the OHC2N is a hybrid of a binary hypercube and processors is required. The total number of processors
a crossbar network, the equation for the number of required in the final network may be the number supported
processors at a given distance in an OHC2N can be derived by an OHC2N configuration. In this case, the network
from the equation for a binary hypercube: could be reconfigured into an OHC2N configuration by
simply changing the routing of the intercluster links and
changing the routing algorithms. This reconfigurability
makes it conceivable to reconfigure an SOCN class network
and since each cluster in the OHC2N hypercube topology with a relatively arbitrary granularity of size scaling.
contains n processors, the number of processors at a
4.6 Fault Tolerance and Congestion Avoidance
distance i for an OHC2N can be calculated as:
Since the OHC2N architecture combines the edges of a
K hypercube network with the edges a crossbar network, the
tolerance and congestion avoidance schemes of both
architectures can be combined into an even more powerful
with the addition of n 1 for i 1 to account for the
congestion avoidance scheme. Hypercube routers typically
processors within the local cluster. Substituting into (11)
scan the bits of the destination address looking for a
and computing the summation gives the equation for the
difference between the bits of the destination address and
average messages distance for the OHC2N:
the routers address. When a difference is found, the
KN message is routed along that dimension. If there are
multiple bits that differ, the router may choose any of those
dimensions along which to route the message. The number
Substituting in the diameter of the OHC2N produces: of redundant links available from a source processor along
an optimum path to the destination processor is equal to the
Hamming distance between the addresses of the two
respective processors. If one of the links are down, or if
4.5 Granularity of Size Scaling of the OHC2N one of the links is congested due to other traffic being
routed through the connecting router, the message can be
For an OHC2N hypercube connected crossbar network
routed along one of the other dimensions.
containing a fixed number of processors per cluster n,we In addition, the crossbar network connections between
can increase the network size by increasing the size of the clusters greatly increases the routing choices of the routers.
second level hypercube topology. Since the granularity of The message only must be transmitted using the wave-
size scaling for an c-processor hypercube is c, it would length of the destination processor when it is transmitted
require the addition of c clusters to increase the size of the over the last link in the transmission (the link that is directly
OHC2N in the fixed-n case c2 2c1. Increasing the size of connected to the destination processor). A message can be
the OHC2N in the fixed-n case would also require adding transmitted on any channel over any other link along the
routing path. This means that each router along the path of
another intercluster link to each cluster in the network,
the message traversal not only has a choice of links based on
increasing the intercluster node degree by one. In this case,
the hypercube routing algorithm, but also a choice of n
the granularity of size scaling is: different channels along each of those links. The router may
d choose any of the n links that connect the local cluster to the
remote cluster. This feature greatly increases the fault
If we assume, instead, the fixed-c case, then we can increase tolerance of the network as well as the link load balancing
the network size by adding another processor to each and congestion avoidance properties of the network.
TABLE 1: Comparison of Size, Degree, Diameter, and Number of Links of Several Popular Networks (n = number of processors per cluster, c = number of clusters, w = number of processors per bus/ring/multichannel link, and N = total number of processors).
As an example, if the Hamming distance between the
cluster address of the current router and the destination
processor cluster address is equal to b, the router will have a
choice b n different channels with which to choose to
route to. Even if all the links along all the routing
dimensions for the given message are down or are
congested, the message can still be routed around the
failure/congestion via other links along nonoptimal paths,
as long as the network has not been partitioned. In addition,
if nonshortest path routing algorithms are used to further
reduce network congestion, many more route choices are
made available.
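The routing freedom described here is easy to quantify; the sketch below, under the simplifying assumptions used in this section (binary cluster addresses and a fixed number n of wavelength channels per link), lists the candidate (dimension, channel) pairs a router can pick from, so that a Hamming distance of b to the destination cluster gives b x n immediate choices. The address and channel values are made up.

```python
def routing_choices(current_cluster, dest_cluster, n_channels):
    """Candidate (hypercube dimension, wavelength channel) pairs for the next hop.
    Cluster addresses are integers; differing address bits give usable dimensions."""
    diff = current_cluster ^ dest_cluster
    dims = [i for i in range(diff.bit_length()) if diff >> i & 1]
    return [(d, ch) for d in dims for ch in range(n_channels)]

choices = routing_choices(current_cluster=0b010110, dest_cluster=0b000101, n_channels=16)
print(len(choices), "immediate routing choices")   # Hamming distance 3 -> 3 * 16 = 48
```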
5COMPARISON TO OTHER POPULAR NETWORKS
In this section, we present an analysis of the scalability of
the SOCN architecture with respect to several scalability
parameters. Bisection width is used as a measure of the
bandwidth of the network, and diameter and average
message distance are used as measures of the latency of the
network. Common measures of the cost or complexity of an
interconnection network are the node degree of the network
and the number of interconnection links. The node degree
and number of links in the network relates to the number of
parts required to construct the network. Cost, though, is
also determined by the technology, routing algorithms, and
communication protocols used to construct the network.
Traditionally, optical interconnects have been considered a
more costly alternative to electrical interconnects, but recent
advances in highly integrated, low power arrays of emitters
(e.g., VCSELs and tunable VCSELs) and detectors, inexpensive
polymer waveguides, and low cost microoptical
components can reduce the cost and increase the scalability
of high performance computer networks, and can make
higher node degrees possible and also cost effective.
Both the OC3N and OHC2N configurations are compared
with several well-known network topologies that
have been shown to be implementable in optics. These
network topologies include: a traditional Crossbar network
(CB), the Binary Hypercube (BHC) [27], the Cube Connected
Cycles (CCC) [28], the Torus [29], the Spanning Bus
Hypercube (SBH) [30], and the Spanning Multichannel
Linked Hypercube (SMLH) [25]. Each of these networks
will be compared with respect to degree, diameter, number
of links, bisection bandwidth, and average message dis-
tance. There are tradeoffs between the OC3N and OHC2N
configuration, and other configurations of a SOCN class
network might be considered for various applications, but it
will be shown that the OC3N and OHC2N provide some
distinct advantages for medium sized to very large-scale
parallel computing architectures.
Various topological characteristics of the compared
networks are shown in Tables 1 and 2. The notation
OC3N(n = 16, c) implies that the number of processors
per cluster n is fixed at 16 and the number of clusters c is
changed in order to vary the number of processors N. The
notation OHC2N(n = 16, d) implies that the number of
processors per cluster n is fixed, and the dimensionality of
the hypercube d is varied. The number of processors is the
only variable for a standard crossbar, so CB(N) implies a
crossbar containing N processors. For the binary hypercube,
the dimensionality of the hypercube d varies with the size of
the network. The notation CCC(d) implies that the number
of dimensions of the Cube Connected Cycles d varies. The
notation Torus(w, d = 3) implies that the dimensionality d is
fixed and the size of the rings w varies with the number of
processors. The notation SBH(w = 3, d) implies that the
size of the buses in the SBH network w remains constant
while the dimensionality d changes. The notation
SMLH(w = 32, d) denotes that the number of multichannel
links w is kept constant and the dimensionality of the
hypercube d is varied.
TABLE 2
Comparison of Bisection Bandwidth and Average Message Distance for Several Popular Networks
5.1 Network Degree
Fig. 4 shows a comparison of the node degree of various
networks with respect to system size (number of processing
elements). It can be seen that for medium size networks
containing 128 processors or less, the two examples OC3N
networks provide a respectable cluster degree of 4 for a
OC3Nn 16;c configuration, and 8 for a OC3Nn 32;c
configuration. This implies that a fully connected crossbar
network can be constructed for a system containing 128
processors with a node degree as low as 4. A traditional
crossbar would, of course, require a node degree of 127 for
the same size system.
The node degrees of the OHC2Nn 16;d and
OHC2Nn 32;d configurations are very respectable for
much larger system sizes. For a system containing on the
order of 10; 000 processor, both the OHC2Nn 16;d and
the OHC2Nn 32;d configurations would require a node
degree of around 7-8, which is comparable to most of the
other networks, and much better than some.
5.2 Network Diameter
Fig. 5 shows a comparison of the diameter of various
networks with respect to the system size. The network
diameter is a good measure of the maximum latency of the
network because it is the length of the shortest path
between the two most distant nodes in the network. Of
course, the diameter of the OC3N network is the best
because each node is directly connected to every other
node, so the diameter of the OC3N network is identically 1.
As expected, the diameter of the various OHC2N
networks scale the same as the BHC network, with a fixed
negative bias due to the number of channels in each
crossbar. The SMLHw; d networks also scale the same as
the BHC network, with a larger fixed bias. For a 10; 000
processor configuration, the various OHC2N networks are
comparable or better than most of the comparison net-
works, although the SMLHw; d networks are better
because of their larger inherent fixed bias.
5.3 Number of Network Links
The number of links (along with the degree of the network)
is a good measure of the overall cost of implementing the
network. Ultimately, each link must translate into some sort
of wire(s), waveguide(s), optical fiber(s), or at least some set
of optical components (lenses, gratings, etc.). It should be
noted that this is a comparison of the number of
interprocessor/intercluster links in the network and a link
could consist of multiple physical data paths. For example,
an electrical interface would likely consist of multiple wires.
The proposed optical implementation of a SOCN crossbar
consists of an optical fiber pair (send and receive) per
intercluster link.
Fig. 6 shows a plot of the number of network links with
respect to the number of processors in the system. The
OC3N network compares very well for small to medium
sized systems, although the number of links could become
prohibitive when the number of processors gets very large.
The OHC2N network configurations show a much better
scalability in the number of links for very large-scale
systems. For the case of around 10,000 processors, the
OHC2N(n = 32, d) network shows greater than an order of
magnitude fewer links than any other network architecture.
5.4 Bisection Width
The bisection width of a network is a good measure of the
overall bandwidth of the network. The bisection width of a
network should scale close to linearly with the number of
processors for a scalable network. If the bisection width
does not scale well, the interconnection network will
become a bottleneck as the number of processors is
increased.
Fig. 7 shows a plot of the bisection width of various
network architectures with respect to the number of
processors in the system. Of course, the OC3N clearly
provides the best bisection width because the number of
interprocessor links in an OC3N increases as a factor of
O(N^2) with respect to the number of processors. The
OHC2N configurations are very comparable to the best of
the remaining networks, and are much better than some of
the less scalable networks.
5.5 Average Message Distance
The average message distance within a network is a good
measure of the overall network latency. The average
message distance can be a better measure of network
latency than the diameter of the network because the
average message distance is aggregated over the entire
network and provides an average latency rather than the
maximum latency.
Fig. 8 shows a plot of the average message distance with
respect to the number of processors in the system. Of
course, the OC3N provides the best possible average
message distance of 1 because each processor is connected
to every other processor. The OHC2N network configurations
display a good average message distance for medium
to very large-scale configurations, which is not as good as
the average message distance of the SMLH networks, but is
much better than the remaining networks.
6 OPTICAL IMPLEMENTATION OF THE SOCN
Tunable VCSELs provide a basis for designing compact all-optical
crossbars for high speed multiprocessor intercon-
nects. An overview of a compact all-optical crossbar can be
seen in Fig. 9. A single tunable VCSEL and a single fixed-
frequency optical receiver are integrated onto each processor
in the network. This tight coupling between the optical
Fig. 4. Comparison of network degree with respect to system size for various networks.
Fig. 5. Comparison of network diameter with respect to system size for various networks.
Fig. 6. Comparison of the total number of network interconnection links with respect to system size for various networks.
Fig. 7. Comparison of the bisection width with respect to system size for various networks.
Fig. 8. Comparison of the average message distance within the network with respect to system size for various networks.
transceivers and the processor electronics provides an all-optical
path directly from processor to processor, taking full
advantage of the bandwidth and latency advantages of
optics in the network.
The optical signal from each processor is directly
coupled into polymer waveguides that route the signal
around the PC board to a waveguide based optical
combiner network. Polymer waveguides are used for this
design because they provide a potentially low cost, all-optical
signal path that can be constructed using relatively
standard manufacturing techniques. It has been shown that
polymer waveguides can be constructed with relatively
small losses and greater than 30 dB crosstalk isolation with
waveguide dimensions on the order of 50 μm x 50 μm and
with a 60 μm pitch [31], implying that a relatively large-scale
crossbar and optical combiner network could be constructed
within an area of just a few square centimeters.
The combined optical signal from the optical combiner is
routed to a free-space optical demultiplexer/crossbar.
Within the optical demultiplexer, passive free-space optics
is utilized to direct the beam to the appropriate destination
waveguide. As can be seen in the inset in Fig. 9, the beam
emitted from the input optical waveguide shines on a
concave, reflective diffraction grating that diffracts the
beam through a diffraction angle that is dependent on the
wavelength of the beam, and focuses the beam on the
appropriate destination waveguide. The diffraction angle
varies with the wavelength of the beam, so the wavelength
of the beam will define which destination waveguide, and
hence, which processor receives the transmitted signal.
Each processor is assigned a particular wavelength that it
will receive based on the location of its waveguide in the
output waveguide array. For example, for processor 1 to
transmit to processor 3, processor 1 would simply transmit
on the wavelength assigned to processor 3 (e.g., λ3). If each
processor is transmitting on a different wavelength, each
signal will be routed simultaneously to the appropriate
destination processor. Ensuring that no two processors are
transmitting on the same wavelength is a function of the
Fig. 9. A proposed compact optical crossbar consisting of polymer waveguides directly coupled to processor mounted VCSELs, a polymer
waveguide based optical combiner, and a compact free-space optical crossbar/demultiplexer. The proposed optical crossbar can be connected to
remote processors using a single optical fiber or connected locally by eliminating the optical fiber.
media access control (MAC) protocol (detailed in a later
section).
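A toy model of this wavelength-routing behavior is sketched below in Python
(illustrative only; the processor numbering and the wavelength labels are assumptions,
not values from the design).

def route(transmissions, wavelength_of):
    """transmissions: dict sender -> wavelength it transmits on.
    wavelength_of: dict processor -> the fixed wavelength it receives.
    Returns dict receiver -> list of senders whose signal reaches it."""
    receiver_of = {w: p for p, w in wavelength_of.items()}
    delivered = {}
    for sender, w in transmissions.items():
        delivered.setdefault(receiver_of[w], []).append(sender)
    return delivered

wavelength_of = {1: "l1", 2: "l2", 3: "l3", 4: "l4"}   # processor -> receive wavelength
tx = {1: "l3", 2: "l4", 4: "l2"}                       # all senders on distinct wavelengths
out = route(tx, wavelength_of)
assert out == {3: [1], 4: [2], 2: [4]}                 # every message delivered in parallel
# If two senders pick the same wavelength, both land on one receiver -- exactly
# the contention the MAC protocol of Section 7 must prevent.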
After routing through the free-space optical demultiplex-
er, the separate optical signals are routed to the appropriate
destination processor via additional integrated optical
waveguides. As can be seen in Fig. 9, the combined optical
signal between the optical combiner network and the
demultiplexer can be coupled into a single optical fiber to
route to a remote PC board to implement an intercluster
optical crossbar, or a short length of polymer waveguide
could replace the optical fiber to implement a local
(intracluster) optical crossbar.
A power budget and signal-to-noise ratio (SNR) analysis
have been conducted for the intracluster and intercluster
optical crossbars [32], [33], [34]. The result of the power
budget analysis is shown in Table 3. Assuming a necessary
receiver power of -30 dBm, a VCSEL power of 2 dBm [35],
and a required bit-error rate (BER) of 10^-15, it was
determined that with current research-level technology,
processors could be supported by such a network, with more
processors very nearly possible.
Details of the optical implementation of the SOCN
crossbar interconnect and a thorough analysis of the optical
implementation can be found in references [32], [33], [34],
and [36].
7 MEDIA ACCESS IN THE SOCN
An SOCN network contains a local intracluster WDM
subnetwork and multiple intercluster WDM subnetworks at
each processing node. Each of these intracluster and
intercluster subnetworks has its own medium that is shared
by all processors connected to the subnetwork. Each of
these subnetworks are optically isolated, so the media
access can be handled independently for each subnetwork.
One advantage of an SOCN network is that each subnetwork
connects processors in the same cluster to processors
in a single remote cluster. The optical media are shared only
among processors in the same cluster. This implies that
media access control interaction is only required between
processors on the same cluster. Processors on different
clusters can transmit to the same remote processor at the
same time, but they will be transmitting on different media.
This could cause conflicts and contention at the receiving
processor, but these conflicts are an issue of flow control,
which is not in the scope of this paper.
7.1 SOCN MAC Overview
In a SOCN network, the processors cannot directly sense
the state of all communication channels that they have
access to, so there must be some other method for
processors to coordinate access to the shared media. One
method of accomplishing this is to have a secondary
broadcast control/reservation channel. This is particularly
advantageous in a SOCN class network because the
coordination need only happen among processors local to
the same cluster. This implies that the control channel can
be local to the cluster, saving the cost of running more
intercluster cabling, and ensuring that it can be constructed
with the least latency possible. For control-channel based
networks, the latency of the control channel is particularly
critical because a channel must be reserved on the control
channel before a message is transmitted on the data
network, so the latency of the control channel adds directly
to the data transfer latency when determining the overall
network latency.
Since there are multiple physical channels at each cluster
(the local intracluster network and the various intercluster
network connections), it is conceivable that each physical
data channel could require a dedicated control channel.
TABLE 3
Losses (in dB) for Each Component of the Optical Crossbar
Fortunately, each physical data channel on a given cluster is
shared by the same set of processors, so it is possible to
control access to all data channels on a cluster using a single
control channel at each cluster. Each WDM channel on each
physical channel is treated as a shared channel, and MAC
arbitration is controlled globally over the same control
channel.
7.2 A Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) MAC Protocol
If we assume that a control channel is required, one possible
implementation of a MAC protocol would be to allow
processors to broadcast channel allocation requests on the
control channel prior to transmitting on the data channel. In
this case, some protocol would need to be devised to resolve
conflicts on the control channel. One candidate might be the
Carrier Sense Multiple Access/Collision Detection (CSMA/
CD) protocol.
Running CSMA/CD over the control channel to request
access to the shared data channels is similar to standard
CSMA/CD protocols, such as that used in Ethernet net-
works, except that Ethernet is a broadcast network, where
each node can see everything that is transmitted, so the
CSMA/CD used within Ethernet is run over the data
network and a separate control channel is not required.
There are some advantages to using CSMA/CD as a media
access control protocol. The primary advantage is that the
minimum latency for accessing the control channel is zero.
The primary disadvantage to using such a protocol for a
SOCN based system is that it requires that state information
be maintained at each node in the network. Each processing
node must monitor the control channel and track which
channels have been requested. When a channel is re-
quested, each processor must remember the request so that
it will know if the channel is busy when it wishes to
transmit. There is also a question about when a data
channel becomes available after being requested. A node
could be required to relinquish the data channel when it is
finished with it by transmitting a data channel available
message on the control channel, but this would double the
utilization of the control channel, increasing the chances of
conflicts and increasing latency. The requirement that a
large amount of state information be maintained at each
node also increases the chances that a node could get out-
of-sync, creating conflicts and errors in the data network.
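A minimal sketch of the per-node state tracking such a protocol would require is
given below in Python (an illustration only, assuming a simple request/release
message format that is not specified in the text).

class ChannelStateTable:
    """Every node replays the same broadcast control messages to stay in sync."""
    def __init__(self, num_channels):
        self.busy = [False] * num_channels

    def observe(self, msg):
        kind, channel = msg            # ("request", ch) or ("release", ch)
        self.busy[channel] = (kind == "request")

    def can_transmit(self, channel):
        return not self.busy[channel]

node = ChannelStateTable(num_channels=8)
node.observe(("request", 5))
assert not node.can_transmit(5)
node.observe(("release", 5))       # explicit release messages double control traffic
assert node.can_transmit(5)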
7.2.1 A THORN-Based Media Access Control Protocol
Another very promising control channel based media
access control protocol was proposed for the HORN
network [24]. This protocol, referred to as the Token
Hierarchical Optical Ring Network protocol (THORN) is a
token based protocol based on the Decoupled Multichannel
Optical Network (DMON) protocol [37]. In the THORN
protocol, tokens are passed on the control channel in a
virtual token ring. As can be seen in Fig. 10, THORN tokens
contain a bit field containing the active/inactive state of
each of the data channels. There is also a bit field in the
token that is used to request access to a channel that is
currently busy. In addition, there is an optional payload
field that can be used to transmit small, high priority data
packets directly over the control channel. All state information
is maintained in the token, so local state information is
not required at the processing nodes in the network,
although processors may store the previous token state in
the eventuality that a token might be lost by a processor
going down or other network error. In this eventuality, the
previous token state could be used to regenerate the token.
This still requires that processors maintain a small amount
of state information, but this state information would be
constantly refreshed and would seldom be used, so the
chances of the state becoming out-of-sync are minimal.
As can be seen in Fig. 11, there is a single control channel
for any number of data channels, and tokens are continuously
passed on the control channel that hold the entire
state of the data channels. If a processing node wishes to
transmit on a particular data channel, it must wait for the
token to be received over the control channel. It then checks
Fig. 10. The layout of a THORN-based token request packet. Each token packet contains one bit per channel for busy status and one bit per channel
for the channel requests. The token packet also contains an optional payload for small, low latency messages.
Fig. 11. A timing diagram for control tokens and data transfers in a SOCN architecture using a form of the THORN protocol. A node may transmit on
a data channel as soon as it acquires the appropriate token bit. Setting the request bit forces the relinquishing of the data channel.
the busy bit of the requested data channel to see if it is
set. If the busy bit is not set, then the data channel is not
currently active, and the processing node can immediately
begin transmitting on the data channel. It must also
broadcast the token, setting the busy bit in the data channel
that it is transmitting on. If the busy bit is already set, it
implies that some other transmitter is currently using the
requested data channel. If this is the case, then the
processing node must set the request bit of the desired
data channel, which indicates to the processing node that is
currently transmitting on the desired channel that another
transmitter is requesting the channel.
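A minimal Python sketch of this token handling follows (an illustration based on the
description above and Fig. 11; the Token and Node classes and the exact hand-off rule
are assumptions for this sketch, not the THORN specification).

class Token:
    def __init__(self, num_channels):
        self.busy = [False] * num_channels
        self.request = [False] * num_channels
        self.payload = None              # optional small high-priority message

class Node:
    def __init__(self):
        self.holding = set()

def on_token_arrival(token, node, want_channel=None):
    """Called when `node` receives the circulating control token."""
    # Relinquish any channel another node has requested from us.
    for ch in list(node.holding):
        if token.request[ch]:
            token.busy[ch] = False
            token.request[ch] = False
            node.holding.discard(ch)
    # Try to acquire the channel we want.
    if want_channel is not None:
        if not token.busy[want_channel]:
            token.busy[want_channel] = True
            node.holding.add(want_channel)     # start transmitting immediately
        else:
            token.request[want_channel] = True # ask the current holder to release
    return token    # forward the token to the next node in the virtual ring

a, b = Node(), Node()
tok = Token(num_channels=4)
on_token_arrival(tok, a, want_channel=2)   # a acquires channel 2
on_token_arrival(tok, b, want_channel=2)   # b finds it busy, sets the request bit
on_token_arrival(tok, a)                   # a sees the request and relinquishes
assert 2 not in a.holding and not tok.busy[2]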
A disadvantage of a token-ring based media access
control protocol is that the average latency for requesting
channels will likely be higher than with a CSMA/CD
protocol. If we assume a single control channel per cluster,
with a cluster containing n processors and m physical data
channels (one intracluster subnetwork and m - 1 intercluster
subnetworks), the control token would contain n x m
busy bits and n x m request bits. For example, if a
system contains n = 16 processors per cluster and m = 8
WDM subnetwork links, the control token would require
128 busy bits and 128 request bits. If we assume a control
channel bandwidth of 2Gbps, and if we ignore the
possibility of a token payload, we can achieve a maximum
token rotation time (TRT) of 128ns. This is assuming that a
node starts retransmitting the token as soon at it starts
receiving the token, eliminating any token holding latency.
This would imply a minimum latency for requesting a
channel of close to zero (assuming the token is just about to
arrive at the requesting processing node) and up to a
maximum of 128ns, which would give an average control
channel imposed latency of approximately 64ns. If a lower
latency is required, a CSMA/CD protocol could be
implemented, or multiple control channels could be
constructed that would reduce the latency proportionally.
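The token-rotation-time estimate above can be reproduced in a few lines of Python
(a sketch using the numbers quoted in the text; framing overhead and the optional
token payload are ignored).

def token_rotation_time_ns(n_processors, m_channels, control_gbps, payload_bits=0):
    token_bits = 2 * n_processors * m_channels + payload_bits   # busy + request bits
    return token_bits / control_gbps                            # 1 Gbps = 1 bit per ns

trt = token_rotation_time_ns(n_processors=16, m_channels=8, control_gbps=2.0)
assert trt == 128.0          # 256 bits at 2 Gbps -> 128 ns maximum rotation time
average_wait = trt / 2       # roughly 64 ns average control-channel latency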
7.3 Control Channel Optical Implementation
Irrespective of the media access control protocol, a
dedicated control channel is required that is broadcast to
each processor sharing transmit access to each data channel.
Since each physical data channel is shared among only
processors within the same cluster, the control channel can
be implemented local to the cluster. This will simplify the
design and implementation of the control channel because it
will not require routing extra optical fibers between
clusters, and will not impose the optical loss penalties
associated with routing the optical signals off the local
cluster.
An implementation of a broadcast optical control
channel is depicted in Fig. 12. The optical signal from a
dedicated VCSEL on each processor is routed through a
polymer waveguide based star coupler that combines all the
signals from all the processors in the cluster and broadcasts
the combined signals back to each processor, creating
essentially an optical bus. The primary limitation of a
broadcast based optical network is the optical splitting
losses encountered in the star coupler. Using a similar
system as a basis for a power budget estimation [38] yields
an estimated optical loss in the control network of
approximately 8 dB + 3 dB x log2(n) (Table 4), which
would support approximately 128 processors per cluster
on the control channel if we assume a minimum required
receiver power of -30 dBm and a VCSEL power of 2 dBm.
Again, the optical implementation of the SOCN MAC
network has been thoroughly analyzed, but due to page
limitations the analysis could not be included in this article.
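A quick feasibility check corresponding to this estimate is sketched below in Python
(the 8 dB fixed loss plus 3 dB per splitting stage model and the power figures are the
ones given in the text above; everything else is an illustrative assumption).

import math

def star_coupler_loss_db(n):
    return 8.0 + 3.0 * math.log2(n)

def supports(n, vcsel_dbm=2.0, receiver_dbm=-30.0):
    budget = vcsel_dbm - receiver_dbm          # 32 dB optical power budget
    return star_coupler_loss_db(n) <= budget

assert supports(128)       # 8 + 3*7 = 29 dB <= 32 dB, consistent with the text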
Fig. 12. An optical implementation of a dedicated optical control bus
using an integrated polymer waveguide-based optical star coupler.
TABLE 4
Losses (in dB) for Each Component of the Optical Control Channel
8 CONCLUSIONS
This paper presents the design of a proposed optical
network that utilizes dense wavelength division multiplexing
for both intracluster and intercluster communication
links. This novel architecture fully utilizes the benefits
of wavelength division multiplexing to produce a highly
scalable, high bandwidth network with a low overall
latency that could be very cost effective to produce. A
design for the intracluster links, utilizing a simple grating
multiplexer/demultiplexer to implement a local free space
crossbar switch, was presented. A very cost effective
implementation of the intercluster fiber optic links was
also presented that utilizes wavelength division multiplexing
to greatly reduce the number of fibers required for
interconnecting the clusters, with wavelength reuse being
utilized over multiple fibers to provide a very high degree
of scalability. The fiber-based intercluster interconnects
presented could be configured to produce a fully connected
crossbar network consisting of tens to hundreds of
processors. They could also be configured to produce a
hybrid network of interconnected crossbars that could be
scalable to thousands of processors. Such a network
architecture could provide the high bandwidth, low latency
communications required to produce large distributed
shared memory parallel processing systems.
--R
Interconnection Networks and Engineering Approach
High Performance Computing: Challenges for Future
The Stanford Dash
Homogeneous Hierarchical Interconnection Structures
Cray Research Inc.
Hierarchical Multiprocessor Interconnection Networks with Area
Interconnection Network of Hypercubes
Parallel and Distributed Systems
for Multicomputer Systems
Scalable Photonic Architectures for High Performance Processor
Newsletters of the Computer Architecture Technical Committee
Computer Architecture: A
Quantitative Approach.
Optical Information Processing.
Optical Computer Architectures: The Application of
Optical Concepts to Next Generation Computers.
An Introduction to Photonic Switching Fabrics.
and Multicomputers
A Gradually Scalable Optical Interconnection
Network for Massively Parallel Computing
and Distributed Systems
and Choices
Versatile Network for Parallel Computation
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
--TR
--CTR
Lachlan L. H. Andrew, Fast simulation of wavelength continuous WDM networks, IEEE/ACM Transactions on Networking (TON), v.12 n.4, p.759-765, August 2004
Roger Chamberlain , Mark Franklin , Praveen Krishnamurthy , Abhijit Mahajan, VLSI Photonic Ring Multicomputer Interconnect: Architecture and Signal Processing Performance, Journal of VLSI Signal Processing Systems, v.40 n.1, p.57-72, May 2005
David Er-el , Dror G. Feitelson, Communication Models for a Free-Space Optical Cross-Connect Switch, The Journal of Supercomputing, v.27 n.1, p.19-48, January 2004
Ahmed Louri , Avinash Karanth Kodi, An Optical Interconnection Network and a Modified Snooping Protocol for the Design of Large-Scale Symmetric Multiprocessors (SMPs), IEEE Transactions on Parallel and Distributed Systems, v.15 n.12, p.1093-1104, December 2004
Peter K. K. Loh , W. J. Hsu, Fault-tolerant routing for complete Josephus cubes, Parallel Computing, v.30 n.9-10, p.1151-1167, September/October 2004
Nevin Kirman , Meyrem Kirman , Rajeev K. Dokania , Jose F. Martinez , Alyssa B. Apsel , Matthew A. Watkins , David H. Albonesi, Leveraging Optical Technology in Future Bus-based Chip Multiprocessors, Proceedings of the 39th Annual IEEE/ACM International Symposium on Microarchitecture, p.492-503, December 09-13, 2006 | multiprocessor interconnection;parallel architectures;scalability;wavelength division multiplexing;crossbars;optical interconnections;networks;hypercubes |
343378 | On Hoare logic and Kleene algebra with tests. | We show that Kleene algebra with tests (KAT) subsumes propositional Hoare logic (PHL). Thus the specialized syntax and deductive apparatus of Hoare logic are inessential and can be replaced by simple equational reasoning. In addition, we show that all relationally valid inference rules are derivable in KAT and that deciding the relational validity of such rules is PSPACE-complete. | INTRODUCTION
Hoare logic, introduced by C. A. R. Hoare in 1969 [Hoare 1969], was the first formal
system for the specification and verification of well-structured programs. This
pioneering work initiated the field of program correctness and inspired dozens of
technical articles [Cook 1978; Clarke et al. 1983; Cousot 1990]. For this achievement
among others, Hoare received the Turing Award in 1980.
Hoare logic uses a specialized syntax involving partial correctness assertions
The support of the National Science Foundation under grant CCR-9708915 is gratefully
acknowledged. This paper is a revised and expanded version of [Kozen 1999]. Address:
Department of Computer Science, Cornell University, Ithaca, NY 14853-7501, USA. Email:
(PCAs) of the form {b} p {c} and a deductive apparatus consisting of a system
of specialized rules of inference. Under certain conditions, these rules are relatively
complete [Cook 1978]; essentially, the propositional fragment of the logic can be
used to reduce partial correctness assertions to static assertions about the underlying
domain of computation.
In this paper we show that this propositional fragment, which we call propositional
Hoare logic (PHL), is subsumed by Kleene algebra with tests (KAT), an
equational algebraic system introduced in [Kozen 1997]. The reduction transforms
PCAs to ordinary equations and the specialized rules of inference to equational
implications (universal Horn formulas). The transformed rules are all derivable in
KAT by pure equational reasoning. More generally, we show that all Hoare-style
inference rules of the form
    {b1} p1 {c1}   · · ·   {bn} pn {cn}
    -----------------------------------                                  (1)
                 {b} p {c}
that are valid over relational models are derivable in KAT; this is trivially false for
PHL. We also show that deciding the relational validity of such rules is PSPACE -
complete.
A Kleene algebra with tests is defined simply as a Kleene algebra with an embedded
Boolean subalgebra. Possible interpretations include the various standard
relational and trace-based models used in program semantics, and KAT is complete
for the equational theory of these models [Kozen and Smith 1996]. This work shows
that the reasoning power represented by propositional Hoare logic is captured in a
concise, purely equational system KAT that is complete over various natural classes
of interpretations and whose exact complexity is known. Thus for all practical purposes
KAT can be used in place of the Hoare rules in program correctness proofs.
1.1 Related Work
Equational logic possesses a rich theory and is the subject of numerous papers
and texts [Taylor 1979]. Its power and versatility in program specification and
verification are widely recognized [O'Donnell 1985; Goguen and Malcolm 1996].
The equational nature of Hoare logic has been observed previously. Manes and
Arbib [Manes and Arbib 1986] formulate Hoare logic in partially additive semirings
and categories. The encoding of the PCA {b} p {c} as an equation is
observed there. They consider only relational models and the treatment of iteration
is infinitary. Bloom and Ésik [Bloom and Ésik 1991] reduce Hoare logic to the
equational logic of iteration theories. They do not restrict their attention to while
programs but capture all flowchart schemes, requiring extra notation for insertion,
tupling, and projection. Their development is done in the framework of category
theory. Semantic models consist of morphisms in algebraic theories, a particular
kind of category. Other related work can be found in [Bloom and Ésik 1992; Main
and Black 1990].
The encoding of the while programming constructs using the regular operators
and tests originated with propositional dynamic logic (PDL) [Fischer and Ladner
1979]. Although strictly less expressive than PDL, KAT has a number of advantages:
(i) it isolates the equational part of PDL, allowing program equivalence proofs to be
expressed in their natural form; (ii) it conveniently overloads the operators +, ·, 0, 1,
allowing concise and elegant algebraic proofs; (iii) it is PSPACE-complete [Cohen
et al. 1996], whereas PDL is EXPTIME-complete [Fischer and Ladner 1979]; (iv)
interpretations are not restricted to relational models, but may be any algebraic
structure satisfying the axioms; and (v) it admits various general and useful algebraic
constructions such as the formation of algebras of matrices over a KAT, which
among other things allows a natural encoding of automata.
Halpern and Reif [Halpern and Reif 1983] prove PSPACE-completeness of strict
deterministic PDL, but neither the upper nor the lower bound of our PSPACE -
completeness result follows from theirs. Not only are PDL semantics restricted
to relational models, but the arguments of [Halpern and Reif 1983] depend on an
additional nonalgebraic restriction: the relations interpreting atomic programs must
be single-valued. Without this restriction, even if only while programs are allowed,
PDL is exponential time hard. In contrast, KAT imposes no such restrictions.
In Section 2 we review the definitions of Hoare logic and Kleene algebra with
tests. In Section 3 we reduce PHL to KAT and derive the Hoare rules as theorems
of KAT. In Section 4 we strengthen this result to show that KAT is complete for
relationally valid rules of the form (1). In Section 5 we prove that the problem of
deciding the relational validity of such rules is PSPACE-complete.
2. PRELIMINARY DEFINITIONS
2.1 Hoare Logic
Hoare logic is a system for reasoning inductively about well-structured programs.
A comprehensive introduction can be found in [Cousot 1990].
A common choice of programming language in Hoare logic is the language of
while programs. The first-order version of this language contains a simple assignment
x := e, conditional test if b then p else q, sequential composition p; q, and
a looping construct while b do p.
The basic assertion of Hoare logic is the partial correctness assertion (PCA)
    {b} p {c}                                                             (2)
where b and c are formulas and p is a program. Intuitively, this statement asserts
that whenever b holds before the execution of the program p, then if and when p
halts, c is guaranteed to hold of the output state. It does not assert that p must
halt.
Semantically, programs p in Hoare logic and dynamic logic (DL) are usually interpreted
as binary input/output relations p^M on a domain of computation M, and
assertions are interpreted as subsets of M [Cook 1978; Pratt 1978]. The definition of
the relation p^M is inductive on the structure of p; for example, (p; q)^M is
the ordinary relational composition of the relations corresponding to p and q. The
meaning of the PCA (2) is the same as the meaning of the DL formula b → [p]c,
where → is ordinary propositional implication and the modal construct [p]c is interpreted
in the model M as the set of states s such that for all (s, t) in p^M, the
output state t satisfies c.
Hoare logic provides a system of specialized rules for deriving valid PCAs, one
rule for each programming construct. The verification process is inductive on the
structure of programs. The traditional Hoare inference rules are:
Assignment rule.
    {b[x/e]} x := e {b}
Composition rule.
    {b} p {c}    {c} q {d}
    ----------------------
         {b} p; q {d}
Conditional rule.
    {b ∧ c} p {d}    {¬b ∧ c} q {d}
    -------------------------------
    {c} if b then p else q {d}
While rule.
         {b ∧ c} p {c}
    -----------------------------
    {c} while b do p {¬b ∧ c}
Weakening rule.
    b' → b    {b} p {c}    c → c'
    -----------------------------
           {b'} p {c'}
Propositional Hoare logic (PHL) consists of atomic proposition and program sym-
bols, the usual propositional connectives, while program constructs, and PCAs
built from these. Atomic programs are interpreted as binary relations on a set M
and atomic propositions are interpreted as subsets of M. The deduction system of
PHL consists of the composition, conditional, while, and weakening rules (4)-(7)
and propositional logic. The assignment rule (3) is omitted, since there is no first-order
relational structure over which to interpret program variables; in practice, its
role is played by PCAs over atomic programs that are postulated as assumptions.
In PHL, we are concerned with the problem of determining the validity of rules
of the form
    {b1} p1 {c1}   · · ·   {bn} pn {cn}
    -----------------------------------                                  (8)
                 {b} p {c}
over relational interpretations. The premises {bi} pi {ci} take the place of the
assignment rule (3) and are an essential part of the formulation.
2.2 Kleene Algebra
Kleene algebra (KA) is the algebra of regular expressions [Kleene 1956; Conway
1971]. The axiomatization used here is from [Kozen 1994]. A Kleene algebra is an
algebraic structure (K, +, ·, *, 0, 1) that is an idempotent semiring under +, ·, 0, 1
satisfying
    1 + pp* ≤ p*                                                          (9)
    1 + p*p ≤ p*                                                          (10)
    q + pr ≤ r  →  p*q ≤ r                                                (11)
    q + rp ≤ r  →  qp* ≤ r                                                (12)
where ≤ refers to the natural partial order on K:
    p ≤ q  if and only if  p + q = q.
The operation + gives the supremum with respect to the natural order ≤. Instead
of (11) and (12), we might take the equivalent axioms
    pr ≤ r  →  p*r ≤ r          rp ≤ r  →  rp* ≤ r.
These axioms say essentially that * behaves like the Kleene asterate operator of
formal language theory or the reflexive transitive closure operator of relational
algebra.
Kleene algebra is a versatile system with many useful interpretations. Standard
models include the family of regular sets over a finite alphabet; the family of binary
relations on a set; and the family of n x n matrices over another Kleene algebra.
Other more unusual interpretations include the min,+ algebra used in shortest
path algorithms and models consisting of convex polyhedra used in computational
geometry [Iwano and Steiglitz 1990].
The following are some typical identities that hold in all Kleene algebras:
    (p*q)*p* = (p + q)*        p(qp)* = (pq)*p        (p*)* = p*.
All the operators are monotone with respect to ≤. In other words, if p ≤ q, then
pr ≤ qr, rp ≤ rq, p + r ≤ q + r, and p* ≤ q*.
The completeness result of [Kozen 1994] says that all true identities between
regular expressions interpreted as regular sets of strings are derivable from the
axioms of Kleene algebra. In other words, the algebra of regular sets of strings over
the finite alphabet Σ is the free Kleene algebra on generators Σ. The axioms are
also complete for the equational theory of relational models.
See [Kozen 1994] for a more thorough introduction.
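As a quick illustration of these axioms (not code from the paper; the set-of-pairs
representation of binary relations is an assumption), the following Python sketch
builds the Kleene algebra of relations on a small set and checks one of the identities
listed above.

def compose(p, q):
    return {(u, z) for (u, v) in p for (w, z) in q if v == w}

def star(p, universe):
    # reflexive transitive closure: the least r with identity and p;r contained in r
    r = {(a, a) for a in universe}
    while True:
        new = r | compose(p, r)
        if new == r:
            return r
        r = new

U = {0, 1, 2, 3}
p = {(0, 1), (1, 2)}
q = {(2, 3)}
# the identity (p + q)* = (p* q)* p* holds in this model:
lhs = star(p | q, U)
rhs = compose(star(compose(star(p, U), q), U), star(p, U))
assert lhs == rhs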
2.3 Kleene Algebra with Tests
Kleene algebras with tests (KAT) were introduced in [Kozen 1997] and their theory
further developed in [Kozen and Smith 1996; Cohen et al. 1996]. A Kleene algebra
with tests is just a Kleene algebra with an embedded Boolean subalgebra. That is,
it is a two-sorted structure (K, B, +, ·, *, ¯, 0, 1) such that
- (K, +, ·, *, 0, 1) is a Kleene algebra,
- (B, +, ·, ¯, 0, 1) is a Boolean algebra, and
- B ⊆ K.
The Boolean complementation operator ¯ is defined only on B. Elements of B are
called tests. The letters p, q, r, s denote arbitrary elements of K and a, b, c denote
tests.
This deceptively simple definition actually carries a lot of information in a concise
package. The operators +, ·, 0, 1 each play two roles: applied to arbitrary elements
of K, they refer to nondeterministic choice, composition, fail, and skip, respectively;
and applied to tests, they take on the additional meaning of Boolean disjunction,
conjunction, falsity, and truth, respectively. These two usages do not conflict-for
example, sequential testing of b and c is the same as testing their conjunction-and
their coexistence admits considerable economy of expression.
The encoding of the while program constructs is as in PDL [Fischer and Ladner
1979]:
    p; q  =  pq                                                           (19)
    if b then p else q  =  bp + ¬b q                                      (20)
    while b do p  =  (bp)* ¬b                                             (21)
For applications in program verification, the standard interpretation would be a
Kleene algebra of binary relations on a set and the Boolean algebra of subsets of
the identity relation. One could also consider trace models, in which the Kleene
elements are sets of traces (sequences of states) and the Boolean elements are sets
of states (traces of length 0). As with KA, one can form the algebra Mat(K; B; n)
of n \Theta n matrices over a KAT (K; B); the Boolean elements of this structure are the
diagonal matrices over B. There is also a language-theoretic model that plays the
same role in KAT that the regular sets of strings over a finite alphabet play in KA,
namely the family of regular sets of guarded strings over a finite alphabet \Sigma with
guards from a set B. This is the free KAT on generators \Sigma; B; that is, the equational
theory of this structure is exactly the set of all equational consequences of the KAT
axioms. Moreover, KAT is complete for the equational theory of relational models
[Kozen and Smith 1996].
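The encodings (19)-(21) can be written down mechanically. The following Python
sketch (an illustration only; the tuple representation of KAT terms is an assumption,
not notation from the paper) desugars while programs into KAT terms.

def seq(p, q):   return ("seq", p, q)        # p ; q  =  pq          (19)
def plus(p, q):  return ("plus", p, q)
def star(p):     return ("star", p)
def neg(b):      return ("not", b)

def if_then_else(b, p, q):                   # (20)  bp + !b q
    return plus(seq(b, p), seq(neg(b), q))

def while_do(b, p):                          # (21)  (bp)* !b
    return seq(star(seq(b, p)), neg(b))

# while b do (if c then p else q)  becomes  (b(cp + !c q))* !b
prog = while_do("b", if_then_else("c", "p", "q"))
assert prog == ("seq", ("star", ("seq", "b", ("plus", ("seq", "c", "p"),
                                              ("seq", ("not", "c"), "q")))),
                ("not", "b"))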
3. KAT AND HOARE LOGIC
In this section we encode Hoare logic in KAT and derive the Hoare composition,
conditional, while, and weakening rules as theorems of KAT. We will strengthen
this result in Section 4 by showing that KAT can derive all relationally valid rules
of the form (8).
The PCA {b} p {c} is encoded in KAT by the equation
    bp¬c = 0.                                                             (22)
Intuitively, this says that the program p with preguard b and postguard ¬c has no
halting execution. An equivalent formulation is
    bp = bpc,                                                             (23)
which says intuitively that testing c after executing bp is always redundant.
The equivalence of (22) and (23) can be argued easily in KAT. This equivalence
was previously observed by Manes and Arbib [Manes and Arbib 1986]. Assuming
(22),
    bp = bp(c + ¬c)              by the axiom a · 1 = a and Boolean algebra
       = bpc + bp¬c              by distributivity
       = bpc                     by (22) and the axiom a + 0 = a.
Conversely, assuming (23),
    bp¬c = bpc¬c                 by (23)
         = bp · 0                by associativity and Boolean algebra
         = 0                     by the axiom a · 0 = 0.
The equation (23) is equivalent to the inequality bp ≤ bpc, since the reverse
inequality is a theorem of KAT; it follows immediately from the axiom c ≤ 1 of
Boolean algebra and monotonicity of multiplication.
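The equivalence just argued can also be checked concretely in a small relational
model; the Python sketch below (illustrative only; the example relations are arbitrary
choices, not from the paper) represents tests as subsets of the identity relation and
confirms that bp¬c = 0 and bp = bpc agree.

def compose(p, q):
    return {(u, z) for (u, v) in p for (w, z) in q if v == w}

U = {0, 1, 2}

def test(S):                      # a test is a subset of the identity relation
    return {(a, a) for a in S}

b, c = test({0, 1}), test({1, 2})
not_c = test(U - {1, 2})

for p in [{(0, 1), (1, 2), (2, 0)},    # satisfies {b} p {c}
          {(0, 1), (1, 0)}]:           # violates it
    bp = compose(b, p)
    eq22 = compose(bp, not_c) == set()     # b p (not c) = 0
    eq23 = bp == compose(bp, c)            # b p = b p c
    assert eq22 == eq23                    # the two encodings agree either way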
Using (19)-(21) and (23), the Hoare rules (4)-(7) take the following form:
Composition rule:
    bp = bpc  ∧  cq = cqd  →  bpq = bpqd                                  (24)
Conditional rule:
    bcp = bcpd  ∧  ¬bcq = ¬bcqd  →  c(bp + ¬bq) = c(bp + ¬bq)d            (25)
While rule:
    bcp = bcpc  →  c(bp)*¬b = c(bp)*¬b¬bc                                 (26)
Weakening rule:
    b' ≤ b  ∧  bp = bpc  ∧  c ≤ c'  →  b'p = b'pc'                        (27)
These implications are to be interpreted as universal Horn formulas; that is, the
variables are implicitly universally quantified. To establish the adequacy of the
translation, we show that (24)-(27) encoding the Hoare rules (4)-(7) are theorems
of KAT.
Theorem 3.1. The universal Horn formulas (24)-(27) are theorems of KAT.
Proof. First we derive (24). Assuming the premises
    bp = bpc                                                              (28)
    cq = cqd,                                                             (29)
we have
    bpq = bpcq               by (28)
        = bpcqd              by (29)
        = bpqd               by (28).
Thus the implication (24) holds.
For (25), assume the premises
    bcp = bcpd                                                            (30)
    ¬bcq = ¬bcqd.                                                         (31)
Then
    c(bp + ¬bq) = bcp + ¬bcq             by distributivity and commutativity of tests
                = bcpd + ¬bcqd           by (30) and (31)
                = cbpd + c¬bqd           by commutativity of tests
                = c(bp + ¬bq)d           by distributivity.
For (26), by trivial simplifications it suffices to show
    cbp = cbpc  →  c(bp)* = c(bp)*c.
Assume
    cbp = cbpc.                                                           (32)
By (12) we need only show
    c + c(bp)*cbp ≤ c(bp)*c.
But
    c + c(bp)*cbp ≤ c + c(bp)*bpc         by (32) and monotonicity
                  ≤ c(bp)*c               by (10).
Finally, for (27), we can rewrite the rule as
which follows immediately from the monotonicity of multiplication.
4. A COMPLETENESS THEOREM
Theorem 3.1 says that for any proof rule of PHL, or more generally, for any rule of
the form
derivable in PHL, the corresponding equational implication (universal Horn formula
is a theorem of KAT. In this section we strengthen this result to show (Corollary
4.2) that all universal Horn formulas of the form
that are relationally valid (true in all relational models) are theorems of KAT; in
other words, KAT is complete for universal Horn formulas of the form (34) over
relational interpretations. This result subsumes Theorem 3.1, since the Hoare rules
are relationally valid. Corollary 4.2 is trivially false for PHL; for example, the rule
is not derivable, since the Hoare rules only increase the length of programs.
In [Kozen and Smith 1996], based on a technique of Cohen [Cohen 1994] for KA,
we showed that a formula of KAT of the form (34) is valid over all models iff it is
valid over *-continuous models; moreover, its validity over either class of models is
equivalent to the validity of a pure equation. We strengthen this result by showing
that this equivalence still holds when models are further restricted to relational
models. The deductive completeness of KAT over relationally valid formulas of the
form (34) follows as a corollary.
Let T \Sigma;B denote the set of terms of the language of KAT over primitive propositions
and primitive tests g. Let r
. The formula (34) is equivalent
to Consider the four conditions
KAT ffl
It does not matter whether (38) is preceded by KAT, KAT , or REL, since the equational
theories of these classes coincide [Kozen and Smith 1996]. It was shown in
[Kozen and Smith 1996] that the metastatements (35), (36), and (38) are equivalent.
We wish to add (37) to this list.
The algebra G \Sigma;B of regular sets of guarded strings over \Sigma; B and the standard
were defined in [Kozen and Smith 1996]. We briefly
review the definitions here. An atom of B is a term of the form c 1
is either b i or b i . An atom represents an atom of the free Boolean algebra generated
by B. Atoms are denoted ff; guarded string over \Sigma; B is a term of the
where each p i 2 \Sigma and each fi i is an atom. This includes the case
are guarded strings. If xff; fiy are guarded strings and product is
xffy. If ff 6= fi, then the product does not exist. We can form the Kleene algebra
of all sets of guarded strings with operations
A
A n
fatoms of Bg:
This becomes a KAT by taking the Boolean algebra of tests to be the powerset
of the set 1. The map G is defined to be the unique homomorphic map on T \Sigma;B
extending
are atoms of Bg; a 2 \Sigma
denotes that b occurs positively in fi. The algebra G \Sigma;B is defined to be
the image of T \Sigma;B under the map G. It was shown in [Kozen and Smith 1996] that
G \Sigma;B is the free KAT on generators \Sigma; B in the sense that for any terms s; t 2 T \Sigma;B ,
Note that G(u) is the set of all guarded strings over \Sigma; B.
Theorem 4.1. The metastatements (35)-(38) are equivalent.
Proof. Since REL ' KAT ' KAT, the implications
trivially. Also, it is clear that
therefore as well. It thus remains to show that (37) ! (38). Writing
equations as pairs of inequalities, it suffices to show
To show (40), we construct a relational model R on states G(u) \Gamma G(uru). Note
that if x;
then we are done, since in that case G(p) ' G(u) ' G(uru) and the
right-hand side of (40) follows immediately from (39). Similarly, if G(1) ' G(uru),
then G(u) ' G(uuru) ' G(uru) and the same argument applies. We can therefore
assume without loss of generality that both G(u) \Gamma G(uru) and are
nonempty.
The atomic symbols are interpreted in R as follows:
The interpretations of compound expressions are defined inductively in the standard
way for relational models.
We now show that for any t 2 T \Sigma;B ,
by induction on the structure of t. For primitive programs a and tests b,
For the constants 0 and 1, we have
For compound expressions,
We now show (40). Suppose the left-hand side holds. By (41),
By the left-hand side of (40), R(p) ' R(q). In particular, for any x 2 G(p)\GammaG(uru),
G(uru). But this
It follows from (39) that the right-hand side of (40) holds.
Corollary 4.2. KAT is deductively complete for formulas of the form (34) over
relational models.
Proof. If the formula (34) is valid over relational models, then by Theorem 4.1,
holds. Since KAT is complete for valid equations,
But clearly
therefore
5. COMPLEXITY
As defined in Section 2.1, the decision problem for PHL is to determine whether
a given rule of the form (8) is valid over all relational interpretations. Note that
PSPACE-hardness does not follow immediately from the PSPACE-hardness of the
equational theory, since the conclusion {b} p {c} is of a restricted form.
Indeed, E. Cohen has shown [Cohen 1999] that the complexity of valid equations
of the form in KAT is NP-complete.
Theorem 5.1. The decision problem for PHL is PSPACE-complete.
Proof. The reduction of Sections 3 and 4 transforms the decision problem for
PHL to the problem of the universal validity of Horn formulas of the form (34). As
shown in Section 4, this can be reduced to testing the validity of a single equation
without premises. The equational theory of KAT is decidable in PSPACE [Cohen
et al. 1996], thus the decision problem for PHL is in PSPACE.
We now show that the problem is PSPACE-hard. This holds even if the premises
are restricted to refer only to atomic programs, and even if they are
restricted to refer only to a single atomic program p. We give a direct encoding
of the computation of a polynomial space-bounded one-tape deterministic Turing
machine in an instance of the decision problem for PHL. Our approach is similar to
[Halpern and Reif 1983], using the premises {bi} pi {ci} to circumvent the determinacy
assumption. E. Cohen [Cohen 1999] has given an alternative hardness proof
using the universality problem for regular expressions.
Consider the computation of a polynomially-space-bounded one-tape deterministic
Turing machine M on some input x of length n. Let N be a polynomial bound
on the amount of space used by M on input x. Let Q be the set of states of M , let
Γ be its tape alphabet, let s be its start state, and let t be its unique halt state. We
use polynomially many atomic propositional symbols with the following intuitive
meanings:
    T_{i,a}: "the ith tape cell currently contains symbol a,"  a ∈ Γ, 0 ≤ i ≤ N,
    H_i:     "the tape head is currently scanning the ith tape cell,"  0 ≤ i ≤ N,
    S_q:     "the machine is currently in state q,"  q ∈ Q.
Let p be an atomic program. Intuitively, p represents the action of one step of M .
We will devise a set of assumptions φ1, ..., φn that will say that p faithfully models
the action of M. The PCA ψ will say that if started in state s on input x, the
program
    while the current state is not t do p
fails. The PCA ψ will be a logical consequence of φ1, ..., φn iff M does not halt
on input x.
The start configuration of M on x consists of a left endmarker ' written on tape
cell 0, the input x = a1 · · · an written on cells 1 through n, and the remainder of
the tape filled with the blank symbol ⊔ out to the Nth cell. The machine starts in
state s scanning the left endmarker. This situation is captured by the propositional
formula
    start  =  S_s ∧ H_0 ∧ T_{0,'} ∧ ⋀_{1≤i≤n} T_{i,a_i} ∧ ⋀_{n<i≤N} T_{i,⊔}.
We will need a formula to ensure that M is in at most one state, that it is
scanning at most one tape cell, and that there is at most one symbol written on
each tape cell:
    format  =  ⋀_{p≠q} ¬(S_p ∧ S_q)  ∧  ⋀_{i≠j} ¬(H_i ∧ H_j)  ∧  ⋀_{i} ⋀_{a≠b} ¬(T_{i,a} ∧ T_{i,b}).
We include the PCA
    {format} p {format}
as one of the assumptions φ_i to ensure that format is an invariant of p and therefore
preserved throughout the simulation of M .
Suppose the transition function of M says that when scanning a cell containing
symbol a in state p, M prints the symbol b on that cell, moves right, and enters
state q. We capture this constraint by the family of PCAs
    {S_p ∧ H_i ∧ T_{i,a}}  p  {S_q ∧ H_{i+1} ∧ T_{i,b}},    0 ≤ i < N.
All these PCAs are included for each possible transition of the machine; there are
only polynomially many in all.
We must also ensure that the symbols on tape cells not currently being scanned
do not change; this is accomplished by the family of PCAs
    {¬H_i ∧ T_{i,a}}  p  {T_{i,a}},    0 ≤ i ≤ N,  a ∈ Γ.
These are the assumptions φ1, ..., φn of our instance of the decision problem. It
is apparent that under any interpretation of p satisfying these PCAs, successive
executions of p starting from any state satisfying start - format move only to
states whose values for the atomic propositions S q , T i;a , and H i model valid configurations
of M , and the values change in such a way as to model the computation
of M . Thus there is a reachable state satisfying S t iff M halts on x.
We take as our conclusion ψ the PCA
    {start ∧ format}  while ¬S_t do p  {false},
which says intuitively that when started in the start configuration, repeatedly executing
p will never cause M to enter state t. The PCA ψ is therefore a logical
consequence of φ1, ..., φn iff M does not halt on x.
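As a rough illustration of how such an instance could be generated mechanically,
consider the Python sketch below (the string syntax for PCAs, the symbols & and ~
standing for ∧ and ¬, and the toy transition are assumptions made for this sketch,
not part of the paper's construction).

def transition_pcas(state_p, sym_a, sym_b, state_q, N):
    # "when scanning a in state p: print b, move right, enter q"
    return [f"{{S_{state_p} & H_{i} & T_{i}_{sym_a}}} p "
            f"{{S_{state_q} & H_{i+1} & T_{i}_{sym_b}}}" for i in range(N)]

def frame_pcas(alphabet, N):
    # unscanned cells keep their contents
    return [f"{{~H_{i} & T_{i}_{a}}} p {{T_{i}_{a}}}"
            for i in range(N + 1) for a in alphabet]

premises = transition_pcas("s", "0", "1", "t", N=3) + frame_pcas(["0", "1", "#"], N=3)
conclusion = "{start & format} while ~S_t do p {false}"
print(len(premises), "premises;", "conclusion:", conclusion)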
ACKNOWLEDGMENTS
I thank Krzysztof Apt, Steve Bloom, Ernie Cohen, Zoltán Ésik, Joe Halpern, Greg
Morrisett, Moshe Vardi, and Thomas Yan for valuable comments on an earlier
version of this paper [Kozen 1999].
--R
Hypotheses in Kleene algebra.
Available as ftp://ftp.
Personal communication.
The complexity of Kleene algebra with tests.
Technical Report 96-1598 (July)
Regular Algebra and Finite Machines.
Soundness and completeness of an axiom system for program verification.
Methods and logics for proving programs.
Propositional dynamic logic of regular programs.
Algebraic Semantics of Imperative Programs.
Foundations of Computing.
The propositional dynamic logic of deterministic
An axiomatic basis for computer programming.
A semiring on convex polygons and zero-sum cycle problems
Representation of events in nerve nets and finite automata.
Shannon and J.
A completeness theorem for Kleene algebras and the algebra of regular events.
Kleene algebra with tests.
On Hoare logic and Kleene algebra with tests.
Kleene algebra with tests: Completeness and decidability.
Semantic models for total correctness and fairness.
Algebraic Approaches to Program Semantics.
Equational Logic as a Programming Language.
A practical decision method for propositional dynamic logic.
Theory of Comput.
Equational logic.
--TR
Equational logic as a programming language
Algebraic approaches to program semantics
Semantic models for total correctness and fairness
A semiring on convex polygons and zero-sum cycle problems
Methods and logics for proving programs
Floyd-Hoare logic in iteration theories
A completeness theorem for Kleene algebras and the algebra of regular events
Kleene algebra with tests
Effective Axiomatizations of Hoare Logics
An axiomatic basis for computer programming
Algebraic Semantics of Imperative Programs
Program Correctness and Matricial Iteration Theories
Kleene Algebra with Tests
A practical decision method for propositional dynamic logic (Preliminary Report)
The Complexity of Kleene Algebra with Tests
--CTR
Cohen , Dexter Kozen, A note on the complexity of propositional Hoare logic, ACM Transactions on Computational Logic (TOCL), v.1 n.1, p.171-174, July 2000
Dexter Kozen, Some results in dynamic model theory, Science of Computer Programming, v.51 n.1-2, p.3-22, May 2004
Dexter Kozen , Jerzy Tiuryn, Substructural logic and partial correctness, ACM Transactions on Computational Logic (TOCL), v.4 n.3, p.355-378, July
Bernhard Mller , Georg Struth, Algebras of modal operators and partial correctness, Theoretical Computer Science, v.351 n.2, p.221-239, 21 February 2006
J. von Wright, Towards a refinement algebra, Science of Computer Programming, v.51 n.1-2, p.23-45, May 2004
Jules Desharnais , Bernhard Mller , Georg Struth, Kleene algebra with domain, ACM Transactions on Computational Logic (TOCL), v.7 n.4, p.798-833, October 2006 | kleene algebra;hoare logic;dynamic logic;specification;kleene algebra with tests |
343386 | Locality of order-invariant first-order formulas. | A query is local if the decision of whether a tuple in a structure satisfies this query only depends on a small neighborhood of the tuple. We prove that all queries expressible by order-invariant first-order formulas are local. | Introduction
One of the fundamental properties of first-order formulas is their locality, which means
that the decision of whether in a fixed structure a formula holds at some point (or at a
tuple of points) only depends on a small neighborhood of this point (tuple). This result,
proved by Gaifman [5], gives a good intuition for the expressive power of first-order
logic. In particular, it provides very convenient proofs that certain queries cannot be expressed
by a first-order formula. For example, to decide whether there is a path between
two vertices of a graph it clearly does not suffice to look at small neighborhoods of these
vertices. Hence by locality, s-t-connectivity is not expressible in first-order logic. Re-
cently, Libkin and others [3, 8-10] systematically started to explore locality as tool for
proving inexpressibility results. The ultimate goal of this line of research would have
been to separate complexity classes, in particular to separate TC 0 , that is, the class of
languages that can be recognized by (uniform) families of bounded-depth circuits with
majority gates, from LOGSPACE. However, a recent result of Hella [7], showing that
even uniform AC 0 contains non-local queries, has destroyed these hopes.
Nevertheless, locality remains an important tool for proving inexpressibility results
for query languages. In database theory, one often faces a situation where the physical
representation of the database, which we consider as a relational structure, induces an
order on the structure, but this order is hidden to the user. The user may use the order
in her queries, but the result of the query should not depend on the given order. In other
words, the user may use the fact that some order is there, but since she does not know
which one she cannot make her query depend on any particular order. It may seem
that this does not help her, but actually there are first-order formulas that use the order
to express order-invariant queries that cannot be expressed without the order. This is an
unpublished result due to Gurevich [6]; for examples of such queries we refer the reader
to [1, 2] and Example 6 (due to [4]).
Formally, we say that a first-order formula φ(x̄) whose vocabulary contains the
order symbol ≤ is order-invariant on a class C of structures if for all structures A ∈ C,
tuples ā of elements of A, and linear orders ≤_1, ≤_2 on A we have: φ(ā) holds in
(A, ≤_1) if, and only if, φ(ā) holds in (A, ≤_2). It is an easy consequence of the interpolation
theorem that if a formula is order-invariant on the class of all structures, it is equivalent
to a first-order formula that does not use the ordering. This is no longer true when
restricted to the class of all finite structures, or to a class consisting of a single infinite
structure. Unfortunately, these are the cases showing up naturally in applications to
computer science.
We prove that for all classes C of structures the first-order formulas that are order-
invariant on C can only define queries that are local on all structures in C. As for (pure)
first-order logic, this property of being local gives us a good intuition about the expressive
power of order-invariant first-order formulas and a simple method to prove
inexpressibility results.
The paper is organized as follows: After the preliminaries, we prove the locality
of order-invariant first-order formulas with one free variable in Section 3. This is the
crucial step towards our main result. In the following section we reduce the case of
formulas with arbitrarily many variables to the one-variable case.
We would like to thank Juha Nurmonen for pointing us to the problem and Clemens
Lautemann for fruitful discussions about its solution.
Preliminaries
A vocabulary is a set τ containing finitely many relation and constant symbols. A τ-
structure A consists of a set A, called the universe of A, an interpretation R^A ⊆ A^r
for each r-ary relation symbol R ∈ τ, and an interpretation c^A ∈ A of each constant
symbol c ∈ τ. For example, a graph can be considered as an {E}-structure, where
E is a binary relation symbol.
An ordered structure is a structure whose vocabulary contains the distinguished
binary relation symbol ≤ which is interpreted as a linear order of the universe.
Z denotes the set of integers.
Occasionally, we need to consider strings as finite structures. For each l ≥ 1, we let
σ_l denote the vocabulary {≤, P_1, ..., P_l, min, max} with unary relation symbols P_1, ..., P_l
and constant symbols min and max. We represent a string s = s_1 ... s_n over an l-letter
alphabet by the ordered σ_l-structure with universe [1, n], where P_j
is interpreted as {i | s_i = j}, for every j, and min = 1, max = n. In our notation we
do not distinguish between the string s and its representation as a finite structure s. For
a given l > 0 we refer to such strings as l-strings.
If A is a structure and B ⊆ A a subset that contains all constants of A, then the
(induced) substructure of A with universe B is denoted by ⟨B⟩^A.
Let σ ⊆ τ be vocabularies. The σ-reduct of a τ-structure A, denoted by A|_σ, is
the σ-structure with universe A in which all symbols of σ are interpreted as in A. On
the other hand, each τ-structure A such that A|_σ = B is called a τ-expansion of B.
For a σ-structure B, relations R_1, ..., R_m on B (where R_i is k_i-ary) and elements
c_1, ..., c_n ∈ B, we write (B, R_1, ..., R_m, c_1, ..., c_n) for the expansion of B to a suitable vocabulary
τ ⊇ σ that contains, in addition to the symbols in σ, a new k_i-ary relation symbol for
each R_i and new constant symbols for the c_j.
Let C be a class of τ-structures. A k-ary query on C is a mapping ρ that
assigns a k-ary relation ρ(A) on A to each structure A ∈ C such that for isomorphic τ-
structures A, B ∈ C each isomorphism f between A and B is also an isomorphism
between the expanded structures (A, ρ(A)) and (B, ρ(B)). A Boolean (or 0-ary) query on
C is just a subclass of C that is closed under isomorphism.
2.1 Types and games
Equivalence in first-order logic can be characterized in terms of the following Ehrenfeucht-Fraisse (EF) game.
Definition 1. Let r ≥ 0 and A, A' be structures of the same vocabulary. The r-round EF-
game on A, A' is played by two players called the spoiler and the duplicator. In each of
the r rounds of the game the spoiler either chooses an element v_i of A or an element v'_i
of A'. The duplicator answers by choosing an element v'_i of A' or an element v_i of A,
respectively.
The duplicator wins the game if the mapping that maps v_i to v'_i (for 1 ≤ i ≤ r) and each
constant c^A to the corresponding constant c^{A'}
is a partial isomorphism, that is, an isomorphism
between the substructure of A generated by its domain and the substructure
of A' generated by its image.
It is clear how to define the notion of a winning strategy for the duplicator in the
game.
The quantifier-depth of a first-order formula is the maximal number of nested quantifiers
in the formula. The r-type of a structure A is the set of all first-order sentences of
quantifier-depth at most r satisfied by A. It is a well-known fact that for each vocabulary
τ there is only a finite number of distinct r-types of τ-structures (simply because there
are only finitely many inequivalent first-order formulas of vocabulary τ and quantifier-
depth at most r). We write A ≡_r A' to denote that A and A' have the same r-type.
Theorem 2. Let r ≥ 0 and A, A' be structures of the same vocabulary. Then A ≡_r A' if,
and only if, the duplicator has a winning strategy for the r-round EF-game on A, A'.
The following two simple examples, both needed later, may serve as an exercise for
the reader in proving non-expressibility results using the EF-game.
Example 3. Let r ≥ 1 and m, n > 2^r. Using the r-round EF-game, it is not hard to see
that the strings 1^m and 1^n have the same r-type. This implies, for example,
that the class {1^n | n even} cannot be defined by a first-order sentence.
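For concreteness, the game can be checked mechanically on very small structures. The following minimal Python sketch (an illustration only; the encoding of strings as structures and the choice of 1^8 versus 1^9 are assumptions made for this example) tests by brute force whether the duplicator has a winning strategy in the r-round EF-game, with the constants min and max handled as pre-placed pebbles.

    from itertools import product

    def is_partial_iso(A, B, pairs):
        # pairs: list of (a, b) pebble pairs, including the constants min and max
        for (a1, b1), (a2, b2) in product(pairs, repeat=2):
            if (a1 == a2) != (b1 == b2):
                return False                                   # not a well-defined injection
            if ((a1, a2) in A['order']) != ((b1, b2) in B['order']):
                return False                                   # order not preserved
        for P_A, P_B in zip(A['preds'], B['preds']):
            if any((a in P_A) != (b in P_B) for a, b in pairs):
                return False                                   # letter predicates not preserved
        return True

    def duplicator_wins(A, B, pairs, r):
        if not is_partial_iso(A, B, pairs):
            return False
        if r == 0:
            return True
        return (all(any(duplicator_wins(A, B, pairs + [(a, b)], r - 1)
                        for b in B['univ']) for a in A['univ'])    # spoiler moves in A
                and
                all(any(duplicator_wins(A, B, pairs + [(a, b)], r - 1)
                        for a in A['univ']) for b in B['univ']))   # spoiler moves in B

    def one_string(n):
        # the l-string 1^n as a structure: universe [1, n], linear order, P_1 = [1, n]
        univ = list(range(1, n + 1))
        return {'univ': univ,
                'order': {(i, j) for i in univ for j in univ if i <= j},
                'preds': [set(univ)]}

    A, B = one_string(8), one_string(9)
    # min and max are pre-placed pebbles, so the initial map already respects the constants.
    print(duplicator_wins(A, B, [(1, 1), (8, 9)], 2))   # True: 1^8 and 1^9 have the same 2-type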
Example 4. We may consider Boolean algebras as structures of vocabulary {⊔, ⊓, ¬,
0, 1}. In particular, let P(n) denote the power-set algebra over [1, n]. It is not hard to
prove that for each r ≥ 1 there exists an n such that P(n) ≡_r P(n + 1). Thus the class
{P(n) | n even} cannot be defined by a first-order sentence.
In some applications, it is convenient to modify the EF-game as follows: Instead of
choosing an element in a round of the game, the spoiler may also decide to skip the
round. In this case, v_i and v'_i remain undefined; we may also write v_i = v'_i = ⊥. Of
course undefined v_i's are not considered in the decision whether the duplicator wins. It
is obvious that the duplicator has a winning strategy for the r-round modified EF-game
on A, A' if, and only if, she has a winning strategy for the original r-round EF-game on
A, A'.
2.2 Order invariant first-order logic
Definition 5. Let τ be a vocabulary that does not contain ≤ and C a class of τ-struc-
tures. A formula φ(x_1, ..., x_k) of vocabulary τ ∪ {≤} is order-invariant on C if for all
A ∈ C, all ā ∈ A^k, and all linear orders ≤_1, ≤_2 of A we have (A, ≤_1) |= φ(ā) if, and only if, (A, ≤_2) |= φ(ā).
If φ is order-invariant on the class {A} we also say that φ is order-invariant on A.
To simplify our notation, if a τ ∪ {≤}-formula φ(x̄) is order-invariant on a class
C of τ-structures and A ∈ C, ā ∈ A^k, we write A |=_inv φ(ā) to denote that for some,
hence for all, orderings ≤ on A we have (A, ≤) |= φ(ā). Furthermore, we say that φ(x̄)
defines the query A ↦ {ā | A |=_inv φ(ā)} on C.^1 We can easily extend the definition
to Boolean queries.
Let us emphasize that, although order-invariant first-order logic sounds like a restriction
of pure first-order logic, it is actually an extension: There are queries on the
class of all finite structures that are definable by an order-invariant first-order formula,
but not by a pure first-order formula [6]. The following example can be found in [4].
Example 6. There is an order-invariant first-order sentence φ of vocabulary
{⊔, ⊓, ¬, 0, 1} ∪ {≤} that defines the query {P(n) | n even} on the class of all finite
Boolean algebras. By Example 4, this query is not definable in first-order logic.
Similarly, if we let A be the disjoint union of all structures P(n), for n ≥ 1, then
the unary query "x belongs to a component with an even number of atoms" on {A} is
definable by an order-invariant first-order formula, but not by a plain first-order formula.
2.3 Local formulas
Let A be a τ-structure. The Gaifman graph of A is the graph with universe A where
a ≠ b are adjacent if there is a relation symbol R ∈ τ and a tuple c̄ such that R^A c̄ holds
and both a and b occur in c̄.
The distance d^A(a, b) between two elements a, b ∈ A is defined to be the length of
the shortest path from a to b in the Gaifman graph of A; if no such path exists we let
d^A(a, b) := ∞. The δ-ball around a ∈ A is defined to be the set B^A_δ(a) := {b |
d^A(a, b) ≤ δ}, and the δ-sphere is the set S^A_δ(a) := {b | d^A(a, b) = δ}. If A is
clear from the context, we usually omit the superscript A.
For sets B, C ⊆ A we let d(B, C) := min{d(b, c) | b ∈ B, c ∈ C}, and we write B_δ(B)
and S_δ(B) for the sets {c | d(B, c) ≤ δ} and {c | d(B, c) = δ}, respectively. For tuples ā = (a_1, ..., a_k) we
let B_δ(ā) := B_δ({a_1, ..., a_k}), and similarly for S_δ(ā).
1 This is ambiguous because '(-x) also defines a query on the class of all - [ f-g-structures.
But if we speak of a query defined by an order-invariant formula, we always refer to the query
defined in the text.
Definition 7. (1) A k-ary query ρ on a class C is local if there exists a λ ≥ 0 such that
for all A ∈ C and ā, b̄ ∈ A^k with ⟨B_λ(ā)⟩^A ≅ ⟨B_λ(b̄)⟩^A (via an isomorphism that maps ā to b̄)
we have ā ∈ ρ(A) if, and only if, b̄ ∈ ρ(A).
The least such λ is called the locality rank of ρ.
(2) A formula φ(x̄) that is order-invariant on a class C is local, if the query it defines
is local. The locality rank of φ(x̄) is the locality rank of this query.
It should be emphasized that, in the definition of local order-invariant formulas, neither
the isomorphisms nor the distance function refer to the linear order.
Gaifman [5] has proved that first-order formulas can only define local queries.
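The distance notions above are easy to compute. The following minimal Python sketch (an illustration; the path-graph example at the end is an assumption, not taken from the text) builds the Gaifman graph of a finite structure and computes a δ-ball by breadth-first search.

    from collections import deque

    def gaifman_graph(universe, relations):
        # a and b are adjacent iff they occur together in some tuple of some relation
        adj = {a: set() for a in universe}
        for rel in relations:                 # each relation: a set of tuples over the universe
            for tup in rel:
                for a in tup:
                    for b in tup:
                        if a != b:
                            adj[a].add(b)
        return adj

    def ball(adj, center, delta):
        # B_delta(center): all elements at Gaifman distance <= delta
        dist = {center: 0}
        queue = deque([center])
        while queue:
            x = queue.popleft()
            if dist[x] == delta:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        return set(dist)

    # Assumed example: the {E}-structure of a path 1-2-3-4-5.
    E = {(i, i + 1) for i in range(1, 5)} | {(i + 1, i) for i in range(1, 5)}
    adj = gaifman_graph(range(1, 6), [E])
    print(ball(adj, 3, 1))    # {2, 3, 4}: the 1-ball around element 3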
3 Locality of invariant formulas with one free variable
In this section we are going to show that if a first-order formula with one free variable
is order-invariant on a class C of structures then it is also local on C. Before we formally
state and prove this result, we need some preparation.
Lemma 8. For all l, r ∈ N there are m, n ∈ N such that for all l-strings s of size at
least n there are unary relations P and P' on s with |P| = m and |P'| = m + 1 such that (s, P) ≡_r (s, P').
Proof. Let l; r 2 N be fixed and t the number of r-types of vocabulary - l . We let
choose n large enough such that whenever the edges of a complete
graph with n vertices are colored with t colors, there is an induced subgraph of size
of whose edges have the same color.
be an l-string of length n 0 - n. For
denote the l-substring s
For we color the pair fi; jg (that is, the edge fi; jg of the complete graph
on [1; n]) with the r-type of (the representation of) hi; ji. By the choice of n we find
such that all structures hp
have the same r-type. We let g.
We claim that (s; P Intuitively, we prove this claim by carrying over a
winning strategy for the duplicator on the strings
our structures. Recall from Example 3 that such a strategy exists.
Formally, we proceed as follows: We define a mapping f : [1;
by
Consider the r-round EF-game on (s; P As usual, let v i and v 0
i be the elements
chosen in round i. It is not too difficult to prove, by induction on i, that the duplicator
can play in such a way that for every i - r one of the following conditions holds:
and the following two subconditions hold:
(a) The duplicator has a winning strategy for the (r \Gamma i)-round modified EF-game
on (u; f(v 1
(b) The duplicator has a winning strategy for the (r \Gamma i)-round modified EF-
game on (hp f(v i
is the identity on hp f(v i
else and g 0 is the identity on hp f(v 0
Clearly, this implies the claim and thus the statement of the lemma. 2
Lemma 9. If a first-order formula '(x) is order-invariant on a class C of structures
then it is local on C.
Proof. Let '(x) be a first-order formula of quantifier-depth r that is order-invariant on
a class C of -structures.
Let l 0 be the number of different r-types of vocabulary - [
the Q i are new unary relation symbols and let l := l 0
2 . Let m and n be given by Lemma
8 above w.r.t. r and l. Let - := n(2 r
- (b) via an isomorphism -.
Our goal is to show that there are linear orders - 1 and - 2 on A such that
b). From this we can conclude
A
In order to prove the existence of such linear orders, we first show that, w.l.o.g., we can
assume the following.
There is a set W ' fa; bg, and an automorphism ae on hB - (W )i such
that
To show this, we distinguish the following two cases.
2-. In this case we simply set W := fa; bg and define ae by
Case 2: Assume first that d(a; - i (a)) ? 4-, for some i ? 0.
Then we also have d(b; - i (a)) ? 2-. Furthermore, by the choice of -, B - (a)
(b). We can conclude from the proof given below that
A
If, on the other hand, d(a; every i, we set
Hence, we can assume (*). In the following we only make use of B - (a)
opposed to B - (a)
It is easy to see that every sphere S i (W ) is a disjoint union of orbits of ae, i.e. a
disjoint union of sets of the form We fix, for every
some linear order of the orbits of the sphere S i (W ). Next we fix a preorder OE on A
with the following properties.
- OE is a linear order on
are in the same sphere S i (W ) but the
orbit of c comes before the orbit of c in the order of the orbits that was chosen
above, and
- c and c 0 are not related with respect to OE, whenever c; c are
in the same orbit.
Both linear orders - 1 and - 2 will be refinements of OE. They will only differ inside
some of the orbits.
We can assume that no sphere S i (W ), with i -, is empty. Otherwise, B - (W )
would be a union of connected components of A, hence we could fix any linear order -
on the orbits of B - (W ) and define - 1 by combining - with OE and - 2 by combining
the image of - under ae with OE.
For each orbit O, we fix a vertex v(O) and define a linear order - 0 on O by
O is finite and by \Delta
ae O is infinite. For every
k, we denote by - k the image of - 0 under ae k . It is easy to see that (S i (W
To catch the intuitive idea of the proof, the reader should picture the spheres S i (W )
(for as a sequence of concentric cycles, W itself being innermost. Outside
these cycles is the rest of the structure A, fixed once and for all by the order OE. The
automorphism ae is turning the cycles, say, clockwise. In particular, it turns the cycle W
far enough to map a to b. Each cycle is ordered clockwise by - 0 . The ordering - k is the
result of turning the cycle k-steps. (Unfortunately, all this is not exactly true, because
usually the orbits do not form whole spheres. They may form small cycles or "infinite
cycles". But essentially it is the right picture.)
To define the orders - 1 and - 2 we proceed as follows. On W we let - 1 =- 0 and
looks from a as - 2 looks from b, and this is how it should
be. On the other hand, on the outermost cycle S - (W ) both orderings should be the
same, because the outside structure is fixed. So we let - 1 =- 2 =- m on S - (W ) (for the
fixed in the beginning of the proof). Now we determine two sequences
already know that on . For all
on S j (W ) but once we reach j 1 we turn it one step. That is, we let - 1 =- 1 on S j1 (W ).
We stick with this, until we reach S j2 (W ), and there we turn again and let - 1 =- 2 . We
go on like this, and after the last turn at S jm (W ) we have - 1 =- m , and that is what
we wanted. Similarly, we define - 2 by starting with - 1 and taking turns at all spheres
Again we end up with - 2 =- m on all spheres S k (W ) for
But of course the turns can be detected, so how can we hide that we took one more
turn in defining - 1 ? The idea is to consider the sequence of spheres as a long string,
whose letters are the types of the spheres. The positions where a turn is taken can
be considered as a unary predicate on this string. By Lemma 8, we can find unary
predicates of sizes m and respectively, such that the expansions of our string by
these predicates are indistinguishable. This is exactly what we need.
Essentially, this is what we do. But of course there are nasty details
1. For every i with
the
substructure of A that is induced by the spheres S ih\Gammaj (W
let, for every be the structure T i
Let the linear order - j on T i be defined by combining the orders - j on the spheres
of T i with OE. Finally let E j be the linear order on T i that is obtained by combining OE
with for the spheres S q (W ) with q - ih, and with - j+1 for the spheres S q (W )
with q ? ih. For every
For every i, we define the unary relations Q
S ic+j (W ), i.e., a vertex v is in Q j , if its distance from the central sphere in T i is j.
Now we define an l-string l be an enumeration
of all pairs of r-types of - [ g-structures. We set s
the pair (r-type of (T i
By Lemma 8 and our choice of the parameters l; there exist unary relations
and the duplicator has a winning strategy in
the r-round game on (s; P ) and (s; P 0 ). Now we are ready to define the linear orders - 1
and - 2 on A. For every i, let
- 1 is defined on T i as - u(i) , if i 62 P and as E u(i) , if
is defined on T i as - u(i) , if
Observe that, although T i and T i+1 are not disjoint, these definitions are consistent.
It remains to show that the duplicator has a winning strategy in the r-round game
on b). The winning strategy of the duplicator will be obtained
by transferring the winning strategy on (s; P ) and (s; P 0 ), making use of the gap preserving
technique that was invented in [11].
For every fi; fl, with we define a function f fi;fl from
A to
if x is in T i
We are going to show that the duplicator can play in such a way that for every i the
following conditions hold.
(1) There exist fi; such that all vertices
are in some T q
(fi;fl) , in one of S
that between successive super-spheres there is a gap of 2 r\Gammai spheres that do not
contain any chosen vertices).
(2) The duplicator has an (r \Gamma i)-round winning strategy in the modified game on
(s;
(3) For every
fi;fl then the duplicator has a (r\Gammai)-round winning strategy
in the modified game on the structures (T f fi;fl (v j )
(4) For every
(5) For every
We refer to elements elements to elements of B -\Gammafi (W
as middle elements and to the others as outer elements.
First, we show that we can conclude from these conditions that the duplicator has
a winning strategy. Let v
r be the elements that were chosen
during the game. Let j; k - r. We have to show that
(a) a if and only if v 0
(b) the mapping a 7! b, v i 7! v 0
r) is a partial isomorphism,
(c)
k .
(a) follows immediately from (5). Remember the definition of the spheres in A. It
implies that only elements of the same sphere or of succeeding spheres are related by
a relation of A. Hence if v j and v k are not of the same group of elements (i.e., inner,
middle or outer) then (b) follows immediately because (1) ensures by the properties of fi
and fl and (3) - (5) ensure that v 0
k are in the same group as v j and v k , respectively.
(c) follows for similar reasons.
are both middle (outer, inner) elements then (b) and (c) follow immediately
from (2) and (3) (respectively (4), (5)).
It remains to show, by induction on i, that (1)-(4) hold, for every i - r.
For are immediate and (2) holds by
Lemma 8.
Now be true for w.l.o.g., the spoiler have selected
a vertex v i in (A; - 1 ). (The case where he chooses v 0
i is completely analogous, as
conditions (1) to (5) are symmetric.) Let fi denote the values of fi and fl that are
obtained from (1) for We distinguish the following cases.
. In this case, we can choose
immediately hold by induction.
In this case, we also choose . There are 2
subcases.
By induction, there is an element z of
s such that the duplicator has a winning strategy in the (r \Gamma i) round game
on (s;
In
as have the same
r-type. As f fi;fl (v i only if z 2 P 0 , either the two substructures
have linear orders of type - j and - j 0
for some j; j 0 or they have linear orders
of type
for some j; j 0 . In either case there exists an element v 0
in
T z such that (3) holds. By the choice of z, (2) also follows. (1) holds as v i
and v 0
are in the same Q p . (4) holds because for the outer structure nothing
has changed. Finally, (5) still holds, as B fl (W ) is not affected, either.
By (3) of the induction hypothesis
the duplicator has a (r winning strategy in the modified
game on the structures (T f fi;fl (v j )
hence there is a v 0
i such that she still has a (r \Gamma i)-round winning strategy
in the modified game on the structures (T f fi;fl (v j )
and (T f fi;fl (v 0
This implies (3). (1), (2), (4) and
immediately.
lies
in a former gap). There are 2 subcases.
In this case, we choose
In this case, we choose
The existence of an appropriate v 0
i follows in both cases analogous to (ii), as condition
(3) ensures the existence of a winning strategy also for a buffer zone of
This case can be handle
as the second subcase of (iii).
. In this case, fi and fl are chosen in
the same way as in the first subcase of (iii) and v 0
i is chosen as ae(v i ). Hence, (1) -
hold.
Simply choose v 0
immediately,
In all cases fi
Remark 10. For later reference, let us observe that the lemma implies the statement
that the locality rank of φ(x) on C is bounded by a function of the vocabulary and
quantifier-depth of φ(x). More precisely, for each vocabulary τ and r ≥ 0 there is a
λ = λ(τ, r) such that the following holds: If a first-order formula φ(x) of vocabulary
τ ∪ {≤} and quantifier-depth at most r is order-invariant on a class C of τ-structures, then it is
local on C with locality rank at most λ.
(To see that this follows from the Lemma, let C be the class of all structures A
such that φ(x) is order-invariant on A and remember that there are only finitely many
first-order formulas of vocabulary τ ∪ {≤} and quantifier-depth at most r.)
4 Locality of invariant formulas with arbitrarily many free
variables
Lemma 11. Let - be a vocabulary and r - 0; k - 1. Then there exists a -;
such that the following holds: If '(x is a first-order formula of vocabulary
- and quantifier-depth at most r that is order-invariant on a -structure A, then for all
we have
A
Proof. We first give a sketch of the proof.
The proof is by induction on k. For the lemma just restates the locality of
order-invariant first-order formulas with one free variable, proved in Lemma 9.
For k ? 1, we assume that we have k-tuples - a, - b in A such that all the a i ; a j and
are far apart (as the hypothesis of the Lemma requires) and we have an isomorphism
for a sufficiently large -. We prove that - a and - b cannot
be distinguished by order-invariant formulas of vocabulary - and quantifier-depth
at most r.
We distinguish between three cases:
The first is that some b i , say, b k , is far away from - a. Then we can treat a
as constants and apply Lemma 9 to show that a k and b k cannot be distinguished in the
expanded structure (A; a (Here we use the hypothesis d(a
1). Then we treat b k as a constant and apply the induction hypothesis to
prove that the cannot be distinguished in the
expanded structure (A; b k ). (This requires our hypothesis that
The second case is similar, we assume that for some h - 1 the iterated partial
isomorphism - h maps some a i far away from - a. Then we first show that - a and - h (-a)
cannot be distinguished and then that - h (-a) and - b cannot be distinguished.
The third case is that for all h - 1 the entire tuple - h (-a) is close to - a. Then some
restriction of - is an automorphism of a substructure of A that maps - a to - b. We can
modify this substructure in such a way that the tuples - a and - b can be encoded by single
elements and then apply Lemma 9.
Now we describe the proof in more detail. As noted before, we prove the lemma by
induction on k. For it follows from Lemma 9, recalling Remark 10 to see that
r) is a function of - and r.
suppose that the statement of the lemma is proved for all
Let - be a vocabulary and r - 0. Let -
binary relation symbols and d
not contained in - . Let -
Let A be a -structure and - a = a
2- and d(b
We shall prove that
A
Let - be an isomorphism between hB -a)i A and hB - b)i A .
1: There is an i - k such that for all j - k we have
Without loss of generality we can assume that k has this property. Then in the
Here we use the hypothesis that
Note that the formula is order-invariant on the structure
Since we can assume that - is greater than or
equal induction hypothesis we have
A
Next, note that in the - [ fd 1 g-structure
Here we use the hypothesis that
A similar argument as above shows that
A
(4) and (5) imply (2).
CASE 2: Case 1 does not hold, and there is a z 2 Z and an i - k such that for all
We choose z with this property such that jzj is least possible For
Again we assume, without loss of generality, that for all j - k we have
This suffices to prove, as in Case 1, that
A
and a similar argument shows
that
A
This again yields (2).
CASE 3: For all z 2 Z and i - k there is a j - k such that d(- z (a i ); a
Note that for all z 2 Z the domain of - contains B 2- z (a 1
the domain of - is B 6-a) and - z (a j
is an automorphism of the substructure
hBi A .
binary relation symbols not contained in - . We expand A to a
1. Note that - remains an automorphism of hBi E . Thus
: For all z 2 Z we
have
Furthermore, / is order-invariant on E. Hence by our choice of - and (6) we have
and thus (2).
Theorem 12. Every first-order formula that is order-invariant on a class C of structures
is local on C.
Proof. Again we first give a sketch of the proof.
The proof is by induction on the number k of free variables of a formula. We have
already proved that formulas with one free variable are local.
be invariant on C, A 2 C, and - a, - b 2 A k such that hB -a)i A
for a sufficiently large -. Either all the a i ; a j and b are far apart, then we
can apply Lemma 11, or some of them are close together. In the latter case, we define
a new structure where we encode pairs of elements of A that are close together by
new elements. This does not spoil the distances too much, and we can encode our k-tuples
by smaller tuples that still have isomorphic neighborhoods. On these we apply
the induction hypothesis.
More formally, we prove the following statement by induction on k: Let - be a vocabulary
and r - 0; k - 1. Then there is a -; such that for all first-order formulas
vocabulary - and quantifier depth at most r and all -structures
A we have: If ' is order-invariant on A, then ' is local on A with locality-rank at most
For this follows from Lemma 9 (cf. Remark 10). So suppose it is proved for
all be a first-order formula of vocabulary - and quantifier-
rank r that is order invariant on a -structure A.
We choose -; according to the Lemma 11. Let R 1 binary relation
symbols not contained in - and - g. We let -
Let B be the - 0 -structure obtained from A by adding a new vertex
2-, an R 1 -edge from b(a 1 ; a 2 ) to a 1 , and an R 2 -edge
from b(a 1 ; a 2 ) to a 2 . Note that for all a; b 2 A we have
For
Then for all a
A
Furthermore, / ij is order-invariant on B. Thus by our induction hypothesis, it is local
on B with locality rank at most - 0 .
k such that
If d A (a i ; a we have
A
by Lemma 11.
So without loss of generality we can assume that d(a 1 ; a 2 ) - 2-. Since 2-, by
we also have d A (b Consider the structure B(A). By (7), for all
a 2 A we have
(a). Hence
by (9) and the definition of B. Thus
which implies
A
by (8). 2
5 Further research
The obvious question following our result is: What else can be added to first-order logic
such that it remains local? Hella [7] proved that invariant first-order formulas that do not
only use an order, but also addition and multiplication, are not local. On the other hand,
we conjecture that just adding order and addition does not destroy locality.
However, the fact that invariant formulas with built-in addition and multiplication
are not local is more relevant to complexity theory, since first-order logic with built-in
addition and multiplication captures uniform AC 0 . One way to apply locality techniques
to complexity theoretic questions in spite of Hella's non-locality result is to
weaken the notion of locality. For example, it is conceivable that all invariant AC 0 or
even TC 0 queries are local in the sense that if two points of a structure of size n have
isomorphic neighborhoods of radius O(log n), then they are indistinguishable.
This would still be sufficient to separate LOGSPACE from these classes.
--R
Foundations of Databases.
Extended order-generic queries
Local properties of query languages.
Finite Model Theory.
On local and non-local properties
Private communication.
Private communication.
Notions of locality and their logical characterizations over finite models
On forms of locality over finite models.
On counting and local properties.
Graph connectivity and monadic NP.
--TR
Logics with counting and local properties
Foundations of Databases
Local Properties of Query Languages
Deciding First-Order Properties of Locally Tree-Decomposalbe Graphs
On the Forms of Locality over Finite Models
Logics with Aggregate Operators
Logics with Counting, Auxiliary Relations, and Lower Bounds for Invariant Queries
--CTR
David Gross-Amblard, Query-preserving watermarking of relational databases and XML documents, Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.191-201, June 09-11, 2003, San Diego, California
Leonid Libkin, Expressive power of SQL, Theoretical Computer Science, v.296 n.3, p.379-404, 14 March
Leonid Libkin, Logics capturing local properties, ACM Transactions on Computational Logic (TOCL), v.2 n.1, p.135-153, Jan. 2001
Guozhu Dong , Leonid Libkin , Limsoon Wong, Incremental recomputation in local languages, Information and Computation, v.181 n.2, p.88-98, March 15,
Nicole Schweikardt, On the expressive power of monadic least fixed point logic, Theoretical Computer Science, v.350 n.2, p.325-344, 7 February 2006
Lane A. Hemaspaandra, SIGACT news complexity theory column 49, ACM SIGACT News, v.36 n.4, December 2005 | logics;first-order logic;ordered structures;locality |
343442 | Statement-Level Communication-Free Partitioning Techniques for Parallelizing Compilers. | This paper addresses the problem of communication-free partition of iteration spaces and data spaces along hyperplanes. To finding more possible communication-free hyperplane partitions, we treat statements within a loop body as separate schedulable units. Instead of using the information about data dependence distance or direction vectors, our technique explicitly formulates array references as transformations from statement-iteration spaces to data spaces. Based on these transformations, the necessary and sufficient conditions for communication-free partition along hyperplanes to be feasible have been proposed. This approach can be applied to all programs with an imperfectly nested loop or sequences of imperfectly nested loops, whose array references are affine functions of outer loop indices or loop invariant variables. The proposed approach is more practical than existing methods in finding the data and computation distribution patterns that can cause the processor to execute fully-parallel on multicomputers without any interprocessor communication. | Introduction
It has been widely accepted that local memory access is much faster than memory access involving
interprocessor communication on distributed-memory multicomputers. If data and computation
are not properly distributed across processors, it may cause heavy interprocessor communication.
Although the problem of data distribution is of critical importance to the efficiency of the parallel
program in distributed memory multicomputers; it is known to be a very difficult problem. Mace
[14] has proved that finding optimal data storage patterns for parallel processing is NP-complete,
even when limited to one- and two-dimensional arrays . In addition, Li and Chen [11, 12] have
shown that the problem of finding the optimal data alignment is also NP-complete.
Thus, in the previous work, a number of researchers have developed parallelizing compilers
that need programmers to specify the data storage patterns. Based on the programmer-specified
data partitioning, parallelizing compilers can automatically generate the parallel program with
appropriate message passing constructs for multicomputers. Projects using this approach include
the Fortran D compiler project [4, 5, 18], the SUPERB project [21], the Kali project [9, 10],
and the DINO project [17]. For the same purpose, the Crystal project [11, 12] and the
compiler [16] deal with functional languages and generate the parallel program with message passing
construct. The parallel program generated by most of these systems is in SPMD (Single-Program
Multiple Data) [8] model.
Recently, automatic data partitioning is an attractive research topic in the field of parallelizing
compilers. There are many researchers that develop systems to help programmers deal with the
problem of data distribution by automatically determining the data distribution at compile time.
The PARADIGM project [3] and the SUIF project [1, 19] are all based on the same purpose. These
systems can automatically determine the appropriate data distribution patterns to minimize the
communication overhead and generate the SPMD code with appropriate message passing constructs
for distributed memory multicomputers.
Since excessive interprocessor communication will offset the benefit of parallelization even if
the program has a large amount of parallelism, consequently, parallelizing compilers must pay
more attention on the distribution of computation and data across processors to reduce the communication
overhead or to completely eliminate the interprocessor communication, if possible.
Communication-free partitioning, therefore, becomes an interesting and worth studying issue for
distributed-memory multicomputers. In recent years, much research has been focused on the area
of partitioning iteration spaces and/or data space to reduce interprocessor communication and
achieve high-performance computing.
Ramanujam and Sadayappan [15] consider the problem of communication-free partitioning of
data spaces along hyperplanes for distributed memory multicomputers. They present a matrix-based
formulation of the problem for determining the existence of communication-free partitions of
data arrays. Their approach proposes only the array decompositions and does not take the iteration
space partitionings into consideration. In addition, they concentrate on fully parallel nested loops
and focus on two-dimensional data arrays.
Huang and Sadayappan [7] generalize the approach proposed in [15]. They consider the issue
of communication-free hyperplane partitioning by explicitly modeling the iteration and data
spaces and provide the conditions for the feasibility of communication-free hyperplane partitioning.
However, they do not deal with imperfectly nested loops. Moreover, the approach is restricted to
loop-level partitioning, i.e., all statements within a loop body must be scheduled together as an
indivisible unit.
Chen and Sheu [2] partition iteration space first according to the data dependence vectors
obtained by analyzing all the reference patterns in a nested loop, and then group all data elements
accessed by the same iteration partition. Two communication-free partitioning strategies, non-duplicate
data and duplicate data strategies, are proposed in this paper. Nevertheless, they require
the loop contain only uniformly generated references and the problem domain be restricted to a
single perfectly nested loop. They also treat all statements within a loop body as an indivisible
unit.
Lim and Lam [13] use affine processor mappings for statements to assign the statement-iterations
to processors and maximize the degree of parallelism available in the program. Their approach
does not treat the loop body as an indivisible unit and can assign different statement-iterations to
different processors. However, they consider only the statement-iteration space partitioning and
do not address the issue of data space partitioning. Furthermore, their uniform affine processor
mappings can cause a large number of idle processors if the affine mappings are non-unimodular
transformations.
In this paper, communication-free partitioning of statement-iteration spaces and data spaces
along hyperplanes are considered. We explicitly formulate array references as transformations from
statement-iteration spaces to data spaces. Based on these transformations, we then present the
necessary and sufficient conditions for the feasibility of communication-free hyperplane partitions.
Currently, most of the existing partitioning schemes take an iteration instance as a basic schedulable
unit that can be allocated to a processor. But, when the loop body contains multiple statements, it
is very difficult to make the loop be communication-freely executed by allocating iteration instances
among processors. That is, the chance of communication-free execution found by using these
methods is limited. For having more flexible and possible in finding communication-free hyperplane
partitions, we treat statements within a loop body as separate schedulable units. Our method does
not consider only one of the iteration space and data space but both of them. As in [13], our
method can be extended to handle more general loop models and can be applied to programs with
imperfectly nested loops and affine array references.
The rest of the paper is organized as follows. In Section 2, we introduce notation and terminology
used throughout the paper. Section 3 describes the characteristics of statement-level
communication-free hyperplane partitioning. The technique of statement-level communication-free
hyperplane partitioning for a perfectly nested loop is presented in Section 4. The necessary and
sufficient conditions for the feasibility of communication-free hyperplane partitioning are also given.
The extension to general case for sequences of imperfectly nested loops is described in Section 5.
Finally, the conclusions are given in Section 6.
Preliminaries
This section explains the statement-iteration space and the data space. It also defines the statement-
iteration hyperplane and the data hyperplane.
2.1 Statement-Iteration Space and Data Space
Let Q, Z and Z + denote the set of rational numbers, the set of integers and the set of positive
integer numbers, respectively. The symbol Z d represents the set of d-tuple of integers. Traditionally,
the iteration space is composed of discrete points where each point represents the execution of all
statements in one iteration of a loop [20]. Instead of viewing each iteration indivisible, an iteration
can be divided into the statements that are enclosed in the iteration, i.e., each statement is a
schedulable unit and has its own iteration space. We use another term, statement-iteration space,
to denote the iteration space of a statement in a nested loop.
The following example illustrates the notion of iteration spaces and statement-iteration spaces.
Example 1: Consider the following nested loop L 1 .
do
do
Fig. 1 illustrates the iteration space and statement-iteration spaces of loop L 1 for 5.
In Fig. 1(a), a circle means an iteration and includes two rectangles with black and gray colors.
The black rectangle indicates statement s 1 and the gray one indicates statement s 2 . In Fig. 1(b)
and Fig. 1(c), each statement is an individual unit and the collection of statements forms two
statement-iteration spaces. 2
The representation of statement-iteration spaces, data spaces and the relations among them is
described as follows. Let S denote the set of statements in the targeted problem domain and D be
the set of array variables that are referenced by S. Consider statement s 2 S, which is enclosed in
a d-nested loop. The statement-iteration space of s, denoted by SIS(s), is a subspace of Z^d and
is defined as SIS(s) = {[I_1, I_2, ..., I_d]^t | LB_i ≤ I_i ≤ UB_i, 1 ≤ i ≤ d}, where I_i is the loop index
variable, and LB_i and UB_i are the lower and upper bounds of the loop index variable I_i, respectively.
The superscript t is the transpose operator. The column vector I_s = [I_1, I_2, ..., I_d]^t is called a
statement-iteration in statement-iteration space SIS(s), with LB_i ≤ I_i ≤ UB_i, for 1 ≤ i ≤ d. On
the other hand, from the geometric point of view, an array variable also forms a space and each
array element is a point in the space. To describe an array variable exactly, we use a data space
to represent an n-dimensional array v, which is denoted by DS(v), where v ∈ D. An array element
v[d_1, d_2, ..., d_n] has a corresponding data index in the data space DS(v). We denote this data
index by a column vector D_v = [d_1, d_2, ..., d_n]^t ∈ DS(v).
The relations between statement-iteration spaces and data spaces can be built via array reference
functions. An array reference function is a transformation from statement-iteration space into data
[Figure 1: Loop (L_1)'s iteration space and its corresponding statement-iteration spaces, assuming N = 5. (a) IS(L_1), the iteration space of loop (L_1). (b) SIS(s_1), the statement-iteration space of statement s_1. (c) SIS(s_2), the statement-iteration space of statement s_2.]
space. As in most of the existing methods, we require the array references to be affine functions of outer
loop indices or loop invariant variables. Suppose statement s is enclosed in a d-nested loop and
has an array reference pattern v[a_{1,1}I_1 + ... + a_{1,d}I_d + a_{1,0}, ..., a_{n,1}I_1 + ... + a_{n,d}I_d + a_{n,0}],
where the a_{i,j} are integer constants, for 1 ≤ i ≤ n and
0 ≤ j ≤ d; then the array reference function can be written as

    Ref_{s,v}(I_s) = F_{s,v} I_s + f_{s,v},

where F_{s,v} is the n × d matrix whose (i, j) entry is a_{i,j} and f_{s,v} = [a_{1,0}, a_{2,0}, ..., a_{n,0}]^t.
We term F_{s,v} the array reference coefficient matrix and f_{s,v} the array reference constant vector.
If data index D_v ∈ DS(v) is referenced in statement-iteration I_s ∈ SIS(s), then Ref_{s,v}(I_s) = D_v.
Take the array reference pattern as an example. The array
reference coefficient matrix and constant vector of A[i are F
and f
\Gamma4#
, respectively.
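As a small executable illustration of how the coefficient matrix and constant vector act, the sketch below applies Ref_{s,v}(I_s) = F_{s,v} I_s + f_{s,v} for a hypothetical reference A[i + j + 2, i - j - 4] in a doubly nested loop (the concrete reference is an assumption chosen for illustration; Python with sympy is used only as executable notation).

    import sympy as sp

    # Hypothetical reference A[i + j + 2, i - j - 4], statement-iteration I_s = [i, j]^t.
    F = sp.Matrix([[1,  1],
                   [1, -1]])          # array reference coefficient matrix F_{s,A}
    f = sp.Matrix([2, -4])            # array reference constant vector f_{s,A}

    def ref(I):
        # Ref_{s,A}(I_s) = F_{s,A} * I_s + f_{s,A}
        return F * I + f

    print(ref(sp.Matrix([1, 1])).T)   # Matrix([[4, -4]]): iteration (1, 1) accesses A[4, -4]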
We define statement-iteration hyperplanes and data hyperplanes in the next subsection.
2.2 Statement-Iteration Hyperplane and Data Hyperplane
A statement-iteration hyperplane on statement-iteration space SIS(s), denoted by Ψ(s), is a hyperspace
[6] of SIS(s) and is defined as Ψ_h(s) = {[I_1, I_2, ..., I_d]^t ∈ SIS(s) | h_1 I_1 + h_2 I_2 + ... + h_d I_d = c_h},
where h_1, h_2, ..., h_d ∈ Q are the coefficients of the statement-iteration hyperplane and c_h ∈ Q is
the constant term of the hyperplane. The formula can be abbreviated as Ψ_h(s) = {I_s ∈ SIS(s) | Δ · I_s = c_h},
where Δ = [h_1, h_2, ..., h_d] is the statement-iteration hyperplane coefficient vector. Similarly, a data
hyperplane on data space DS(v), denoted by Φ(v), is a hyperspace of DS(v) and is defined as
Φ_g(v) = {[d_1, d_2, ..., d_n]^t ∈ DS(v) | g_1 d_1 + g_2 d_2 + ... + g_n d_n = c_g}, where g_1, g_2, ..., g_n ∈ Q are the
coefficients of the data hyperplane and c_g ∈ Q is the constant term of the hyperplane. In the same
way, the formula also can be abbreviated as Φ_g(v) = {D_v ∈ DS(v) | Θ · D_v = c_g}, where Θ = [g_1, g_2, ..., g_n] is
the data hyperplane coefficient vector. The hyperplanes that include at least one integer point are
considered in this paper.
Statement-iteration hyperplanes and data hyperplanes are used for characterizing communica-
tion-free partitioning. We discuss some of these characteristics in the next section.
3 Characteristics of Communication-Free Hyperplane Partition-
ing
A program execution is communication-free if all operations on each of all processors access only
data elements allocated to that processor. A trivial partition strategy allocates all statement-
iterations and data elements to a single processor. The program execution of this trivial partitioning
is communication-free. However, we are not interested in this single processor program execution
because it does not exploit the potential of parallelization and it conflicts with the goal of parallel
processing. Hence, in this paper, we consider only nontrivial partitioning, in specific, hyperplane
partitioning.
The formal definition of communication-free hyperplane partition is defined as below. Let
partition group, G,
be the set of hyperplanes that should be assigned to one processor. The definition of communica-
tion-free hyperplane partition can be given as the following.
1 The hyperplane partitions of statement-iteration spaces and data spaces are said to
be communication-free if and only if for any partition group
above, the statement-iterations which access the same array element should
be allocated to the same statement-iteration hyperplane. Therefore, it is important to decide
statement-iterations that access the same array element. The following lemma states the necessary
and sufficient condition that two statement-iterations will access the same array element.
Lemma 1 For some statement s ∈ S and its referenced array v ∈ D, I_s and I'_s are two statement-
iterations on SIS(s) and Ref_{s,v} is the array reference function from SIS(s) into DS(v) as defined
above. Then

    Ref_{s,v}(I_s) = Ref_{s,v}(I'_s) if and only if (I'_s - I_s) ∈ Ker(F_{s,v}),

where Ker(S) denotes the null space of S [6].
Proof. (⇒): Suppose that Ref_{s,v}(I_s) = Ref_{s,v}(I'_s). Then F_{s,v} I_s + f_{s,v} = F_{s,v} I'_s + f_{s,v}.
Thus F_{s,v}(I'_s - I_s) = 0, that is, (I'_s - I_s) ∈ Ker(F_{s,v}).
(⇐): Conversely, suppose that (I'_s - I_s) ∈ Ker(F_{s,v}). Let {α_1, α_2, ..., α_p}
be a basis of Ker(F_{s,v}); then every vector belonging to Ker(F_{s,v}) can be represented
as a linear combination of vectors in {α_1, α_2, ..., α_p}. Since (I'_s - I_s) ∈ Ker(F_{s,v}), we have F_{s,v}(I'_s - I_s) = 0.
Thus Ref_{s,v}(I'_s) = F_{s,v} I'_s + f_{s,v} = F_{s,v} I_s + f_{s,v} = Ref_{s,v}(I_s). 2
We illustrate Lemma 1 using the following example.
Example 2: Consider the array reference A[i + j, i + j]. The array reference coefficient ma-
trix is F_{s,A} = [1 1; 1 1]. The null space of F_{s,A} is Ker(F_{s,A}) = {r[1, -1]^t | r ∈ Z}. By Lemma 1, any
two statement-iterations with the difference of r[1, -1]^t will access the same array element, where
r ∈ Z. As Fig. 2 shows, the statement-iterations {(1, 3), (2, 2), (3, 1)} all access the same array
element A[4, 4]. 2
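A quick mechanical check of Lemma 1 on this example (an illustrative sketch in Python/sympy, for the reference A[i + j, i + j] discussed above): the kernel of F_{s,A} is spanned by a vector proportional to [1, -1]^t, and iterations whose difference lies in this kernel are mapped to the same data index.

    import sympy as sp

    F = sp.Matrix([[1, 1],
                   [1, 1]])               # coefficient matrix of the reference A[i+j, i+j]
    print(F.nullspace())                  # one basis vector, proportional to [1, -1]^t

    I1, I2, I3 = sp.Matrix([1, 3]), sp.Matrix([2, 2]), sp.Matrix([3, 1])
    print(F * (I1 - I3))                  # zero vector: (I1 - I3) lies in Ker(F_{s,A})
    print((F * I1).T, (F * I2).T, (F * I3).T)   # all equal (4, 4), i.e. element A[4, 4]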
We explain the significance of Lemma 1 and show how this lemma can help to find com-
munication-free hyperplane partitions. Communication-free hyperplane partitioning requires those
statement-iterations that access the same array element be allocated to the same statement-iteration
hyperplane. According to Lemma 1, two statement-iterations access the same array element if
and only if the difference of these two statement-iterations belongs to the kernel of F_{s,v}. Hence,
Ker(F_{s,v}) should be a subspace of the statement-iteration hyperplane. Since there may exist many
different array references, partitioning a statement-iteration space must consider all array references
appeared in the statement. Thus, the space spanned from Ker(F s;v ) for all array references
appearing in the same statement should be a subspace of the statement-iteration hyperplane. The
[Figure 2: Those statement-iterations whose differences are in Ker(F_{s,v}) will access the same array element.]
dimension of a statement-iteration hyperplane is one less than the dimension of the statement-
iteration space. If there exists a statement s such that the dimension of the spanning space of
∪_{v∈D} Ker(F_{s,v}) is equal to the dimension of SIS(s), then the spanning space cannot be a subspace of
a statement-iteration hyperplane. Therefore, there exists no nontrivial communication-free hyper-plane
partitioning. From the above observation, we obtain the following theorem.
Theorem 1 If ∃ s ∈ S such that

    dim(span(∪_{v∈D} Ker(F_{s,v}))) = dim(SIS(s)),

then there exists no nontrivial communication-free hyperplane partitioning for S and D. 2
Example 3: Consider matrix multiplication.
do i = 1, N
do j = 1, N
do k = 1, N
s:      C[i, j] = C[i, j] + A[i, k] * B[k, j]
In the above program, there are three array variables, A, B, and C, with three distinct array
references involved in statement s. The three array reference coefficient matrices, F_{s,A}, F_{s,B},
and F_{s,C}, are [1 0 0; 0 0 1], [0 0 1; 0 1 0], and [1 0 0; 0 1 0], respectively. Thus, Ker(F_{s,A}) = span{[0, 1, 0]^t},
Ker(F_{s,B}) = span{[1, 0, 0]^t}, and Ker(F_{s,C}) = span{[0, 0, 1]^t}, so dim(span(∪_{v∈D} Ker(F_{s,v}))) = 3,
which has the same dimensionality as the statement-
iteration space. By Theorem 1, matrix multiplication has no nontrivial communication-free hyperplane
partitioning. 2
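The dimension test of Theorem 1 is easy to mechanize. The sketch below (illustrative only; it assumes the usual matrix-multiplication statement C[i, j] = C[i, j] + A[i, k] * B[k, j] with statement-iteration vector [i, j, k]^t) collects a basis of each kernel and compares the dimension of their span with the depth of the loop nest.

    import sympy as sp

    F_A = sp.Matrix([[1, 0, 0], [0, 0, 1]])   # A[i, k]
    F_B = sp.Matrix([[0, 0, 1], [0, 1, 0]])   # B[k, j]
    F_C = sp.Matrix([[1, 0, 0], [0, 1, 0]])   # C[i, j]

    kernel_basis = []
    for F in (F_A, F_B, F_C):
        kernel_basis.extend(F.nullspace())    # one basis vector per reference here

    span_dim = sp.Matrix.hstack(*kernel_basis).rank()
    print(span_dim)   # 3 = dim SIS(s): by Theorem 1 no nontrivial
                      # communication-free hyperplane partitioning exists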
Theorem 1 can be useful for determining nested loops that have no nontrivial communica-
tion-free hyperplane partitioning. Furthermore, when a nontrivial communication-free hyperplane
partitioning exists, Theorem 1 can also be useful for finding the hyperplane coefficient vectors. We
state this result in the following corollary.
Corollary 1 For any communication-free statement-iteration hyperplane Ψ_h(s) = {I_s ∈ SIS(s) | Δ · I_s = c_h},
the following two conditions must hold:
(1) span(∪_{v∈D} Ker(F_{s,v})) is a subspace of Ψ_h(s); (2) Δ^t ∈ (span(∪_{v∈D} Ker(F_{s,v})))^⊥, where S^⊥
denotes the orthogonal complement space of S.
Proof. By Lemma 1, two statement-iterations access the same data element using array reference
F s;v if and only if the difference between these two statement-iterations belongs to the kernel of
F s;v . Therefore, the kernel of F s;v should be contained in the statement-iteration hyperplane,
(s). The fact should be true for all array references appeared in the same statement. Hence,
(s). The first condition is obtained.
is the normal vector of \Psi h (s). That is, \Delta t is orthogonal to
By condition (1), it implies that \Delta t is orthogonal to the subspace span([ v2D Ker(F s;v )).
Thus, belongs to the orthogonal complement of span([ v2D Ker(F s;v
Corollary 1 gives the range of communication-free statement-iteration hyperplane coefficient
vectors. It can be used for the finding of communication-free statement-iteration hyperplane co-efficient
vectors. On the other hand, the range of communication-free data hyperplane coefficient
vectors is also given as follows.
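Corollary 1 also suggests a direct way to enumerate candidate coefficient vectors Δ: collect a basis of span(∪_{v∈D} Ker(F_{s,v})) and take its orthogonal complement. A minimal sympy sketch (illustrative, reusing the single reference A[i + j, i + j] from Example 2):

    import sympy as sp

    F = sp.Matrix([[1, 1], [1, 1]])
    kernel_basis = F.nullspace()                      # spans Ker(F_{s,A})

    # Rows of K span the kernel; the nullspace of K is its orthogonal complement,
    # i.e. the space in which Delta^t must lie by Corollary 1.
    K = sp.Matrix.vstack(*[v.T for v in kernel_basis])
    print(K.nullspace())   # spanned by [1, 1]^t, so Delta is a nonzero multiple of [1, 1]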
As mentioned before, the relations between statement-iteration spaces and data spaces can be
established via array references. Moreover, the statement-iteration hyperplane coefficient vectors
and data hyperplane coefficient vectors are related. The following lemma expresses the relation
between these two hyperplane coefficient vectors. A similar result is given in [7].
Lemma 2 For any statement s ∈ S and its referenced array v ∈ D, Ref_{s,v} is the array reference
function from SIS(s) into DS(v). Ψ_h(s) = {I_s ∈ SIS(s) | Δ · I_s = c_h} and Φ_g(v) = {D_v ∈ DS(v) | Θ · D_v = c_g} are
communication-free hyperplane partitions if and only if Δ = α Θ F_{s,v} for some nonzero α ∈ Q.
Proof. ()): Suppose that \Psi h are
communication-free hyperplane partitionings. Let I 0
s and I 00
s be two distinct statement-iterations
and belong to the same statement-iteration hyperplane, \Psi h (s). If D 0
v and D 00
are two data indices
such that Ref s;v (I 0
v and Ref s;v (I 00
v , from the above assumptions, D 0
v and D 00
should
belong to the same data hyperplane, \Phi g (v).
Because I 0
s and I 00
s belong to the same statement-iteration hyperplane, \Psi h (s), then, \Delta \Delta I 0
and \Delta \Delta I 00
Therefore,
s
1 Note that \Delta is a row vector. However, it is \Delta t , but not \Delta, that is orthogonal to \Psi h(s).
On the other hand, since D 0
v and D 00
v belong to the same data hyperplane, \Phi g (v), that means
\Theta \Delta D 0
and \Theta \Delta D 00
. Thus,
\Theta \Delta D 0
Since I 0
s and I 00
s are any two statement-iterations on the statement-iteration hyperplane \Psi h (s),
s ) is a vector on the statement-iteration hyperplane. Furthermore, both \Delta \Delta
and (\Theta \Delta F s;v )
hence we can conclude that \Delta and \Theta \Delta F s;v are linearly dependent. It
implies
are hyperplane partitions
for SIS(s) and DS(v) respectively and
\Phi g (v) are communication-free partitioning. According to Definition 1, what we have to do is to
prove (v).
Let I s be any statement-iteration on statement-iteration hyperplane \Psi h (s). Then \Delta
From the assumption that
(ff\Theta
Let c
(v). We have shown that 8I s 2 \Psi h (s); Ref s;v
\Phi g (v). It then follows that \Psi h (s) and \Phi g (v) are communication-free partitioning. 2
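A small numerical check of Lemma 2 (illustrative code; the reference A[i + j, i + j] and the choice Θ = [1, 1] are assumptions for this sketch): taking α = 1 and Δ = Θ F_{s,v}, every statement-iteration on a Δ-hyperplane is mapped by Ref_{s,v} onto a single Θ-hyperplane.

    import sympy as sp

    F = sp.Matrix([[1, 1], [1, 1]])          # F_{s,v} for the reference A[i+j, i+j]
    Theta = sp.Matrix([[1, 1]])              # chosen data hyperplane coefficient vector
    Delta = Theta * F                        # alpha = 1, so Delta = [2, 2]

    for I in (sp.Matrix([1, 3]), sp.Matrix([2, 2]), sp.Matrix([3, 1])):
        D = F * I                            # data index accessed by this iteration (f = 0 here)
        print((Delta * I)[0], (Theta * D)[0])   # both 8: one statement-iteration hyperplane maps
                                                # onto one data hyperplane, hence no communication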
By Lemma 2, the statement-iteration hyperplane coefficient vector \Delta can be decided if the data
hyperplane coefficient vector \Theta has been determined. If F s;v is invertible, the statement-iteration
hyperplane coefficient vectors can be decided first, then the data hyperplane coefficient vectors
can be derived by Θ = α^{-1} Δ (F_{s,v})^{-1}. The range of communication-free data
hyperplane coefficient vectors can be derived from this lemma. Corollary 1 shows the range of
statement-iteration hyperplane coefficient vectors. The next corollary provides the ranges of data
hyperplane coefficient vectors.
Corollary 2 For any communication-free data hyperplane Φ_g(v) = {D_v ∈ DS(v) | Θ · D_v = c_g}, the following
condition must hold: Θ^t ∉ ∪_{s∈S} Ker((F_{s,v})^t), that is, Θ^t ∈ (∪_{s∈S} Ker((F_{s,v})^t))^c, where S^c
denotes the complement set of S.
Proof. This paper considers the nontrivial hyperplane partitioning, which requires \Delta be a nonzero
vector. By Lemma 2, Therefore, \Theta \Delta F s;v is not equal to 0. It implies that
\Theta t 62 Ker((F s;v ) t ). The condition should be true for all s; s 2 S. Hence, \Theta t 62 ([ s2S Ker((F s;v ) t )).
It follows that \Theta t belongs to the complement of ([ s2S Ker((F s;v Consider the following loop.
do
do
The nested loop is communication-free if and only if the statement-iteration hyperplane coefficient
vectors for s 1 and s 2 and data hyperplane coefficient vectors for v 1 and v 2 are
respectively, where f0g. We
show that \Delta 1 and \Delta 2 satisfy the Corollary 1 as follows.
The test of Corollary 2 for \Theta 1 and \Theta 2 is as below.
=) \Theta t
section describes the communication-free hyperplane partitioning technique. The necessary
and sufficient conditions of communication-free hyperplane partitioning for a single perfectly
nested loop will be presented.
4 Communication-Free Hyperplane Partitioning for a Perfectly
Nested Loop
Each data array has a corresponding data space. However, a nested loop with multiple statements
may have multiple statement-iteration spaces. In this section, we will consider additional conditions
of multiple statement-iteration spaces for communication-free hyperplane partitioning. These
conditions are also used in determining statement-iteration hyperplanes and data hyperplanes.
. The number of
occurrences of array variable v j in statement s i is r i;j , where r i;j
does not reference v j , r i;j is set to 0. The previous representation of array
reference function can be modified slightly to describe the array reference of statement s i to variable
in the k-th occurrence as Ref s i ;v j
. The related representations will be
changed accordingly, such as Ref s i ;v j
In this section, a partition group that contains a statement-iteration hyperplane for each
statement-iteration space and a data hyperplane for each data space is considered. Suppose that
the data hyperplane in data space DS(v j ) is \Phi g (v j
g, for all
we have
, \Theta j \Delta
Let
As a result, those statement-iterations that access the data lay on the data hyperplane \Phi g (v j
g will be located on the statement-iteration hyperplane \Psi h (I s i
)g.
To simplify the presentation, we assume all variables v j appear in every statement s i . To satisfy
that each statement-iteration space contains a unique statement-iteration hyperplane, the following
two conditions should be met.
(j
for
(j
for
Condition (i) can infer to the following two equivalent equations.
Condition (ii) deduces the following two equations, and vice
versa.
Eq. (6) can be used to evaluate the data hyperplane constant terms while some constant term
is fixed, say c
. Furthermore, we obtain the following results. For some j, c g j
should be the same
for all i, 1 - i - m. Therefore,
can be further inferred to obtain the following
After describing the conditions for satisfying the communication-free hyperplane partitioning
constraints, we can conclude the following theorem.
Theorem 2 Let be the sets of statements and array
variables, respectively. Ref s i ;v j
k is the array reference function for statement s i accessing array
variables v j at the k-th occurrence in s i , where
g is the statement-iteration hyperplane in SIS(s i ), for
(D v j
g is the data hyperplane in DS(v j ), for
(D v j
are communication-free hyperplane partitions if and only if the following conditions hold.
for some j; k, g.
for some
for some j; k, g.
Theorem 2 can be used to determine whether a nested loop is communication-free. It can also
be used as a procedure of finding a communication-free hyperplane partitioning systematically.
Conditions (C1) to (C4) in Theorem 2 are used for finding the data hyperplane coefficient vectors.
Condition (C5) can check whether the data hyperplane coefficient vectors found in preceding
steps are within the legal range. Following the determination of the data hyperplane coefficient
vectors, the statement-iteration hyperplane coefficient vectors can be obtained by using Condition
(C6). Similarly, Condition (C7) can check whether the statement-iteration hyperplane coefficient
vectors are within the legal range. The data hyperplane constant terms and statement-iteration
hyperplane constant terms can be obtained by using Conditions (C8) and (C9), respectively. If
one of the conditions is violated, the whole procedure will stop and verify that the nested loop has
no communication-free hyperplane partitioning.
On the other hand, combining Equations (3) and (5) together, a sufficient condition of commu-
nication-free hyperplane partitioning can be derived as follows.
r i;j
r i;j
To satisfy the constraint that \Theta is a non-zero row vector,
the following condition should be true.
r i;j
r i;j
Note that this condition is similar to the result in [7] for
loop-level hyperplane partitioning. We conclude the following corollary.
Corollary 3 Suppose S = {s_1, ..., s_m} and D = {v_1, ..., v_n} are the sets of statements and array variables, respectively, and F^k_{s_i,v_j} and f^k_{s_i,v_j} are the array reference coefficient matrix and constant vector, respectively, where 1 <= i <= m, 1 <= j <= n, and 1 <= k <= r_{i,j}. If a communication-free hyperplane partitioning exists, then Eq. (9) must hold. 2
Theorem 1 and Corollary 3 can be used to check the absence of a communication-free hyperplane partitioning for a nested loop, because the conditions they state are necessary but not sufficient for the existence of such a partitioning. Theorem 1 is the statement-iteration space dimension test and Corollary 3 is the data space dimension test. To determine the existence of a communication-free hyperplane partitioning, we need to check the conditions in Theorem 2. The following example explains how communication-free hyperplanes of statement-iteration spaces and data spaces are found.
Example 5: Reconsider loop L1. The set of statements S is {s_1, s_2} and the set of array variables D is {v_1 = A, v_2 = B}. From Section 2.1, the array reference coefficient matrices F^k_{s_i,v_j} and constant vectors f^k_{s_i,v_j} for statements s_1 and s_2 are obtained from the array references of L_1, together with the occurrence counts r_{11}, r_{12}, r_{21}, and r_{22}.
By Theorem 1, a communication-free hyperplane partitioning may exist for loop L_1. Again, by Corollary 3, the loop is tested for the possible existence of a nontrivial communication-free hyperplane partitioning. For array variable v_1, the inequality of Eq. (9) is satisfied, its left-hand side being bounded by dim(DS(v_1)) = 2. Similarly, with respect to array variable v_2, the corresponding inequality is satisfied with the bound dim(DS(v_2)) = 2. Although Eq. (9) holds for all array variables, it still cannot ensure that the loop has a nontrivial communication-free hyperplane partitioning.
Using Theorem 2, we further check the existence of a nontrivial communication-free hyperplane partitioning; in the meantime, the statement-iteration and data hyperplanes will be derived if they exist. Recall that the dimensions of the data spaces DS(v_1) and DS(v_2) are two, so \Theta_1 and \Theta_2 can be assumed to be [\theta_{11}, \theta_{12}] and [\theta_{21}, \theta_{22}], respectively. The conditions listed in Theorem 2 will be checked to determine the hyperplane coefficient vectors and constants.
By Conditions (C1), (C2), (C3), and (C4) in Theorem 2, a set of equations relating \Theta_1 and \Theta_2 to the array reference functions is obtained. Substituting [\theta_{11}, \theta_{12}] and [\theta_{21}, \theta_{22}] for \Theta_1 and \Theta_2, respectively, these equations form a homogeneous linear system. Solving this homogeneous linear system, we obtain a general solution parameterized by a scalar t in R - {0}; therefore, \Theta_1 and \Theta_2 are determined up to this scale factor. Next, \Theta_1 and \Theta_2 are shown to satisfy Condition (C5), so the data hyperplane coefficient vectors are within the legal range.
Now the statement-iteration hyperplane coefficient vectors \Delta_1 and \Delta_2 can be determined using Condition (C6) in Theorem 2. Note that the statement-iteration hyperplane coefficient vectors may be obtained using many different equations; for example, \Delta_1 can be obtained from \Theta_1 and any reference of s_1. Conditions (C1) and (C2) in Theorem 2 ensure that all the equations lead to the same result. For the statement-iteration hyperplane coefficient vectors, Condition (C7) is also satisfied.
Next, we determine the data hyperplane constant terms. Because the hyperplanes are related to each other, once one hyperplane constant term is determined, the other constant terms are determined accordingly. Assuming c_{g_1} is known, c_{g_2}, c_{h_1}, and c_{h_2} can be determined using Conditions (C8) and (C9). Similarly, the statement-iteration and data hyperplane constant terms can be evaluated using many different equations; however, Conditions (C3) and (C4) in Theorem 2 ensure that they all lead to the same values.
It is clear that there exists at least one set of nonzero statement-iteration and data hyperplane coefficient vectors such that the conditions listed in Theorem 2 are all satisfied. By Theorem 2, this fact implies that the nested loop has a nontrivial communication-free hyperplane partitioning. The partition group is defined as the set of statement-iteration and data hyperplanes that are allocated to a processor. For this example, the partition group consists of the statement-iteration hyperplanes \Psi_h(I_{s_1}) and \Psi_h(I_{s_2}) together with the data hyperplanes \Phi_g(v_1) and \Phi_g(v_2), with their constant terms related as derived above.
Given the loop bounds, the values of the constant term c_{g_1} induced by the statement-iteration hyperplane coefficient vectors \Delta_1 and \Delta_2 range from -5 to 3 and from 0 upward, respectively. The intersection of these two ranges means that the two statement-iteration hyperplanes have to be coupled together onto a processor. For the rest, just one statement-iteration hyperplane, either that of \Delta_1 or that of \Delta_2, is allocated to a processor. The constant terms c_{g_2}, c_{h_1}, and c_{h_2} are then evaluated from c_{g_1} accordingly.
The corresponding parallelized program is a doall loop over the partition groups, enclosing the do loops that execute s_1 and s_2 on the statement-iteration hyperplanes allocated to each group. Fig. 3 illustrates the communication-free hyperplane partitionings for a particular partition group. 2
The communication-free hyperplane partitioning technique for a perfectly nested loop has been
discussed in this section. Our method treats statements within a loop body as separate schedulable
units and considers both iteration and data spaces at the same time. Partitioning groups are
determined using affine array reference functions directly, instead of using data dependence vectors.
5 Communication-Free Hyperplane Partitioning for Sequences of
Imperfectly Nested Loops
The conditions presented in Section 4 for communication-free hyperplane partitioning are also applicable to the more general case of sequences of imperfectly nested loops.
Figure 3: Communication-free statement-iteration hyperplanes and data hyperplanes for a partition group of loop (L_1). (a) Statement-iteration hyperplane of SIS(s_1). (b) Statement-iteration hyperplane of SIS(s_2). (c) Data hyperplane of DS(A). (d) Data hyperplane of DS(B).
In a perfectly nested loop, all statements are enclosed at the same depth of the nested loop, i.e., the statement-iteration space of each statement has the same dimensionality. The statement-iteration spaces of two statements in imperfectly nested loops, however, may have different dimensionalities. Since each statement-iteration is a schedulable unit and the partitioning technique is independent of the dimensionality of the statement-iteration spaces, Theorem 2 can be directly applied to sequences of imperfectly nested loops. The following example demonstrates the technique on a sequence of imperfectly nested loops.
Example 6: Consider the sequence of imperfectly nested loops (L_2), in which four statements s_1, s_2, s_3, and s_4 appear at different nesting depths and reference the arrays A, B, and C.
The set of statements S is {s_1, s_2, s_3, s_4} and the set of array variables is D = {v_1 = A, v_2 = B, v_3 = C}. The values of r_{11}, r_{12}, r_{13}, ..., r_{41}, r_{42}, r_{43} are all 1. We use Theorem 1 and Corollary 3 to verify whether (L_2) has no communication-free hyperplane partitioning. Since the dimension condition of Theorem 1 is not violated for any statement-iteration space, Theorem 1 is of no help for establishing that (L_2) has no communication-free hyperplane partitioning. Corollary 3 is useless here because all the values of r_{ij} are 1, for 1 <= i <= 4 and 1 <= j <= 3. Further examination is necessary, because Theorem 1 and Corollary 3 cannot prove that (L_2) has no communication-free hyperplane partitioning. From Theorem 2, if a communication-free hyperplane partitioning exists, the conditions listed in Theorem 2 should be satisfied; otherwise, (L_2) has no communication-free hyperplane partitioning.
Since the dimensions of the data spaces DS(v_1), DS(v_2), and DS(v_3) are all two, without loss of generality the data hyperplane coefficient vectors can be assumed to be \Theta_1 = [\theta_{11}, \theta_{12}], \Theta_2 = [\theta_{21}, \theta_{22}], and \Theta_3 = [\theta_{31}, \theta_{32}]. In what follows, the requirements for the feasibility of communication-free hyperplane partitioning are examined one by one. There is no need to examine Conditions (C1) and (C3) because all the values of r_{ij} are 1. Conditions (C2) and (C4) each yield a set of linear equations in the \theta's. Solving the resulting linear system, the general solution is (\theta_{11}, \theta_{12}, \theta_{21}, \theta_{22}, \theta_{31}, \theta_{32}) = (2t, -t, t, t, ...), with t in R - {0}; therefore, \Theta_1, \Theta_2, and \Theta_3 are determined up to the scale factor t. The verification of Condition (C5) then shows that all the data hyperplane coefficient vectors are within the legal range.
The statement-iteration hyperplane coefficient vectors \Delta_1, \Delta_2, \Delta_3, and \Delta_4 can then be determined by Condition (C6), and their legality is checked by Condition (C7). From the above observations, all the statement-iteration and data hyperplane coefficient vectors are legal. This fact reveals that the nested loops have communication-free hyperplane partitionings. Next, the data and statement-iteration hyperplane constant terms are decided. First, let one data hyperplane constant term be fixed, say c_{g_1}. The rest of the data hyperplane constant terms can be determined by Condition (C8). Similarly, the statement-iteration hyperplane constant terms can be determined by Condition (C9) after the data hyperplane constant terms have been decided.
The corresponding partition group consists of the statement-iteration hyperplanes of SIS(s_1), SIS(s_2), SIS(s_3), and SIS(s_4), together with the data hyperplanes of DS(v_1), DS(v_2), and DS(v_3), with their constant terms related as above.
Fig. 4 illustrates the communication-free hyperplane partitionings for a partition group with c_{g_1} = 0. The corresponding parallelized program is a doall loop over the partition groups, enclosing an if-guarded do loop together with the do loops for the remaining statements.
6 Conclusions
This paper presents techniques for finding statement-level communication-free hyperplane partitionings for a perfectly nested loop and for sequences of imperfectly nested loops. The necessary and sufficient conditions for the feasibility of communication-free partitioning along hyperplanes are proposed. The techniques can be applied to loops with affine array references and do not use any information about data dependence distances or direction vectors.
Although our goal is to determine communication-free partitionings for loops, in reality most loops are not communication-free.
Figure 4: Communication-free statement-iteration hyperplanes and data hyperplanes for a partition group of loop (L_2). (a) Statement-iteration hyperplane of SIS(s_1). (b) Statement-iteration hyperplane of SIS(s_2). (c) Statement-iteration hyperplane of SIS(s_3). (d) Statement-iteration hyperplane of SIS(s_4). (e) Data hyperplane of DS(A). (f) Data hyperplane of DS(B). (g) Data hyperplane of DS(C).
If a program is not communication-free, the technique can be used to identify the subsets of the statement-iteration and data spaces that are communication-free. For the other statement-iterations, it is necessary to generate communication code. Two important tasks in our future work are to develop heuristics for searching for a subset of statement-iterations that is communication-free and to generate efficient code when communication is inevitable.
--R
"Global optimizations for parallelism and locality on scalable parallel machines,"
"Communication-free data allocation techniques for parallelizing compilers on multicomputers,"
"Demonstration of automatic data partitioning techniques for parallelizing compilers on multicomputers,"
"Compiling Fortran D for MIMD distributed-memory machines,"
"Evaluating compiler optimizations for Fortran D,"
Englewood Cliffs
"Communication-free hyperplane partitioning of nested loops,"
"Programming for parallelism,"
Compiling Programs for Nonshared Memory Machines.
"Compiling global name-space parallel loops for distributed ex- ecution,"
"Index domain alignment: Minimizing cost of cross-referencing between distributed arrays,"
"The data alignment phase in compiling programs for distributed-memory machines,"
"Communication-free parallelization via affine transformations,"
Memory Storage Patterns in Parallel Processing.
"Compile-time techniques for data distribution in distributed memory machines,"
"Process decomposition through locality of reference,"
"The dino parallel programming language,"
An Optimizing Fortran D Compiler for MIMD distributed-Memory Machines
"A loop transformation theory and an algorithm to maximize parallelism,"
High Performance Compilers for Parallel Computing.
"SUPERB and Vienna Fortran,"
--TR
--CTR
Weng-Long Chang , Chih-Ping Chu , Jia-Hwa Wu, Communication-Free Alignment for Array References with Linear Subscripts in Three Loop Index Variables or Quadratic Subscripts, The Journal of Supercomputing, v.20 n.1, p.67-83, August 2001
Skewed Data Partition and Alignment Techniques for Compiling Programs on Distributed Memory Multicomputers, The Journal of Supercomputing, v.21 n.2, p.191-211, February 2002
Weng-Long Chang , Jih-Woei Huang , Chih-Ping Chu, Using Elementary Linear Algebra to Solve Data Alignment for Arrays with Linear or Quadratic References, IEEE Transactions on Parallel and Distributed Systems, v.15 n.1, p.28-39, January 2004 | hyperplane partition;parallelizing compilers;communication-free;distributed-memory multicomputers;data communication |
343453 | A Low Overhead Logging Scheme for Fast Recovery in Distributed Shared Memory Systems. | This paper presents an efficient, writer-based logging scheme for recoverable distributed shared memory systems, in which logging of a data item is performed by its writer process, instead of every process that accesses the item logging it. Since the writer process maintains the log of data items, volatile storage can be used for logging. Only the readers' access information needs to be logged into the stable storage of the writer process to tolerate multiple failures. Moreover, to reduce the frequency of stable logging, only the data items accessed by multiple processes are logged with their access information when the items are invalidated, and also semantic-based optimization in logging is considered. Compared with the earlier schemes in which stable logging was performed whenever a new data item was accessed or written by a process, the size of the log and the logging frequency can be significantly reduced in the proposed scheme. | Introduction
Distributed shared memory (DSM) systems [15] transform an existing network of workstations into a powerful shared-memory parallel computer that can deliver a superior price/performance ratio. However,
with more workstations engaged in the system and longer execution time, the probability of failures in-
creases, which could render the system useless. For the DSM system to be of any practical use, it is
important for the system to be recoverable so that the processes do not have to restart from the beginning
when there is a failure [25]. One approach to providing fault tolerance in DSM systems is checkpointing and rollback-recovery. Checkpointing is an operation that saves intermediate system states into stable storage, which is not affected by system failures. With periodic checkpointing, the system can recover to one of the saved states, called a checkpoint, when a failure occurs. The activity of resuming the computation from one of the previous checkpoints is called rollback.
In DSM systems, the computational state of a process becomes dependent on the state of another
process by reading a data item produced by that process. Because of such dependency relations, a process
recovering from a failure has to force its dependent processes to roll back together, if it cannot reproduce
the same sequence of data items. While the rollback is being propagated to the dependent processes, the
processes may have to roll back recursively to reach a consistent recovery line, if the checkpoints for
those processes are not taken carefully. Such recursive rollback is called the domino effect[17], and in
the worst case, the consistent recovery line consists of a set of the initial points; i.e., the total loss of the
computation in spite of the checkpointing efforts.
One solution to cope with the domino effect is coordinated checkpointing, in which, each time a process takes a checkpoint, it coordinates with the related processes so that they take consistent checkpoints together [3, 4, 5, 8, 10, 13]. Since each checkpointing coordination under this approach produces a consistent recovery line, the processes cannot be involved in the domino effect. One possible drawback of this approach is that the processes need to be blocked from their normal computation during the checkpointing coordination. Communication-induced checkpointing is another form of coordinated
checkpointing, in which a process takes a checkpoint whenever it notices a new dependency relation
created from another process[9, 22, 24, 25]. This checkpointing coordination approach also ensures no
domino-effect since there is a checkpoint for each communication point. However, the overhead caused
by too frequent checkpointing may severely degrade the system performance.
Another solution to the domino effect problem is to use the message logging in addition to the
independent checkpointing [19]. If every data item accessed by a process is logged into the stable storage,
the process can regenerate the same computation after a rollback by reprocessing the logged data items.
As a result, the failure of one process does not affect other processes, which means that there is no
rollback propagation and also no domino effect. The only possible drawback of this approach is the
nonnegligible logging overhead.
To reduce the logging overhead, the scheme proposed in [23] avoids logging the same data item more than once when it is accessed repeatedly. For correct recomputation, each data item is logged once when it is first accessed, and the count of repeated accesses is logged for the item when the data item is invalidated. As a result, the amount of the log can be reduced compared to the scheme in [19]. The scheme proposed
in [11] suggests that a data item should be logged when it is produced by a write operation. Hence, a data
item accessed by multiple processes need not be logged at multiple sites and the amount of the log can
be reduced. However, for a data item written but accessed by no other processes, the logging becomes
useless. Moreover, for the correct recomputation, a process accessing a data item has to log the location
where the item is logged and the access count of the item. As a result, there cannot be much reduction in
the frequency of the logging compared to the scheme in [23].
To further reduce the logging overhead, the scheme proposed in [7] suggests the volatile logging.
When a process produces a new data item by a write operation, the value is logged into the volatile
storage of the writer process. When the written value is requested by other processes, the writer process
logs the operation number of the requesting process. Hence, when the requesting process fails, the data
value and the proper operation number can be retrieved from the writer process. Volatile logging incurs much less overhead than logging into stable storage. However,
when there are concurrent failures at the requesting process and the writer process, the system cannot be
fully recovered.
In this paper, we present a new logging scheme for a recoverable DSM system, which tolerates
multiple failures. In the proposed scheme, two-level log structure is used in which both of the volatile
and the stable storages are utilized for efficient logging. To speed up the logging and the recovery
procedures, a data item and its readers' access information are logged into the volatile storage of the
writer process. And, to tolerate multiple failures, only the log of access information for the data items
are saved into the stable storage. For volatile logging, the limited space can be one possible problem
and for stable logging, the access frequency of the stable storage can be the critical issue. To solve these
problems, logging of a data item is performed only when the data becomes invalidated by a new write
operation, and the writer process takes the whole responsibility for logging, instead that every process
accessing the data concurrently logs it. Also, to eliminate unnecessary logging of data items, semantic-based
optimization is considered for logging. As a result, the amount of the log and the frequency of
stable storage accesses can substantially be reduced.
The rest of this paper is organized as follows: Section 2 presents the DSM system model and the
definition of the consistent recovery line is presented in Section 3. In Section 4 and Section 5, proposed
logging and rollback recovery protocols are presented, respectively, and Section 6 proves the correctness
of proposed protocols. To evaluate the performance of the proposed scheme, we have implemented the
proposed logging scheme on top of CVM(Coherent Virtual Machine)[12]. The experimental results are
discussed in Section 7, and Section 8 concludes the paper.
2 The System Model
A DSM system considered in this paper consists of a number of nodes connected through a communication
network. Each node consists of a processor, a volatile main memory and a non-volatile secondary
storage. The processors in the system do not share any physical memory or global clock, and they communicate
by message passing. However, the system provides a shared memory space and the unit of the
shared data is a fixed-size page.
The system can logically be viewed as a set of processes running on the nodes and communicating
by accessing a shared data page. Each of the processes can be considered as a sequence of state transitions
from the initial state to the final state. An event is an atomic action that causes a state transition
within a process, and a sequence of events is called a computation. In a DSM system, the computation
of a process can be characterized as a sequence of read/write operations to access the shared data pages.
The computation performed by each process is assumed to be piecewise deterministic; that is, the computational states generated by a process are fully determined by the sequence of data pages provided for its sequence of read operations.
For the DSM model, we assume the read-replication model [21], in which the system maintains a
single writable copy or multiple read-only copies for each data page. The memory consistency model we
assume is the sequential consistency model, in which the version of a data page a process reads should be the latest version that was written for that data page [14].
Figure 1: Remote Read/Write Procedures. (a) Remote Read Operation. (b) Remote Write Operation.
for the DSM systems have been proposed including processor, weak, and release consistency [16], as
well as causal coherence [1]. However, in this paper, we focus on the sequential consistency model, and
the write-invalidation protocol [15] is assumed to implement the sequential consistency.
Figure
1 depicts the read and the write procedures under the write-invalidation protocol. For each
data page, there is one owner process which has the writable copy in its local memory. When a process
reads a data page which is not in the local memory, it has to ask for the transfer of a read-only copy
from the owner. A set of processes having the read-only copies of a data page is called a copy-set of
the page. For a process to perform a write operation on a data page, it has to be the owner of the page
and the copy-set of the page must be empty. Hence, the writer process first sends the write request to
the owner process, if it is not the owner. The owner process then sends the invalidation message to the
processes in the copy-set to make them invalidate the read-only copies of the page. After collecting the
invalidation acknowledgements from the processes in the copy-set, the owner transfers the data page with
the ownership to the new writer process. If the writer process is the owner but the copy-set is not empty,
then it performs the invalidation procedure before overwriting the page.
For each system component, we make the following failure assumptions: The processors are fail-stop
[20]. When a processor fails, it simply stops and does not perform any malicious actions. The failures
considered are transient and independent. When a node recovers from a failure and re-executes the com-
putation, the same failure is not likely to occur again. Also, the failure of one node does not affect other
nodes. We do not make any assumption on the number of simultaneous node failures. When a node fails,
the register contents and the main memory contents are lost. However, the contents of the secondary
storage are preserved and the secondary storage is used as a stable storage. The communication subsystem
is reliable; that is, the message delivery can be handled in an error-free and virtually lossless manner
by the underlying communication subsystem. However, no assumption is made on the message delivery
order.
3 The Consistent Recovery Line
A state of a process is naturally dependent on its previous states. In the DSM system, the dependency
relation between the states of different processes can also be created by reading and writing the same
data item. If a process p i reads a data item written by another process p j , then p i 's states after the read
event become dependent on p j 's states before the write event. More formally, the dependency relation
can be defined as follows: Let R_i^\alpha denote the \alpha-th read event that happened at process p_i and I_i^\alpha denote the state interval triggered by R_i^\alpha and ended right before R_i^{\alpha+1}, where I_i^0 denotes p_i's initial state. Let W_i^\alpha denote the set of write events that happened in I_i^\alpha, and let R(x, u) (or W(x, u)) denote the read (or the write) event on a data item x with the returning (or written) value u.
Definition 1: An interval I_i^\alpha is said to be dependent on another interval I_j^\beta if one of the following conditions is satisfied, and such a dependency relation is denoted by I_j^\beta -> I_i^\alpha:
C1. i = j and \beta < \alpha;
C2. R_i^\alpha = R(x, u), W(x, u) is in W_j^\beta, and there is no other write on x that happened between W(x, u) and R_i^\alpha; and
C3. There exists an interval I_k^\gamma, such that I_j^\beta -> I_k^\gamma and I_k^\gamma -> I_i^\alpha.
Figure
2 shows an example of the computational dependency among the state intervals for a DSM
system consisting of three processes p_i, p_j, and p_k. The horizontal arrow in Figure 2(a) represents the
progress of the computation at each process and the arrow from one process to another represents the data
page transfer between the processes. A data page X (or Y ) containing the data item x (or y) is denoted
by X(x) (or Y (y)). Figure 2(b) depicts the dependency relation created in Figure 2(a) as a directed
graph, in which each node represents a state interval and an edge (or a path) from a node n_\alpha to another node n_\beta indicates a direct (or a transitive) dependency relation from the state interval n_\alpha to the state interval n_\beta.
Figure 2: An Example of Dependency Relations. (a) Computation Diagram. (b) Dependency Graph.
Note that in Figure 2(a), there is no dependency relation from I_j^1 to I_k^1 according to the definition given before. However, in the DSM system, it is not easy to recognize which part of a data page has been accessed by a process. Hence, the computation in Figure 2(a) may not be differentiated from the one in which p_k's read operation is R(y'). In such a case, there must be the dependency relation I_j^1 -> I_k^1. The dotted arrow in Figure 2(b) denotes such a possible dependency relation, and the logging scheme must be carefully designed to take care of such possible dependencies for consistent recovery.
The dependency relations between the state intervals may cause possible inconsistency problems
when a process rolls back and performs the recomputation. Figure 3 shows two typical examples of inconsistent
rollback recovery cases, discussed in message-passing based distributed computing systems[6].
First, suppose the process p i in Figure 3(a) should roll back to its latest checkpoint C i due to a failure
but it cannot retrieve the same data item for R(y). Then, the result of W (x) may be different from the
one computed before the failure and hence, the consistency between p i and p j becomes violated since
computation after the event R(x) depends on the invalidated computation. Such a case is called an
orphan message case.
On the other hand, suppose the process p_j in Figure 3(b) should roll back to its latest checkpoint C_j due to a failure. For p_j to regenerate exactly the same computation, it has to retrieve the same data item x from p_i, even though p_i does not roll back to resend the data page X(x). Such a case is called a lost message case.
Figure 3: Possible Inconsistent Recovery Lines. (a) Orphan Message Case. (b) Lost Message Case.
However, in the DSM system, the lost message case itself does not cause any inconsistency problem. If there has been no other write operation since W(x) of p_i, then p_j can retrieve the same contents of
the page from the current owner at any time. Even though there has been another write operation and the contents of the page have been changed, p_j can still retrieve the data page X(x) (even with different
contents) and the different recomputation of p j does not affect other processes unless p j has had any
dependent processes before the failure.
Hence, in the DSM system, the only rollback recovery case which causes an inconsistency problem
is the orphan message case.
Definition 2: A process is said to recover to a consistent recovery line, if and only if it is not involved in
any orphan message case after the rollback recovery.
4 The Logging Protocol
For efficient logging, three principles are adopted. One is the writer-based logging. Instead of multiple
readers logging the same data page, one writer process takes the responsibility for logging of the page.
Also, invalidation-triggered logging is used, in which logging of a data page is delayed until the page
is invalidated. Finally, semantic-based logging optimization is considered. To avoid the unnecessary
logging activities, the access pattern of the data by related processes is considered into the logging
strategy.
4.1 Writer-Based, Invalidation-Triggered Logging
For consistent regeneration of the computation, a process is required to log the sequence of data pages it
has accessed. If the same contents of a data page have been accessed more than once, the process should
log the page once and log its access duration, instead of logging the same page contents, repeatedly.
The access duration is denoted by the first and the last computational points at which the page has
been accessed. The logging of a data page can be performed either at the process which accessed it (the
reader) or at the process which produced it (the writer). Since a data page produced by a writer is usually
accessed by multiple readers, it is more efficient for one writer to log the page rather than multiple readers
log the same page. Moreover, the writer can utilize the volatile storage for the logging of the data page,
since the logged pages should be required for the reader's failure, not for the writer's own failure. Even
if the writer loses the page log due to its own failure, it can regenerate the same contents of the page,
under the consistent recovery assumption.
To uniquely identify each version of data pages and its access duration, each process p i in the system
maintains the following data structures in its local memory:
- pid_i: A unique identifier assigned to process p_i.
- opnum_i: A counter variable that counts the number of read and write operations performed by process p_i. Using opnum_i, a unique sequence number is assigned to each of the read and write operations performed by p_i.
For each version of data page X produced by p_i, a unique version identifier is assigned.
- version_x: A unique identifier assigned to each version of data page X; version_x = (pid_i : opnum_i), where opnum_i is the opnum value at the time when p_i produced the current version of X.
When produces a new version of X by a write operation, version x is assigned to the page. When
the current version of X is invalidated, p i logs the current version of X with its version x into p i 's volatile
log space, and it also has to log the access duration for the current readers of page X . To report the access
duration of a page, each reader p j maintains the following data structure associated with page X , in its
local memory.
- duration_jx: A record variable with four fields, which denote the access information of page X at p_j:
  pid : The identifier of the reader process p_j.
  version : The version_x value of the page X being accessed.
  first : The value of opnum_j at the time when page X is first accessed at p_j.
  last : The value of opnum_j at the time when page X is invalidated at p_j.
When the new version of page X is transferred from the current owner, p j creates duration jx and
fills out the entries pid, version, and first. The entry last is completed when p j receives an invalidation
message for X from the current owner, p i . Process p j then piggybacks the complete duration jx into
its invalidation acknowledgement sent to p i . The owner p i , after collecting the duration kx from every
reader logs the collected access information into its volatile log space. The owner p i may also have
duration ix , if it has read the page X after writing on it. Another process which implicitly accesses the
current version of X is the next owner. Since the next owner usually makes partial updates on the current
version of the page, the current version has to be retrieved in case of the next owner's failure. Hence, when a process p_k sends a write request to the current owner p_i, it should attach its opnum_k value, and p_i, on the receipt of the request, creates duration_kx in which first = last = opnum_k + 1.
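As a concrete illustration, the bookkeeping described above can be captured by a pair of small C structures. This is only a sketch: the type and function names (version_t, duration_t, make_duration) are illustrative choices, not identifiers from the paper or from CVM.

```c
#include <stdint.h>

/* Version identifier of a data page: (writer pid : writer opnum). */
typedef struct {
    int      pid;     /* pid_i of the writer that produced this version */
    uint64_t opnum;   /* writer's opnum value at the time of the write  */
} version_t;

/* Access-duration record kept for a page X fetched by a reader p_j. */
typedef struct {
    int       pid;      /* identifier of the reader process p_j                   */
    version_t version;  /* version_x of the page being accessed                   */
    uint64_t  first;    /* opnum_j when X is first accessed at p_j                */
    uint64_t  last;     /* opnum_j when X is invalidated at p_j (0 = still valid) */
} duration_t;

/* Called by a reader when a new version of X arrives; 'last' stays 0 until
 * the invalidation message for X is received from the owner.               */
static duration_t make_duration(int reader_pid, version_t v, uint64_t reader_opnum)
{
    duration_t d = { reader_pid, v, reader_opnum + 1, 0 };
    return d;
}
```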
Note that the volatile logging of access information by the writer provides fast retrieval in case of a reader's failure. However, the information can be totally lost in case of the writer's failure since, unlike the data page contents, the access information cannot be reconstructed after the failure. Hence, to cope with concurrent failures that might occur at the writer and the
readers, stable logging of the access information is required. When the writer p i makes the volatile
log of access information, it should also save the same information into its stable log space, so that the
readers' access information can be reconstructed after the writer fails.
Figure 4 shows how the writer-based, invalidation-triggered logging protocol is executed in combination with the sequential consistency protocol, for a system consisting of three processes p_i, p_j, and p_k. The symbol R^\alpha(X) (or W^\alpha(X)) in the figure denotes the read (or the write) operation on data page X with the opnum value \alpha, and INV(X) denotes the invalidation of page X. In the figure, it is assumed that the data page X is initially owned by process p_j. As can be noticed from the figure, the proposed logging scheme requires only a small amount of extra information piggybacked on the write request message and the invalidation acknowledgements, and the volatile and the stable logging are performed only by the writer process and only at invalidation time. Figure 4 also shows the contents of the volatile and
stable log storages at process p j . Note that the stable log of p j includes only the access information,
while the volatile log includes the contents of page X in addition to the access information.
Figure 4: An Example of Writer-Based, Invalidation-Triggered Logging
By delaying the page logging until the invalidation time, the readers' access information can be collected without any extra communication. Moreover, the logging of the access durations for multiple readers can be performed with one stable storage access. Though the amount of access information is small,
frequent accesses to the stable storage may severely degrade the system performance. Hence, it is
very important to reduce the logging frequency with the invalidation-triggered logging. However, the
invalidation-triggered logging may cause some data pages accessed by readers but not yet invalidated to
have no log entries. For those pages, a reader process cannot retrieve the log entries, when it re-executes
the computation due to a failure. Such a data page, however, can be safely re-fetched from the current
owner even after the reader's failure, since a data page accessed by multiple readers cannot be invalidated
unless every reader sends the invalidation acknowledgement back. That is, the data pages currently valid in the system need not be logged.
The sequential consistency protocol incorporated with the writer-based, invalidation-triggered logging
is formally presented in Figure 5 and Figure 6, in which the bold faced codes are the ones added for
the logging protocol.
4.2 Semantic-Based Optimization
Every invalidated data page and its access information, however, are not necessary to be logged, considering
the semantics of the data page access. Some data pages accessed can be reproduced during the
recovery and some access duration can implicitly be estimated from other logged access information.
When p_i reads a data page X:
  If (Page(X) is not valid in p_i's local memory) {
    Send Read-Request(X) to Owner(X);
    Wait for Page(X);
  }
  If (Not-Exist(duration_ix)) {
    duration_ix.pid = pid_i;
    duration_ix.version = version_x;
    duration_ix.first = opnum_i + 1;
    duration_ix.last = 0;
  }

When p_j receives Read-Request(X) from p_i:
  Send Page(X) to p_i;

Figure 5: Writer-Based, Invalidation-Triggered Logging Protocol
In the semantic-based logging strategy, some unnecessary logging points are detected based on the data page access pattern, and the logging at such points is avoided or delayed. This logging strategy can further
reduce the frequency of the stable logging activity and also reduce the amount of data pages logged
in the volatile storage.
First of all, the data pages with no remote access need not be logged. A data page with no remote access means that the page is read and invalidated locally, without creating any dependency relation. For
example, in Figure 7, process p i first fetches the data page X from p j and creates a new version of X
with an identifier (i:1). This version of the page is locally read for R 2 (X) and R 3 (X), and invalidated
for W 4 (X). However, when the version (i:1) of X is invalidated due to the operation W 4 (X), p i need not
log the contents of page X and the access duration (i,i:1,2,4). The reason is that during the recovery of p i ,
the version (i:1) of X can be regenerated by the operation W 1 (X) and the access duration (i,i:1,2,4) can
be estimated as the duration between W 1 (X) and W 4 (X). The next version (i:4) of page X, however, needs to be logged when it is invalidated due to the operation W 2 (X) of p_j, since that operation implicitly requires the remote access of the version (i:4).
When p_i writes on a data page X:
  If (p_i is not Owner(X)) {
    Send (Write-Request(X) and opnum_i) to Owner(X);
    Wait for Page(X);
  }
  Else If (Copy-Set(X) is not empty) {
    Send Invalidation(X) to every p_k in Copy-Set(X);
    Wait for Invalidation-ACK(X) from every p_k in Copy-Set(X);
    duration_x = union of duration_kx over all p_k in Copy-Set(X);
    Save (version_x, Page(X), duration_x) into Volatile-Log;
    Flush (version_x, duration_x) into Stable-Log;
  }
  Write Page(X);
  version_x = (pid_i : opnum_i);

When p_i receives (Write-Request(X) and opnum_j) from p_j:
  Send Invalidation(X) to every p_k in Copy-Set(X);
  Wait for Invalidation-ACK(X) from every p_k in Copy-Set(X);
  duration_x = union of duration_kx over all p_k in Copy-Set(X);
  duration_jx.pid = pid_j;
  duration_jx.version = version_x;
  duration_jx.first = duration_jx.last = opnum_j + 1;
  duration_x = duration_x union {duration_jx};
  Save (version_x, Page(X), duration_x) into Volatile-Log;
  Flush (version_x, duration_x) into Stable-Log;
  Send Page(X) and Ownership(X) to p_j;

When p_i receives Invalidation(X) from Owner(X):
  duration_ix.last = opnum_i;
  Send (Invalidation-ACK(X) and duration_ix) to Owner(X);
  Invalidate the local copy of Page(X);

Figure 6: Writer-Based, Invalidation-Triggered Logging Protocol (Continued)
Figure 7: An Example of Local Data Accesses
By eliminating the logging of local data pages, the amount of logged data pages in the volatile log
space and also the access frequency to the stable log space can significantly be reduced. However, such
elimination may cause some inconsistency problems as shown in Figure 8, if it is integrated with the
invalidation-triggered logging. Suppose that process p i in the figure should roll back after its failure. For
the consistent recovery, p i has to perform the recomputation up to W 4 (X). Otherwise, an orphan message
case happens between p i and p j . However, p i performed its last logging operation before W 2 (X) and
there is no log entry up to W 4 (X). If p i has no dependency with p j , then it does not matter whether p i
rolls back to W 2 (X) or to W 4 (X). However, due to the dependency with p j , process p i has to perform
the recomputation at least up to the point at which the dependency has been formed.
To record the opnum value up to which a process has to recover, each process p_i in the system maintains an n-integer array, called an operation counter vector (OCV), where n is the number of processes in the system: OCV_i = (V_i[1], V_i[2], ..., V_i[n]). The i-th entry, V_i[i], denotes the current opnum value of p_i, and each other entry V_i[j] denotes the last opnum value of p_j on which p_i's current computation is dependent. This notation is similar to the causal vector proposed in [18]. Hence, when a process p_j transfers a data page to another process p_i, it sends its current OCV_j value with the page. The receiver p_i updates its OCV_i by taking the entry-wise maximum of the received vector and its own vector, as follows: V_i[k] = max(V_i[k], V_j[k]) for all k, 1 <= k <= n.
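A minimal sketch of this entry-wise maximum update in C is given below; the names (ocv_t, ocv_merge, NPROCS) are illustrative and only the merge rule itself comes from the text.

```c
#include <stdint.h>

#define NPROCS 3   /* illustrative system size */

/* Operation counter vector: v[k] is the last opnum of process k that the
 * local computation depends on; v[self] is the local opnum.              */
typedef struct {
    uint64_t v[NPROCS];
} ocv_t;

/* Applied by the receiver of a data page: merge the sender's vector into
 * the local one by taking the entry-wise maximum.                        */
static void ocv_merge(ocv_t *local, const ocv_t *received)
{
    for (int k = 0; k < NPROCS; k++) {
        if (received->v[k] > local->v[k])
            local->v[k] = received->v[k];
    }
}
```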
Figure 8: An Example of Operation Counter Vectors
For example, in Figure 8, when p_i sends the data page X and its version identifier (i:4) to p_j, it sends its OCV_i with the page, and then p_j updates its OCV_j as (4, 2, 0). When p_j sends the data page Y and its version identifier (j:3) to p_k, OCV_j = (4, 3, 0) is sent with the page, and OCV_k is updated as (4, 3, 1). As a result, each V_i[j] in OCV_i indicates the last operation of process p_j
on which process p i 's current computation is directly or transitively dependent. Hence, when p j performs
a rollback recovery, it has to complete the recomputation at least up to the point V i [j] to yield consistent
states between p i and p j .
Another data access pattern to be considered for the logging optimization is a sequence of write
operations performed on a data page, as shown in Figure 9. Processes p_i, p_j, p_k, and p_l, in the figure, sequentially write on a data page X; however, the written data is read only by R 2 (X) of p_l. This access pattern means that the only explicit dependency relation that occurred in the system is the one induced by R 2 (X) of p_l. Even though there is no explicit dependency between any of the write operations shown
in the figure, the write precedence order between those operations is very important, since the order
indicates the possible dependency relation explained in Section 3 and it also indicates which process
should become the current owner of the page after the recovery. To reduce the frequency of stable
logging without violating the write precedence order, we suggest the delayed stable logging of some
precedence orders.
In the delayed stable logging, the volatile logging of a data page and its access duration is performed
as described before, however, the stable logging is not performed when a data page having no copy-set
is invalidated. Instead, the information regarding the precedence order between the current owner of
the page and its next owner is attached into the data page transferred to the next owner. Since the new
owner maintains the unlogged precedence order information, the correct recomputation of its precedent
can be performed as long as the new owner survives. Now, suppose that the new owner and its precedent
fail concurrently. If the new owner fails without making any new dependent after the write, arbitrary
recomputation may not cause any inconsistency problem between the new owner and its precedent.
Figure 9: An Example of Write Precedence Order
However, if it fails making new dependents after the write, the correct recovery may not be possible.
Hence, a process maintaining the unlogged precedence order information should perform the stable
logging before it creates any dependent process.
For example, in Figure 9, p_i does not perform stable logging when it invalidates page X. Instead, the next owner p_j maintains the precedence information, such as (i:1) -> (j:1), and performs stable logging when it transfers page X to p_k. At this time, the precedence order between p_j and p_k, (j:1) -> (k:1), can also
be stably logged together. Hence, the page X transferred from p j to p k need not carry the precedence
relation between p j and p k . As a result, the computation shown in Figure 9 requires at most two stable
logging activities, instead of four stable logging activities.
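The bookkeeping for delayed stable logging can be sketched as follows; the structure and function names (precedence_t, remember_precedence, flush_pending_before_dependency) are illustrative, and stable_log_append is a stand-in for whatever stable-storage primitive the system actually provides.

```c
#include <stdint.h>

/* A write-precedence record: old_version of a page preceded new_version. */
typedef struct {
    int      page;
    uint64_t old_version;   /* encoded (pid:opnum) of the invalidated version   */
    uint64_t new_version;   /* encoded (pid:opnum) of the version replacing it  */
} precedence_t;

#define MAX_PENDING 128
static precedence_t pending[MAX_PENDING];   /* unlogged precedence orders */
static int          npending = 0;

/* Assumed stable-storage primitive (stub). */
static void stable_log_append(const precedence_t *p) { (void)p; }

/* Called when an ownership transfer hands us a page whose previous version
 * had an empty copy-set: the precedence order travels with the page instead
 * of being flushed by the previous owner.                                   */
static void remember_precedence(precedence_t p)
{
    if (npending < MAX_PENDING)
        pending[npending++] = p;
}

/* Must be called before this process creates any new dependent, i.e. before
 * it transfers a page it has written to another process.                    */
static void flush_pending_before_dependency(void)
{
    for (int i = 0; i < npending; i++)
        stable_log_append(&pending[i]);
    npending = 0;
}
```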
5 The Recovery Protocol
For the consistent recovery, two log structures are used. The volatile log is mainly used for the recovering
process to perform the consistent recomputation, and the stable log is used to reconstruct the volatile log
to tolerate multiple failures. In addition to the data logging, independent checkpointing is periodically
performed by each process to reduce the recomputation time.
5.1 Checkpointing and Garbage Collection
To reduce the amount of recomputation in case of a failure, each process in the system periodically takes
a checkpoint. A checkpoint of a process p i includes the intermediate state of the process, the current
value of opnum i and OCV i , and the data pages which p i currently maintains. When a process takes a
new checkpoint, it can safely discard its previous checkpoint. The checkpointing activities among the
related processes need not be performed in a coordinated manner.
A process, however, has to be careful in discarding the stable log contents saved before the new
checkpoint, since any of those log entries may still be requested by other dependent processes. Hence,
for each checkpoint C_\alpha of a process p_i, p_i maintains a logging vector, say LV_{i,\alpha}. The j-th entry of the vector, denoted by LV_{i,\alpha}[j], indicates the largest opnum_j value in the duration_jx entries logged before the corresponding checkpoint. When a process p_j takes a new checkpoint and the recomputation before that checkpoint is no longer required, it sends its current opnum_j value to the other processes. Each process periodically compares the received opnum_j value with the LV_{i,\alpha}[j] value of each checkpoint C_\alpha. When, for every p_j in the system, the received opnum_j becomes larger than LV_{i,\alpha}[j], process p_i can safely discard the log information saved before the checkpoint C_\alpha.
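A rough C sketch of this garbage-collection test is shown below; the names (logging_vector_t, can_discard_log_before, NPROCS) are illustrative assumptions, not identifiers from the paper.

```c
#include <stdbool.h>
#include <stdint.h>

#define NPROCS 3   /* illustrative system size */

/* Logging vector kept for one checkpoint C_alpha of the local process:
 * lv[j] is the largest opnum_j appearing in any duration_jx entry that
 * was logged before C_alpha was taken.                                   */
typedef struct {
    uint64_t lv[NPROCS];
} logging_vector_t;

/* latest_opnum[j] holds the most recent announcement from p_j that its
 * recomputation before its new checkpoint is no longer required.  The
 * log saved before C_alpha may be discarded once every announced value
 * exceeds the corresponding logging-vector entry.                       */
static bool can_discard_log_before(const logging_vector_t *ckpt,
                                   const uint64_t latest_opnum[NPROCS])
{
    for (int j = 0; j < NPROCS; j++) {
        if (latest_opnum[j] <= ckpt->lv[j])
            return false;
    }
    return true;
}
```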
5.2 Rollback-Recovery
The recovery of a single failure case is first discussed. For a process p_i to be recovered from a failure, a recovery process for p_i, say p'_i, is first created, and it sets p_i's status as recovering. Process p'_i broadcasts the log collection message to all the other processes in the system. On the receipt of the log collection message, each process p_j replies with the i-th entry of its OCV_j, V_j[i]. Also, for any data page X which is logged at p_j and accessed by p_i, the logged entry of duration_ix and the contents of page X are attached to the reply message. When p'_i collects the reply messages from all the processes in the system, it creates its recovery log by arranging the received duration_ix entries in the order of duration_ix.first and also arranging the received data pages in the corresponding order. Process p'_i then selects the maximum value among the collected V_j[i] entries, where p_j is any process other than p_i, and sets that value as p_i's recovery point.
Since all the other processes in the system, except p_i, are in the normal computational status, p'_i can collect the reply messages from all of them, and the selected recovery point of p_i indicates the last computational state of p_i on which any process in the system is dependent. Also, the constructed recovery log for p_i includes every remote data page that p_i has accessed before the failure. The recovery process p'_i then restores the computational state from the last checkpoint of p_i, and from the restored state, the process begins the recomputation. The restored state includes the same set of active data pages that were residing in the main memory when the checkpoint was taken. The value of opnum_i is also set to its value at the time of checkpointing.
During the recomputation, process p_i maintains a variable called Next_i, which is the value of duration_ix.first for the first entry of the recovery log; Next_i thus indicates the time to fetch the next data page from the recovery log.
Table 1: Retrieval of Data Pages during the Recomputation
Condition                                                   Action
Page(X) is in the ADPS and opnum_i <= duration_ix.last      Read(X) or Write on Page(X) in the ADPS
Page(X) is not in the ADPS and opnum_i < Next_i             Send Read Request(X) to Owner(X)
duration_ix.last < opnum_i < Next_i                         Invalidate(X); send Read Request(X) to Owner(X)
opnum_i = Next_i                                            Fetch Page(X) and duration_ix from the recovery log
The read and write operations for p_i's recomputation are performed as follows. For each read or write operation, p_i first increments its opnum_i value by one and then compares opnum_i with Next_i. If they match, the first entry of the recovery log, including the contents of the corresponding page and its duration_ix, is moved to the active data page space. Then, the operation is performed on the new page, and any previous version of the page is removed from the active data page space. The new version of the page is used for the read and write operations until opnum_i reaches the value of duration_ix.last.
For some read and write operations, data pages created during the recomputation need to be used because
of the logging optimization. Hence, if a new version of a data page X is created by a write operation
and the corresponding log entry is not found in the recovery log, the page must be kept in the active data
page space and its duration_ix.last is set to infinity. This version of page X can be used until the
next write operation on X is performed or a new version of X is retrieved from the recovery log.
Sometimes, when p_i reads a data page X, it may face the situation that a valid version of X is not found in the active data page space and it is not yet the time to fetch the next log entry (opnum_i < Next_i). This situation occurs for a data page which has been accessed by p_i before its failure but has
not been invalidated. Note that such a page has not been logged since the current version is still valid.
In this case, the current version of page X must be re-fetched from the current owner. Hence, when p i
reads a data page X , it has to request the page X from the current owner, if it does not have any version
of X in the active data page space, or, the duration ix .last value for page X in the active data page space
is less than opnum i . In both cases, opnum i must be less than Next i . Any previous version of page X
has to be invalidated after receiving a new version. The retrieval and the invalidation activities of data
pages during the recomputation are summarized in Table 1. The active data page space is abbreviated to
ADPS in the table.
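A small C sketch of the decision summarized in Table 1 is given below; the enum and function names (replay_action_t, replay_decide) are illustrative, not taken from the paper or from CVM.

```c
#include <stdint.h>

/* Possible ways the next replayed operation obtains its data page. */
typedef enum {
    USE_ADPS_PAGE,       /* read/write the copy in the active data page space */
    FETCH_FROM_LOG,      /* move the next recovery-log entry into the ADPS    */
    REFETCH_FROM_OWNER   /* invalidate any stale copy, ask the current owner  */
} replay_action_t;

/* opnum      : sequence number of the operation about to be replayed
 * next       : Next_i, duration.first of the first recovery-log entry
 * in_adps    : non-zero if some version of X is in the ADPS
 * last_valid : duration_ix.last of that version (UINT64_MAX = still valid)  */
static replay_action_t replay_decide(uint64_t opnum, uint64_t next,
                                     int in_adps, uint64_t last_valid)
{
    if (opnum == next)
        return FETCH_FROM_LOG;       /* time to consume the next log entry        */
    if (!in_adps || last_valid < opnum)
        return REFETCH_FROM_OWNER;   /* page never logged: still valid at owner   */
    return USE_ADPS_PAGE;            /* logged version still covers this opnum    */
}
```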
During the recomputation, process p i also has to reconstruct the volatile log contents which were
maintained before the failure, for the recovery of other dependent processes. The access information of the volatile log can be retrieved from its stable log contents while p'_i is waiting for the reply messages
after sending out the log collection requests. However, the data pages which were saved in the volatile
log must be created during the recomputation. Hence, for each write operation, p i logs the contents of
the page with its version identifier if the corresponding access information entry is found in the volatile
log. In any case, the write operation may cause the invalidation of the previous version of the page in
the active data page space, however, it does not issue any invalidation messages to the other processes
during the recomputation. When opnum i reaches the selected recovery point, p i changes its status from
recovering to normal and resumes the normal computation.
Now, we extend the protocol to handle the concurrent recoveries from the multiple failures. While
a process p_i (or its recovery process p'_i) performs the recovery procedure, another process p_j in the system can be in the failed state or it can also be in the recovering status. If p_j is in the failed state, it cannot reply back to the log collection message of p'_i, and p'_i has to wait until p_j wakes up. However, if p_j is in the
recovering status, it should not make p i wait for its reply since in such a case, both of p i and p j must end
up with a deadlocked situation. Hence, any message sent out during the recovering status must carry the
recovery mark to be differentiated from the normal ones, and such a recovery message must be taken care
of without blocking, whether the message is for its own recovery or related to the recovery of another
process. However, any normal message, such as the read/write request or the invalidation message, need
not be delivered to a process in the recovering status, since the processing of such a message during the
recovery may violate the correctness of the system.
When p_i (or p'_i) in the recovering status receives a log collection message from another process p'_j, it first reconstructs the access information part of p_i's volatile log from the stable log contents, if it has not done so yet. It then replies to p_j with the duration_jx entries logged at p_i. Even though the access information can be restored from the stable log contents, the data pages which were contained in the volatile log may not yet have been reproduced. Hence, for each duration_jx sent to p_j, p_i records the value of duration_jx.version, and the corresponding data page should be sent to p_j later as p_i creates the page during the recomputation. Process p_j begins the recomputation as soon as the access information is collected from every process in the system.
As a result, for every data page logged before the failure, the corresponding log entry, duration ix , can
be retrieved from the recovery log, however, the corresponding data page X may not exist in the recovery
log when the process p i begins the recomputation. Note that in this case, the writer of the corresponding
page may also be in the recovery procedure. Hence, p i has to wait until the writer process sends the page
X during the recomputation or it may send the request for the page X using the duration ix .version. In
the worst case, if two processes p_i and p_j concurrently execute the recomputation, the data pages must be re-transferred between the two processes as they were before the failure. However, no deadlocked situation can occur, since the data transfer exactly follows the scenario described by the
access information in the recovery log and the scenario must follow the sequentially consistent memory
model.
Before the recovering process p_i begins its normal computation, it has to reconstruct two more pieces of information: one is the current operation counter vector and the other is the data page directory. The operation
counter vector can be reconstructed from the vector values received from other processes in the system.
For each V i [j] value, p i can use the value V j [i] retrieved from process p j , and for V i [i] value, it can use its
current opnum i value. The directory includes the ownership and the copy-set information for each data
page it owns. The checkpoint of p i contains the ownership information of the data pages it has owned
at the time of checkpointing. Hence, during the recomputation, p i can reconstruct its current ownership
information as follows: When p i performs a write operation on a data page, it records the ownership of
the page on the directory. When p i reads a new data page from the log, it invalidates the ownership of that
page, since the logging implies that the page was invalidated. However, the copy-sets of the data pages the process owns cannot be obtained. Since the copy-set information is only used for the future invalidation of a page, the process can conservatively put all the processes into the copy-set.
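A minimal sketch of this directory reconstruction during replay is shown below; the structure, array sizes, and function names are illustrative assumptions, and the conservative "everyone is in the copy-set" rule is the one described in the text.

```c
#include <stdbool.h>

#define NPROCS 3    /* illustrative system size   */
#define NPAGES 64   /* illustrative page count    */

/* Per-page directory entry rebuilt during recomputation. */
typedef struct {
    bool owned;             /* does the recovering process own this page?  */
    bool copyset[NPROCS];   /* conservative copy-set used for invalidation */
} dir_entry_t;

static dir_entry_t directory[NPAGES];

/* A write performed during replay makes the page locally owned again;
 * the copy-set is unknown after the failure, so assume every process.    */
static void replay_write(int page)
{
    directory[page].owned = true;
    for (int k = 0; k < NPROCS; k++)
        directory[page].copyset[k] = true;
}

/* Fetching a new version of the page from the recovery log means the old
 * version was invalidated before the failure, so ownership is dropped.   */
static void replay_fetch_from_log(int page)
{
    directory[page].owned = false;
}
```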
6 The Correctness
Now, we prove the correctness of the proposed logging and recovery protocols.
Lemma 1: The recovery point selected under the proposed recovery protocol is consistent.
Proof: We prove the lemma by contradiction. Suppose that a process p_i recovering from a failure selects an inconsistent recovery point, say R_i. Then p_i must have produced a data page X with version_x = (i:k), where k > R_i, and there must be another process p_j alive in the system which has read that page. This means that V_j[i] of p_j must be larger than or equal to k. Since R_i is selected as the maximum value among the collected V_j[i] values, R_i >= k, and a contradiction occurs. 2
Lemma 2: Under the proposed logging protocol, a log exists for every data access point prior to the
selected recovery point.
Proof: For any data access point, if the page used has been transferred from another process, either it was
logged before it has been transferred (the remote write case) or it is logged when the page is invalidated
(the remote read case). If a data page locally generated is used for a data access point, either a log is
created for the page when the page is invalidated (the remote invalidation case) or the log contents can
be calculated from the next write point (the local invalidation case). In any case, the page which has
not been invalidated before the failure can be retrieved from the current owner. Therefore, for any data
access point, the log of the data page can either be found in the recovery log or calculated from other log
contents. 2
Theorem 1: A process recovers to a consistent recovery line under the proposed logging and recovery
protocols.
Proof: Under the proposed recovery protocol, a recovering process selects a consistent recovery point
(Lemma 1), and the logging protocol ensures that for every data access point prior to the selected recovery
point, a data log exists (Lemma 2). Therefore, the process recovers to a consistent recovery line. 2
7 The Performance Study
To evaluate the performance of the proposed scheme, two sets of experiments have been performed. A
simple trace-driven simulator has been built to examine the logging behavior of various parallel programs
running on the DSM system, and then the logging protocols have been implemented on top of CVM
system to measure the effects of logging under the actual system environments.
Figure 10: Comparison of the Logging Amount (Synthetic Traces)
7.1 Simulation Results
A trace-driven simulator has been built and the following logging protocols have been simulated:
Shared-access tracking(SAT)[23] : Each process logs the data pages transferred for read and write
operations, and also logs the access information of the pages.
Read-write logging(RWL) : Each process logs the data pages produced by itself, and also, for the
data pages accessed, it logs the access information of the pages. In both of the SAT and the RWL
schemes, the data pages and the related information are first saved in the volatile storage and logged
into the stable storage when the process creates a new dependency by transferring a data page.
Write-triggered logging(WTL) : This is what we propose in this paper.
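To make the different logging triggers concrete, the WTL rule, as we understand it from this paper, can be paraphrased by the sketch below (volatile logging of the page at its writer when the page is invalidated by overwriting, with only the access information duplicated to stable storage); the data structures and names are ours, not CVM's.

def wtl_on_invalidate(page, volatile_log, stable_log):
    # Only the current owner (the writer) logs, and only when the page content
    # is about to be overwritten/invalidated.
    volatile_log.append((page.id, page.version, page.data))       # full page, volatile
    stable_log.append((page.id, page.version, page.access_info))  # access info only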
The simulation has been run with two different sets of traces: One is the traces synthetically generated
using random numbers and the other is the execution traces of some parallel programs.
First, for the simulation, a model with 10 processes is used and the workload is randomly generated
by using three random numbers for the process number, the read/write ratio, and the page number.
One simulation run consists of 100,000 workload records and the simulation was repeated with various
Figure 11: Comparison of the Logging Frequency (Synthetic Traces)
read/write ratios and locality values. The read/write ratio indicates the proportion of read operations to
total operations. The read/write ratio 0.9 means that 90% of operations are reads and 10% are writes.
The locality is the ratio of memory accesses which are satisfied locally. The locality 0.9 means that 90%
of the data accesses are for the local pages.
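A workload generator of the kind described here can be sketched as follows; this is an illustrative reconstruction, not the simulator actually used, and the way locality is modeled (a fixed set of local pages per process) is our assumption.

import random

def make_workload(n_records=100000, n_procs=10, n_pages=1024,
                  read_ratio=0.9, locality=0.9):
    local_pages = {p: random.sample(range(n_pages), n_pages // n_procs)
                   for p in range(n_procs)}
    workload = []
    for _ in range(n_records):
        proc = random.randrange(n_procs)                           # process number
        op = 'read' if random.random() < read_ratio else 'write'   # read/write ratio
        if random.random() < locality:
            page = random.choice(local_pages[proc])                # satisfied locally
        else:
            page = random.randrange(n_pages)                       # possibly remote
        workload.append((proc, op, page))
    return workload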
The simulation results with the synthetic traces best show the effects of
logging for the various application program types. Figure 10 and Figure 11 show the effects of the
read/write ratio and the locality of the application program on the number of logged data pages and
the frequency of stable logging, respectively. The number in the parenthesis of the legend indicates
the locality. In the SAT scheme, after each data page miss, the logging of the newly transferred page is
required. Hence, as the write ratio increases, a large number of data pages become invalidated and a
large number of page misses can occur. As a result, the number of logged pages and also the logging
frequency are increased. However, as the locality increases, a higher portion of the page accesses can
be satisfied locally, and hence the number of data pages to be logged and the logging frequency can be
decreased.
In the RWL scheme, the number of logged data pages is directly proportional to the write ratio and
the number is not affected by the locality, since each write operation requires the logging. However, the
Figure 12: Comparison of the Logging Amount (Parallel Program Traces)
stable logging under this scheme is performed when the process creates a new dependency, as in the SAT
scheme, and hence the logging frequency of the RWL scheme is similar
to that of the SAT scheme. Comparing the SAT scheme with the RWL scheme, the performance of
the SAT scheme is better when both the write ratio and the locality are high, since in such environments,
there has to be a lot of logging for the local writes in the RWL scheme.
As for the WTL scheme, only the pages being updated are logged and the logging is performed only
at the owners of the data pages. Compared with the SAT scheme in which every process in the copy-set
performs the logging, the number of logged data pages is much smaller and the logging frequency is
much lower in the WTL scheme. Also, in the WTL scheme, there is no logging for the data page with no
remote access and some logging of the write-write precedence order can be delayed. Hence, the WTL
scheme shows a much smaller number of logged data pages and a much lower logging frequency compared
with the RWL scheme, in which the logging is performed for every write operation. Furthermore, the
logging of data pages for the SAT scheme and the RWL scheme requires the stable storage, while for the
WTL scheme, the volatile storage can be used for the logging.
To further validate our claim, we have also used real multiprocessor traces for the simulation. The
traces contain references produced by a 64-processor MP, running the following four programs: FFT,
Figure 13: Comparison of the Logging Frequency (Parallel Program Traces)
SPEECH, SIMPLE and WEATHER. Figure 12 and Figure 13 show the simulation results using the
parallel program traces. In Figure 12, for the programs, FFT, SIMPLE and WEATHER, the SAT scheme
shows the worst performance, because those programs may contain a large number of read operations
and the locality of those reads must be low. However, for the program SPEECH, the RWL scheme shows
the worst performance, because the program contains a lot of local write operations. In all cases, the WTL
scheme consistently shows the best performance in terms of the amount of log. Also, considering the logging
frequency shown in Figure 13, for all programs, the WTL scheme shows the lowest frequency.
From the simulation results, we can conclude that our new scheme (WTL) consistently reduces the
number of data pages that have to be logged and also the frequency of the stable storage accesses,
compared with the other schemes (SAT, RWL). The reduction is more than 50% in most of the cases,
and it appears in both the synthetic and the parallel program traces.
7.2 Experimental Results
To examine the performance of the proposed logging protocol under the actual system environments,
the proposed logging protocol (WTL) and the protocol proposed in [23] (SAT) have been implemented
on top of a DSM system. In order to implement the sequentially consistent DSM system, we use
Table 2: Experimental Results. Columns: Application Program, Logging Scheme, Execution Time (sec.), Logging Overhead (%), Amount of Logged Information (Bytes), Number of Stable Logging.
the CVM(Coherent Virtual Machine) package [12], which supports the sequential consistency memory
model, as well as the lazy release consistency memory models. CVM is written in C++ and is well
modularized, so it was straightforward to add the logging scheme. The basic high-level classes are
CommManager class and Msg class which handle the network operation, MemoryManager class which
handles the memory management, and Page class and DiffDesc class handling the page management.
The protocol classes such as LMW, LSW and SEQ inherit the high level classes and support operations
according to each protocol. We have modified the subclasses in SEQ to implement the logging protocols.
We ran our experiments using four SPARCsystem-5 workstations connected through 10Mbps ethernet.
For the experiments, four application programs, such as FFT, SOR, TSP, WATER, have been run. Table 2
summarizes the experimental results.
The amount of logged information in Table 2 denotes the amount of data pages and access information
which should be logged in the stable storage. For the SAT scheme, the data pages with the size of
4K bytes and the access information should be logged, whereas for the WTL scheme, only the access
information is logged. Hence, the amount of information logged in the WTL scheme is only 0.01%-0.5%
of the one logged in the SAT scheme. The number of stable logging in the table indicates the frequency
of disk access for logging. The experimental results show that the logging frequency in the WTL scheme
Figure 14: Comparison of the Logging Overhead
is only 57%-66% of the one in the SAT scheme. In addition to the amount of logged information and the
logging frequency, we have also measured the total execution times of the parallel programs under each
logging scheme and without logging to compare the logging overhead.
The logging overhead in Table 2 indicates the increase in the execution time under each protocol
compared to the execution time under the no-logging environment, and the comparison of the logging overhead
is also depicted in Figure 14. As shown in the table, the SAT scheme requires 20%-189% logging
overhead, whereas the WTL scheme requires 5%-85% logging overhead. Comparing these two schemes,
the WTL scheme achieves 55%-75% of reduction in the logging overhead compared to the SAT scheme.
One reason for such a reduction is the low logging frequency imposed by the WTL scheme, and the small
amount of log information written under the WTL scheme can be another. However, considering
the fact that an increase in the amount of data written per disk access does not cause
much increase in the disk access time, the 75% reduction in the logging overhead may require another
explanation. One possible explanation is the cascading delay due to the disk access time; that is, the
stable logging delays the progress of not only the process which performs the logging, but also the one
waiting for the data transfer from the process.
Overall, the experimental results show that the WTL scheme reduces the amount of logged information
and the logging frequency compared to the SAT scheme, and they also show that in the actual
system environment, more reductions on the total execution time can be achieved.
Conclusions
In this paper, we have presented a new message logging scheme for the DSM systems. The message
logging has been usually performed when a data page is transferred for a read operation so that the
process does not have to affect other processes in case of the failure recovery. However, the logging to
the stable storage always incurs some overhead. To reduce such overhead, the logging protocol proposed
in this paper utilizes a two-level log structure; the data pages and their access information are logged into
the volatile storage of the writer process and only the access information is duplicated into the stable
storage to tolerate multiple failures. The usage of two-level log structure can speed up the logging and
also the recovery procedures with the higher reliability.
The proposed logging protocol also utilizes two characteristics of the DSM system. One is that
not all the data pages read and written have to be logged. A data page needs to be logged only when
it is invalidated by the overwriting. The other is that the data page accessed by multiple processes
need not be logged at every process site. By having one responsible process log the data page and the
related information, the amount of logging overhead can be substantially reduced. From the extensive
experiments, we have compared the proposed scheme with other existing schemes and concluded that the
proposed scheme always incurs a much lower logging overhead, and the reduction in the logging overhead
is more profound when the processes have more reads than writes. Since disk logging slows down the
normal operation of the processes, we believe that parallel applications would greatly benefit from our
new logging scheme.
--R
Implementing and programming causal distributed shared memory.
Causal memory.
The performance of consistent checkpointing in distributed shared memory systems.
Network multicomputing using recoverable distributed shared memory.
Distributed snapshot: Determining global states of distributed systems.
Lightweight logging for lazy release consistent distributed shared memory.
Coordinated checkpointing-rollback error recovery for distributed shared memory multicomputers
Relaxing consistency in recoverable distributed shared memory.
Reducing interprocessor dependence in recoverable shared memory.
Implementation of recoverable distributed shared memory by logging writes.
CVM: The Coherent Virtual Machine.
A recoverable distributed shared memory integrating coherence and recoverability.
How to make a multiprocessor computer that correctly executes multiprocess pro- grams
Shared virtual memory on loosely coupled multiprocessors.
Distributed shared memory: A survey of issues and algorithms.
Reliability issues in computing system design.
The causal ordering abstraction and a simple way to implement it.
Algorithms implementing distributed shared memory.
Fault tolerant distributed shared memory.
Reduced overhead logging for rollback recovery in distributed shared memory.
Fast recovery in distributed shared virtual memory systems.
Recoverable distributed shared memory.
--TR
--CTR
Taesoon Park , Inseon Lee , Heon Y. Yeom, An efficient causal logging scheme for recoverable distributed shared memory systems, Parallel Computing, v.28 n.11, p.1549-1572, November 2002 | rollback-recovery;checkpointing;distributed shared memory system;fault tolerant system;message logging |
343503 | Interval routing schemes allow broadcasting with linear message-complexity (extended abstract). | The purpose of compact routing is to provide a labeling of the nodes of a network, and a way to encode the routing tables so that routing can be performed efficiently (e.g., on shortest paths) while keeping the memory-space required to store the routing tables as small as possible. In this paper, we answer a long-standing conjecture by showing that compact routing can also help to perform distributed computations. In particular, we show that a network supporting a shortest path interval routing scheme allows to broadcast with an O(n) message-complexity, where n is the number of nodes of the network. As a consequence, we prove that O(n) messages suffice to solve leader-election for any graph labeled by a shortest path interval routing scheme, improving therefore the O(m previous known bound. | INTRODUCTION
This paper addresses a problem originally formulated by
D. Peleg, and that can be informally summarized as follows:
"Do networks supporting shortest path compact routing schemes present specific abilities in terms of distributed computation? E.g., broadcasting, leader-election, etc." This
paper answers in the affirmative, by showing that n-node networks
supporting interval routing schemes [26, 27] (IRS for short)
allow broadcasting with O(n) message-complexity.
More formally, a network G = (V, E) (in this paper, by
network, we will always mean a connected undirected graph
without loops and multiple edges) supports an IRS if the nodes
of that network can be labeled from 1 to n = |V| in such a
way that the following is satisfied: given any node x ∈ V of
degree d, there is a set of d intervals I_1, ..., I_d,
one for each edge e_1, ..., e_d incident to x, such that (1) V \ {x} ⊆ I_1 ∪ ... ∪ I_d,
and (2) y ∈ I_i implies that there is a shortest path from x to y passing
through the edge e_i. So, IRS is a shortest path routing table
having the property that the set of destination-addresses
using a given link is a consecutive set of integers. IRS is a
famous technique for the purpose of compact routing since
a network of maximum degree Δ supporting an IRS
has its routing tables of size O(Δ log n) bits, to be compared
with the Θ(n log Δ) bits of a routing table returning, for
every destination, the output port corresponding to that
destination. For more about IRS, we refer to [6, 10, 17, 19,
21, 22], and to [16] for a recent survey. For more about
compact routing in general, we refer to [12, 13, 14, 18, 20,
On the other hand, broadcasting is the information dissemination
problem which consists, for an arbitrary node of a
network, in sending the same message to all the other nodes.
The message-complexity of broadcasting lies
between Ω(n) and O(m) since, on one hand, the reception of
the message by every node but the source requires at least
n − 1 messages, and, on the other hand, broadcasting can
always be performed by flooding the network, that is, upon
reception of the message, every node forwards that message
through all its incident edges (apart from the one on which it has
received the message). Better upper and lower bounds can
be derived as a function of the knowledge of the nodes of the
network (e.g., see [1, 2, 4]), and of the maximal size of the
message-headers (e.g., see [24]). In the following, the only
knowledge of every node is its label in some IRS, and the
intervals attached to its incident edges in the same IRS. The
size of each message-header transmitted by our broadcasting
protocol is ⌈log2 n⌉ bits.
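For comparison, the flooding upper bound mentioned above can be illustrated by the following sketch of ours, where the graph is given by adjacency lists; each node forwards the message at most once, so at most 2|E| messages are sent.

from collections import deque

def flood(adj, source):
    received = {source}
    queue = deque((source, nb) for nb in adj[source])
    messages = len(adj[source])
    while queue:
        sender, node = queue.popleft()
        if node in received:
            continue                      # duplicate copy, simply discarded
        received.add(node)
        for nb in adj[node]:
            if nb != sender:              # forward on all other incident edges
                queue.append((node, nb))
                messages += 1
    return messages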
The relationship between IRS and broadcasting was previously
investigated. For instance, van Leeuwen and Tan [27]
proved that minimum spanning tree construction, and therefore
broadcasting, and other related distributed problems
such as leader-election can be solved by exchanging O(n)
messages in a ring labeled by an IRS, and O(m+n) messages
for arbitrary graphs labeled by an IRS. Recall that leader-
election without any network knowledge requires Ω(n log n)
messages for a ring, and Ω(m + n log n) messages for an arbitrary
graph [15]. More generally, the question of how much
a labeling can help in the solution of distributed problems
was studied in [8, 9] in the framework of Sense of Direction.
It was shown in [5] that the message-complexity
of the broadcast problem is n − 1 in the restricted class of
networks supporting all-shortest-path IRS (an IRS where all
the shortest paths are represented) with additional restrictions
on the intervals (strictness and linearity). Finally, de la
Torre, Narayanan and Peleg [3] showed that the same result
holds in IRS networks satisfying the so-called ssr-tree prop-
erty. (Informally, this property states that, for any node x,
the set of paths induced by the IRS originated at x, and
ending at all the other nodes, is a tree.) In this paper, we
improve these results by showing that a network supporting
a standard shortest path IRS supports a broadcast protocol
of message-complexity O(n).
Our result has many consequences on other problems, such
as leader-election or distributed spanning tree. For instance
Korach et al. [23] have shown that the leader-election problem
can be solved using O((b(n) + n) · log n) messages, where
b(n) is the message-complexity of broadcasting in an n-node
network. Therefore, n-node networks supporting shortest
path IRS allow the leader-election problem to be solved with
O(n log n) message-complexity (instead of O(m+n log n) for
arbitrary networks). In fact, we prove that O(n) messages
suffice to solve leader-election for any graph labeled by a
shortest path IRS (see [11] for a proof), improving therefore
the O(m+n) previous bound of van Leeuwen and Tan [27].
The paper is organized according to several hypotheses on
the IRS. These hypotheses will be relaxed while going further
and further in the paper. We can indeed distinguish
two types of intervals: an interval [a, b] with a ≤ b refers to the set
{a, a+1, ..., b} and is said to be linear, whereas an interval [a, b] with
b < a refers to the set {a, ..., n, 1, ..., b} and is said to be
cyclic. The class of networks supporting linear IRS (LIRS
for short), that is, such that all intervals of the IRS are linear,
is strictly included in the class of networks supporting
an IRS which possibly includes cyclic intervals. We can also
distinguish the case V \ {x} = ∪_i I_i for every node x from
the case in which x appears in the interval of one of its incident
edges for at least one node x. In the former case,
the IRS is said to be strict. Hence we get four types of
interval routing schemes: IRS, LIRS, strict IRS, and strict
LIRS. The following section presents some preliminary re-
sults. Section 3 is dedicated to networks supporting strict
LIRS. Sections 4 and 5 then successively relax the strictness
requirement and the linearity requirement, in order to
present our main result given in Theorem 1.
2. PRELIMINARY RESULTS
In this section, we will present a distributed broadcast protocol
for a network supporting an IRS. This protocol is very
simple though very efficient since its message-complexity
will be shown to be at most O(n). It is called the up/down
protocol. The second part of the section presents tools for
the analysis of this protocol.
2.1 The up/down broadcast protocol
Let σ be the source of the broadcast, identified by its label
in the IRS. The source initiates the broadcast by sending
two copies of the message, one destinated to node σ − 1,
and the other destinated to node σ + 1. The latter copy is
called the "up" copy whereas the former is called the "down"
copy. (Obviously, if σ = 1 or σ = n, the source sends
only one copy.) There will be at most two copies of the
message circulating in the network. Let us concentrate on
the up copy destinated to σ + 1. The message will eventually
reach σ + 1 by the shortest path set by the IRS. It may
possibly cross intermediate nodes, but these nodes will just
forward the message to its destination stored in the message-header,
without taking care of its content. Once the node
σ + 1 receives the message, it reads its content, modifies the
header by replacing σ + 1 by σ + 2, and forwards it toward
the node labeled σ + 2. More generally, once a node labeled
x receives the message destinated to x, it reads
its content, modifies the header by replacing x by x + 1,
and forwards it toward the node labeled x + 1. When the
node labeled n receives the message, it reads its content, and
removes it from the network. The same strategy is applied
for the down copy, by replacing x by x − 1,
until it reaches the node labeled 1 which reads its content and
removes it from the network. Again, at every given time, a
copy of the message is destinated to only one specific node,
called the target, and when a node different from that target
receives the message, it does not take the opportunity to
read its content, but just forwards the message to the target
along the shortest path set by the IRS that leads to that
target.
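As an illustration, the behaviour of the up copy can be sketched as follows; next_hop(x, target) stands for the forwarding decision fixed by the IRS at node x for destination target, and is an assumption of this sketch rather than an actual routing table.

def up_copy(sigma, n, next_hop):
    messages = 0
    current, target = sigma, sigma + 1
    while target <= n:
        while current != target:           # blind forwarding along the IRS path
            current = next_hop(current, target)
            messages += 1
        target += 1                         # the target reads and re-targets
    return messages

When next_hop follows shortest paths, the returned count is exactly the sum of the distances between consecutively labeled nodes from σ to n.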
Clearly, the message-complexity of the up/down protocol is
equal to Σ_{x=1}^{n−1} d(x, x+1), where d(x, y) denotes the distance
between the node labeled x and the node labeled y in G.
Unfortunately, up to the knowledge of the authors, there are
no good bounds on this sum for networks supporting IRS.
Let us just point out that such a bound is known [5]
for networks supporting all-shortest-path strict LIRS, that is,
for networks such that the strict LIRS encodes all shortest
paths. The class of networks supporting all-shortest-path
strict LIRS is very restricted (see [16]), and, as we will see,
only weaker results can be proved for (single shortest path)
LIRS or IRS.
Assume that the source is labeled 1, and let W = w_1, ..., w_λ
be the sequence of nodes visited by the message from its
source w_1 (labeled 1) to its final destination w_λ (labeled n).
The same node may appear several times in W. However, a
node x can only appear once as a target node, and thus the
possible other occurrences of node x correspond to steps of
the protocol in which x is traversed by a message destinated
to a target node y ≠ x. Let us complement the sequence
W by two virtual nodes, w_0 and w_{λ+1}. These
two nodes are not target nodes. More precisely, W can be
written as
W = X_1 Y_1 X_2 Y_2 ... X_ρ Y_ρ,   (1)
where every X_i is a maximal sequence of consecutive
target nodes, and Y_i, 1 ≤ i ≤ ρ, is the sequence of non-target
nodes between the last node of X_i and the first node
of X_{i+1}. The nodes
of the Y_i's are called intermediate nodes.
Notation. For every i, throughout the paper, we will always
make use of the following notation: x_i denotes
the last node of X_i, y_i the last node of Y_i, and z_i
the first node of Y_i.
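For instance (a purely illustrative example of ours, not taken from the paper), suppose n = 5 and the up copy visits W = 1, 2, 5, 3, 4, 5, where the first occurrence of node 5 is as an intermediate node crossed on the way from target 2 to target 3. The decomposition then reads X_1 = 1, 2, Y_1 = 5, X_2 = 3, 4, 5, with Y_2 empty, and the virtual nodes w_0 and w_{λ+1} are appended at the two ends.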
2.2 Intermediate nodes sequences
The next lemmas give some properties satisfied by the intermediate
nodes in the Y i 's. These properties will be shown
to be helpful to compute the message-complexity of the
up/down protocol.
Lemma 1. Let G be a network supporting a strict LIRS.
Let x, u_1, ..., u_k, x+1 (resp. x, v_1, ..., v_{k'}, x−1) be the shortest path from the node labeled x to the
node labeled x+1 (resp. to the node labeled x−1) set by the strict
LIRS. Then u_1 > u_2 > ... > u_k > x+1
(resp. v_1 < v_2 < ... < v_{k'} < x−1).
Proof. Let I i be the interval of edge at
We have x by denition of the
path
is the unique shortest from u i to u i+1 . On the
other hand,
by denition of the path
because the
LIRS is strict. Therefore, since the intervals are all linear,
Similarly, one can show
that
Lemma 1 states that, in strict LIRS networks, once a target
node x has received the message in the up (resp. down)
protocol, no node of label smaller (resp. greater) or equal
to x will be visited anymore.
Corollary 1. In a network supporting a strict LIRS,
ig.
The next lemma analyzes the relationship between consecutive
sequences of intermediate nodes. Its aim is to answer
the following: given an intermediate node w_i ∈ W, where
can we find another intermediate node w_j ∈ W, j > i, whose
label is smaller than the label of w i ? (Lemma 1 answers
this question if w i is not the last node of an intermediate
sequence Yr .)
For any set of nodes S ⊆ V, and any node u ∈ V, we
denote by d(u, S) the distance between u and S, that is,
d(u, S) = min_{v ∈ S} d(u, v). The lemma below says that if w_i
is the last node of the sequence Y_{r−1},
and if ℓ_r > k_r (i.e., |Y_r| > |X_r|), then the (k_r + 1)-th node
of Y_r is smaller than w_i. The search for such a w_j when ℓ_r ≤ k_r is
more complex. It depends mainly on the relative lengths of
the Y_s's and the X_s's for s ≥ r.
Lemma 2. Let G be a network supporting a strict LIRS,
and let u be a node of G. Let r
Assume that there exists s r such that
ks
every intermediate node w
Proof. First note that
implies that
's . Note also that
implies that
ks 2. Let us show by induction on i, that
for every
Note that Let I1 be the interval on the edge
at xr , and, for i > 1, let I i be the interval on the
edge . For every i 1, we have xr +1 2 I i ,
and
I i . From Lemma 1, we also have u1 > >
and u u ' r . Thus, since u i 2 I i , we get that u 2 I i for
every Therefore, a shortest path from xr to u
goes through . As a consequence 'r
Assume now that d(u; y
some i, r i < s 1, and let us show that d(u; y i+1)
(- 1)+
For the same reasons as for the case
we get that a shortest path from the last node x i+1
of X i+1 to u goes through all nodes of Y i+1 . We get that
and thus d(u; y i+1) (- 1)+
the proof of Equation 2.
A consequence of Equation 2 is that
ks
So now, let I1 be the interval on edge (xs ; v1 ) at xs , and,
be the interval on edge (v
and thus, d(u; xs) > 1, a contradiction. Thus u v i for
some i 1, and therefore u > v by Lemma 1. This
completes the proof of Lemma 2.
Now, we know enough to start the analysis of the up/down
protocol.
3. STRICT LINEAR INTERVAL ROUTING
SCHEMES
In this section, we show that the message-complexity of the
up/down protocol is at most 3n on a network of order n
supporting a strict LIRS.
3.1 Partition: a sequence-decomposition algorithm
Assume first that the source is the node 1. We make use
of the sequence W as displayed in Equation 1. Since the
total number of target nodes is n, we have Σ_{i=1}^{ρ} |X_i| = n.
Therefore, the aim of the rest of the proof is to bound
the total number Σ_{i=1}^{ρ} |Y_i| of intermediate nodes. For that
purpose, we will partition the intermediate nodes of W into
three types of (pairwise disjoint) subsequences. The result
of this partition is called the sequence-decomposition. More
precisely, the sequence-decomposition will be composed of
an active path, of dead-end paths, and of jumped paths. The
active path is built starting a walk from w_0 up to w_{λ+1}.
Along the construction of the sequence-decomposition, some
parts of the active path become dead-end paths. At the end
of the decomposition, the two extremities of the active path
are w_0 and w_{λ+1}, and the total number of nodes kept in
the active path will be at most O(n). Also, at the end of
the construction, the sum of the lengths of the dead-end
paths and the sum of the lengths of the jumped paths will
both be bounded by O(n). Therefore, the total number of
intermediate nodes will be O(n), and |W| = O(n). A careful
analysis of the constants will actually show that |W| ≤ 3n.
The sequence-decomposition is performed by visiting all intermediate
nodes of the sequence W from w_0 to w_{λ+1}, constructing
in this way the active path. One may "jump" over
some intermediate nodes if the length of the jump can be
bounded by the number of jumped targets. The result of
a jump is a jumped path. One may also backtrack along
the active path for a bounded number of nodes. The result
of a backtrack is a dead-end path. The decomposition
requires two parameters: the mark m, and the direction d.
The role of the mark is to keep in mind the progression along
the sequence W . In general, the mark indicates the current
position, or, more precisely, the index i of the current set
Y i . The mark is an important parameter because of the
backtracks. When a backtrack occurs, the mark is set to
remember the current maximum position ever reached. The
direction indicates whether a backtrack recently occurred.
In that case
Initially, the active path is reduced to w0 , and there is neither
a dead-end path nor a jumped path. The mark m
is set to 0, and the direction is set to +1. The construction
of the sequence-decomposition is precisely described in
Algorithm 1. The explanations of the several steps of the
construction are given below.
In Case 1, the construction is currently visiting some Yr .
While the last node of this sequence of intermediate nodes
is not reached, the active path is updated by adding all the
forthcoming nodes of the current sequence. Informally, from
Lemma 1, the size of the active path will not increase too
much since the labels of the nodes are in a strictly decreasing
order. Case 2 happens in particular when the last node of
the current sequence Yr is reached. The denition of s is
motivated by Lemma 2. If s does not exist, the construction
stops. If s does exist, then we make a jump in the sequence
W as explained thereafter. (Note that, like in Lemma 2,
In Case 2.1.1, we simply jump at the next
intermediate node w whose label is smaller than the label of
the current node. Case 2.1.2 can be seen as an extremal case
Algorithm 1 One step of the construction of the sequence-
decomposition for a strict LIRS
The current active path is
r be such that p t 2 Yr , and let
Case 1: +1. Then the active path is
updated to There is neither a new
dead-end path nor a jumped path. The mark and the direction
are not modied.
Case 2:
be the smallest index such that
does not exist, then the active path is
updated to there is no new dead-end
path, all intermediate nodes between the last node of Ym
and w +1 form a new jumped path, and the construction
stops. If s exists, then let ks
Assume that two cases are considered:
Case 2.1:
again two cases may
occur.
Case 2.1.1: There is an intermediate node w in
Then pick the rst node w of that type. The active
path is updated to
intermediate nodes between the last node of Ym
and w form a jumped path. Assume w 2 Y ,
then the mark m is set to .
Case 2.1.2: For every intermediate node w in
. Then the
active path is updated to p0 ;
all intermediate nodes between the last node of
Ym and v forms a jumped path. The mark m is
set to s.
In both cases, there is no new dead-end path, and the
direction d is set to +1.
Case 2.2:
. Then the direction d is set
to 1. Let t 0 1g be the largest index
such that p t
forms a dead-end path. Assume
> m+ 1, all intermediate nodes of [ 1
a jumped path. The mark m is updated to 1.
of Case 2.1.1. We know from Lemma 2 that jumping at v
is fine since the label of that node is smaller than the label
of the current node p t . The mark is updated to contain the
index of the sequence Y of intermediate nodes reached after
the jump. Case 2.2 is a particular case because a backtrack
occurs. This backtrack is motivated by the fact that one
cannot nd an intermediate node with a label smaller that
the label of p t . Informally, we backtrack along the active
path reach a node p t 0 , t 0 < t, for which
Lemma 2 can be applied. Note that t 0 is well dened since
ng.
Lemma 3. The construction of the sequence-decomposition
given in Algorithm 1 produces a set of paths in which every
intermediate node appears exactly once.
The proof is based on the following claim.
Claim 1. If
Proof. This claim initially holds. Assume the claim
holds before some step i, and consider the several cases of
Algorithm 1. In Case 1, p t+1 will still be in Ym , and d
will still be +1. In Case 2.1, the direction is set to +1 and
by definition of the setting of the mark m in both
Cases 2.1.1, and 2.1.2. Case 2.2 sets d to 1. So the claim
holds after step i too.
The proof of the lemma is then as follows:
Proof. After every step i, let us dene m i as the resulting
mark, and
last node of Ym i otherwise.
We claim that, at every step i, all intermediate nodes before
appears exactly once in the sequence-decomposition.
This claim initially holds. Assume it holds before step i.
In Case 1, from Claim 1, q . The active
path is upgraded by adding the next node of the sequence,
thus the claim holds after step i. In
Case 2.1, m i is set such that p t+1 2 Ym i , and thus q
Every intermediate node between the last node of Ym
that is q are put in a jumped path, so the
claim holds after step i. In Case 2.2, some part of the active
path becomes a dead-end path. By the setting of m i , all the
intermediate nodes not yet assigned to any type of paths,
that is all intermediate nodes in [ m i
are put in a
jumped path. Thus every intermediate node between q
and q i are put in a jumped path. Therefore, after step i,
all intermediate nodes before q i appears exactly once in the
sequence-decomposition.
We complete the proof of the lemma by noticing that, after
every step, either the active path is increased, or a part of
the active path is put in a dead-end path. By the setting
of the mark, an intermediate-node put in a dead-end path
or in a jumped path will not be considered anymore in the
further steps. Therefore the construction of the sequence-
decomposition of Algorithm 1 ends after a finite number of
steps.
3.2 Message-complexity of the up/down pro-
tocol
From the sequence-decomposition algorithm, let us compute
the message-complexity of the up/down protocol.
Lemma 4. The number of intermediate nodes in the active
path is at most n, excluding w0 and w+1 . More pre-
cisely, the labels of the nodes of the active path, including
w0 and w+1 , form a decreasing sequence in n
Proof. To prove that lemma, let us go again through the
several cases of a step of the decomposition. Let p t be the
last node of the current active path. In Case 1, Lemma 1
insures that p t+1 < p t . In Case 2, if s does not exist, then
the next node of the active path is which is smaller
than every other nodes of the current active path. So, in the
remaining of this proof, we assume that s does exist. Since
Case 2.2 does not add new node to the active path, we focus
on case 2.1. In Case 2.1.1, the new node added to the active
path is, by definition, smaller than p_t. In Case 2.1.2, v < p_t
by application of Lemma 2, which completes the proof of
that lemma.
Lemma 5. The number of nodes in the dead-end paths is
at most n.
Proof. A dead-end path is composed of a sequence of
nodes that were formerly in the active path, and through
which a backtrack occurred. Every backtrack is driven by
a set of target nodes [
Similarly, let
From Lemma 4, the number of nodes of the dead-end path
corresponding to
because the active path, traversed in the
reverse direction, produced a sequence of nodes of increasing
labels. Now, m is updated to 1 after the backtrack, and
thus not be considered anymore for the
counting of the number of nodes in the dead-end paths. X
may be considered again in another dead-end path since m 0
can be equal to m. However, only the k i last nodes of
X will be involved. As a consequence, the total number of
nodes in dead-end paths is bounded by
2. Assume that two jumps J 0 and J successively
occurred at x (that is J 0 occurred, and then later a backtrack
led back to x, and J occurred). Let y
and y 2 Y be the respective extremities of J 0 and J, and
assume y be the setting
of the mark and the distance when J
Proof. By denition
by the setting of m when backtracking through J 0 as
On the other hand, by the same
arguments as for Equation 2, d(y m 0
More generally, d(y
which completes the proof of the claim.
Lemma 6. The number of nodes in the jumped paths is at
most n.
Proof. As dead-end paths, the jumped paths are characterized
by disjoint sets of target nodes of the form [ s
Let us rst consider the jumped paths created by application
of Case 2.1 of Algorithm 1. Assume that a jump J
occurred between x 2 Yr , and y 2 Y , r < s. If
< s, then the number of nodes of the jumped path is at
most
then assume that
Thus, the number of nodes of the
corresponding jumped path is at most
(- 1)+
j. So, in any case, the number of nodes
of the jumped path is at most (-
setting corresponds to another jump J 0
occurred at x, say between x and y
be respectively the value of the mark
and of the distance when J 0 occurred. From Claim 2, we
have
On the other hand, the size of the jump at y 0 is
It yields that the number of node for the two
jumps is at most (- 0 1)+
can repeat for - 0 what we did for -, until we have considered
all jumps that successively occurred at x. It yields that the
total number of nodes of the set of jumped paths occurring
at the same node x is at most
contains
the other extremity of the last jump occurred at x.
The analysis is the same for the jumped paths created in
Case 2.2 by proceeding as if the jump occurred between p t 0
and the last node of Y 1 (recall that p t 0
We conclude the proof by noticing that due to the setting
of the mark m, no two jumps occur above the same set of
target nodes, and therefore the total number of nodes is at
most
Combining Lemmas 4, 5, and 6 with Lemma 3 allows us to
conclude that the total number of intermediate nodes is at
most 3n. We can actually improve this upper bound to 2n.
Let x be the extremity of the active path when s cannot be
defined, that is when
nodes of the active
path belong to [ %
by application of Lemma 4. By
application of the same lemma, the total number of nodes
in the active path is at most
j. Since the sets
were not used to bound the total number of
dead-end paths, we can conclude that the sum of the number
of nodes in the active path plus the number of nodes in the
dead-end paths is at most n. From all what precede, the
total number of nodes in the sequence-decomposition, and
therefore the total number of intermediate nodes, is at most
2n. Therefore, the total number of nodes in the sequence
W is at most 3n, and hence the message-complexity of the
up/down protocol is at most 3n.
If the source node is not the node labeled 1, then let σ > 1
be the label of the source. From Lemma 1, the message-complexity
of the copy going upward is at most 3(n − σ),
whereas the message-complexity of the copy going downward
is at most 3σ.
To summarize, we get:
Property 1. The message-complexity of the up/down
broadcast protocol is at most 3n in a network of order n
supporting any strict LIRS.
Note that K_{2,n−2}, the complete bipartite graph with one
partition of size 2 and another partition of size n − 2, supports
a strict LIRS for which the up/down protocol uses
messages.
4. LINEAR INTERVAL ROUTING SCHEMES
The next lemma is a variant of Lemma 1 adapted to (non
strict) LIRS networks.
Lemma 7. Let G be a network supporting an LIRS. Let
1) be the shortest path from the node labeled x to the node
labeled x+1 (resp. to node labeled x 1) in an LIRS. Then,
for every i, k > i > 1, we have
for every
Proof. Let I i be the interval of edge at
1. By denition, we have x+1 2 I i , and u
For
I i (otherwise there would exist
a shorter path from x to x
More generally, if i 1, then
I i for every j i 1.
Therefore, since all the intervals are linear, u
every 1. The result for the v i 's is obtained in
a similar way.
Corollary 2. In a network supporting an LIRS,
ig.
The dierence between Lemma 1 and Lemma 7 motivates
the following adaptation of Lemma 2 for LIRS networks that
are non necessarily strict.
Lemma 8. Let G be a network supporting an LIRS, and
let u be a node of G. Let r and assume that
Assume that there exists s r such that
r+1)+- for every s 0 , r s 0 < s. Let
2(s r)+-, and let . Assume that u > x for
every
for every intermediate node
Proof. Note rst that,
implies > ks 1, and
implies 's 2. We proceed is a way similar to the proof
of Lemma 2. We show that, for every
r. The shortest path from zr to u set by the IRS
goes through all nodes of Yr . Thus 'r 1
Therefore,
Equation 3 holds for Assume Equation 3 holds for i,
and let us show that it holds for 1. Again, the shortest
path from z i+1 to u set by the IRS goes through all nodes
of Y i+1 , and thus '
and hence d(y
and Equation 3 holds for
A consequence of Equation 3 is that
ks
Thus there is a node in v+1g such that v i u.
Indeed, otherwise, the path set by the IRS from zs to u
would go through thus would not be a
shortest path. Let i be the smallest index in
such that v i u. If i < +1, then v+2 < u from Lemma 7.
and the
result holds.
In order to analyze the up/down protocol in networks supporting
LIRS, we partition the sequence W in a slightly different
manner than in the previous section. The formal
decomposition is given in Algorithm 2. The decomposition
for LIRS actually looks very similar to the decomposition
for strict LIRS. A fourth type of path is introduced:
the auxiliary path. This path is motivated by the fact that the
active path will include only one node out of every two, according to
Lemma 7. Every node in between two nodes of the active
path is dropped in the auxiliary path. Let us just mention
some dierences appearing at every step of the sequence-
decomposition. Case 1 is roughly the same as Case 1 in
Algorithm 1, that is the decomposition uses the current
intermediate sequence to construct the active path. The
only modification is that one node out of every two is dropped in
the auxiliary path, and that one may stop at u ' r 1 or u ' r .
Therefore, Case 2 considers also the case
if true, implies that u ' r is put in the auxiliary path. Case 2
also differs from Case 2 in Algorithm 1 by the definition of
both s and . The new settings are adapted from the statement
of Lemma 8. Otherwise, the general structure of the
decomposition for LIRS is the same as Algorithm 1.
We will show the following results. Firstly, the labels of
the nodes of the active path form a non increasing sequence
such that the number
of times that p is at most %. Thus the number
of intermediate nodes in the active path is at most n
This result can be refined by showing that the number of
nodes in dead-end paths or in the active path is at most
direct consequence is that the number of nodes
in the auxiliary path is at most n + % since the number
of nodes in the auxiliary path does not exceed the number
of nodes in the active path or in the dead-end path. The
contribution of the jumped paths is a bit larger since the
number of nodes in the jumped paths is at most n
Therefore,
Lemma 9. The number of intermediate nodes in the active
path is at most n excluding w0 and w+1 . More
precisely, the labels of the nodes of the active path form a
non increasing sequence
such that the number of times that p is at most %.
Proof. From lemma 7, Case 1 insures that p
In Case 2, if s does not exist, then p
assume that s exists. There is no setting of the
active path in Case 2.2. In case 2.1.1, a jump occurs and
by denition. In case 2.1.2, a jump occurs too
and p t+1 p t from Lemma 8. The number of jumps is at
most %.
Lemma 10. The number of nodes in dead-end paths or in
the active path is at most n
Proof. By similar arguments as in the proof of Lemma 5,
and using the same notation, the number of nodes of a dead-end
path corresponding to the sets is at most
plus the number of jumps occurring
in the portion p t 0 of W . It yields a total number
of nodes in dead-end paths of at most
Now, as we observed in the strict LIRS case, if x 2 [ %
is the last node of the active path dierent from w+1 , then
the total number of nodes in the active path is at most
is the number of jumps
of the nal active path. The sets were not
used to enumerate the nodes in the dead-end paths, and the
total number of jumps cannot exceed %. Therefore, the total
number of nodes in the active path or in dead-end paths is
at most n
Lemma 11. The number of nodes in the auxiliary path is
at most n
Proof. This is a direct consequence of Lemma 10 since
the number of nodes in the auxiliary path does not exceed
the number of nodes in the active path or in the dead-end
path. Indeed, at most one node is dropped in the auxiliary
path for every node entering the active path. Some nodes
formerly in the active path become member of a dead-end
path.
Lemma 12. The number of nodes in the jumped paths is
at most n
Proof. Let us rst consider a jump created by application
of Case 2.1. Let J be a jump corresponding to the sets
. Assume that J occurred between x 2 Yr and
then the jump was over at
most
nodes, with
Therefore, in any case, the number of jumped intermediate
nodes is at most (-
then this setting corresponds to another jump J 0 occurred
at x, say between x and y
be the value of the mark, and the value of the
distance, respectively, when this jump occurred. We have
by the
setting of m when backtracking through J 0 as y
Therefore, - d(x; y 0 ), and, by the same arguments as in
the proof of Claim 2,
The size of the jump J 0 is
thus the size of the two jumps is at most
then one can apply on - 0 the same arguments. We
get that the number of nodes of the set of jumped paths occurring
at the same node x 2 Yr is at most 1+
contains the extremity of the last jump
occurred at x.
The analysis is the same for the jumped paths created in
Case 2.2 by proceeding as if the jump occurred between p t 0
and the last node of Y 1 (recall that p t 0
The worst case is reached when every jump is over a single
whose cost in term of nodes is 1
for a total number of nodes at most
Therefore, we get:
Property 2. The message-complexity of the up/down
broadcast protocol is at most 9n in a network of order n
supporting any LIRS.
5. INTERVAL ROUTING SCHEMES
As opposed to linear IRS, the labels of the nodes of an IRS
all play the same role. Therefore, we slightly modify the
up/down protocol to adapt it to IRS networks. If the source
is labeled σ, 1 < σ < n, then there is only one copy of the
message, going upward from σ to n, then from n to 1, and
finally from 1 to σ − 1, rather than two copies, one going
downward from σ to 1, and the other going upward from σ
to n. This protocol is denoted by up/1/up. So, as far as the
up/1/up protocol is concerned, one can assume, w.l.g., that
the source is labeled 1.
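The order in which the targets are visited by this single copy can be sketched as follows, again with a hypothetical next_hop function standing for the routes fixed by the IRS.

def up_one_up(sigma, n, next_hop):
    # One copy: targets sigma+1, ..., n, then 1, then 2, ..., sigma-1.
    targets = list(range(sigma + 1, n + 1)) + list(range(1, sigma))
    messages, current = 0, sigma
    for target in targets:
        while current != target:
            current = next_hop(current, target)
            messages += 1
    return messages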
Again, in order to analyze the up/1/up protocol, we make
use of the sequence W defined in Equation 1. For every
ng 7! ng
by
We have 1. By using the same
techniques as Lemmas 1 and 7, the reader can check that:
Lemma 13. Let G be a network supporting an IRS. Let
1) be the shortest path from the node labeled x to the node
labeled to node labeled x 1) in this IRS.
If the IRS is strict, then
n).
Otherwise, for every i, k > i > 1, we have
and
for every
By using the same techniques as Lemmas 2 and 8, the reader
can also check that:
Lemma 14. Let G be a network supporting an IRS, and
let u be any node of G. Let r and assume that
If the IRS is strict, then assume that there exists s r
such that
1 for every s 0 , r s 0 < s. Let ks
-, and let
and every x 2 X i , Lx i (u) > Lx i (x), and if, for every
every intermediate node w
Otherwise, assume that there exists s r such that
If, for every
every
intermediate node w
then minfLxs (v+1); Lxs (v+2)g Lxs (u).
Therefore, the sequence-decomposition of Algorithm 1, and
its adaptation to LIRS networks described in Section 4 can
be applied by introducing the relabeling Lx 's every time
that a comparison is performed between two labels. The
resulting algorithms are given in Algorithms 3 and 4.
The previous lemmas of this section show that there is no
major dierence between the sequence-decomposition for
LIRS networks and the sequence-decomposition for IRS net-
works. In fact, the up/1/up protocol for IRS and strict IRS
networks satisfy the same properties as the up/down protocol
for LIRS and strict LIRS networks, respectively. The
key of the proof is the following result:
Lemma 15. Assume
Then Lx (p i+1) < Lxr (p i ) in the sequence-decomposition
for strict IRS networks, and Lx (p i+1) Lxr (p i ), where the
maximum number of equality is at most %, in the sequence-
decomposition for IRS networks.
Proof. According to the statement of the lemma, we are
considering Case 2.1 of the sequence decomposition. Hence
precisely
from the
possible setting of the mark after several backtracks ending
at us rst consider the strict IRS decomposition.
We have us assume
that Lxr Then, since from Lemma 13,
1), we get that p i is equal to a target
node in [
contradiction with the hypothesis
of Case 2.1. Therefore Lxr
arguments allow to show that Lx (p i+1) Lxr (p i ) in
the sequence-decomposition for IRS networks.
Therefore, if f(i) denotes the index such that
then the active path p0 ; resulting from the
sequence-decomposition for strict IRS networks satisfies
any pair (i; j), 1 i < j t. In IRS networks, there might
be up to % equalities in the sequence. As a consequence:
Theorem 1. The message-complexity of the up/1/up
broadcast protocol is respectively at most 3n in a network
of order n supporting any strict IRS, and at most 9n in a
network of order n supporting any IRS. As a consequence,
networks supporting a shortest path Interval Routing Scheme
allow broadcasting with O(n) message-complexity.
Corollary 3. In a network supporting any strict IRS,
the average distance between two nodes labeled by two consecutive
integers is at most 3+O(1=n). In a network supporting
any IRS, the average distance between two nodes labeled by
two consecutive integers is at most 9 +O(1=n).
6.
--R
Optimal broadcast with partial knowledge.
A tradeo
The impact of knowledge on broadcasting time in radio networks.
Broadcast in linear messages in IRS representing all shortest paths.
The complexity of interval routing on random graphs.
Sense of direction: De
On the impact of Sense of Direction on Message Complexity.
Sense of direction in distributed computing.
Interval routing schemes.
Interval Routing Schemes allow Broadcasting with Linear Message-Complexity
Searching among intervals and compact routing tables.
Designing networks with compact routing tables.
A distributed algorithm for minimal spanning tree.
A survey on interval routing.
Compact routing tables for graphs of bounded genus.
The compactness of interval routing.
Lower bounds for compact routing.
On multi-label linear interval routing schemes
A modular technique for the design of e-cient distributed leader nding algorithms
A trade-o between space and e-ciency for routing tables
Labelling and implicit routing in networks.
Interval routing.
--TR
Interval routing
A tradeoff between space and efficiency for routing tables
A trade-off between space and efficiency for routing tables
A modular technique for the design of efficient distributed leader finding algorithms
A trade-off between information and communication in broadcast protocols
Memory requirement for routing in distributed networks
On the impact of sense of direction on message complexity
Worst case bounds for shortest path interval routing
Optimal Broadcast with Partial Knowledge
The Compactness of Interval Routing
A survey on interval routing
A Distributed Algorithm for Minimum-Weight Spanning Trees
The Complexity of Interval Routing on Random Graphs
Sense of Direction in Distributed Computing
Compact Routing Tables for Graphs of Bounded Genus
Lower Bounds for Compact Routing (Extended Abstract)
On Multi-Label Linear Interval Routing Schemes (Extended Abstract)
The Impact of Knowledge on Broadcasting Time in Radio Networks
Searching among Intervals and Compact Routing Tables
--CTR
Cyril Gavoille, Routing in distributed networks: overview and open problems, ACM SIGACT News, v.32 n.1, March 2001 | broadcasting;distributed computing;compact routing;interval routing |
343530 | Token-Templates and Logic Programs for Intelligent Web Search. | We present a general framework for the information extraction from web pages based on a special wrapper language, called token-templates. By using token-templates in conjunction with logic programs we are able to reason about web page contents, search and collect facts and derive new facts from various web pages. We give a formal definition for the semantics of logic programs extended by token-templates and define a general answer-complete calculus for these extended programs. These methods and techniques are used to build intelligent mediators and web information systems. | Introduction
In the last few years it has become apparent that there is an increasing need for more intelligent
World-Wide-Web information systems. The existing information systems are mainly document
search engines, e.g. Alta Vista, Yahoo, Webcrawler, based on indexing techniques
and therefore only provide the web user a list of document references and not a set of facts
he is really searching for. These systems overwhelm the user with hundreds of web page
candidates. The exhausting and highly inconvenient work to check these candidates and to
extract relevant information manually is left to the user. The problem gets even worse if
the user has to make comparisons between the contents of web pages or if he wants to follow
some web links on one of the candidate web pages that seem to be very promising. Then
he has to manage the candidate pages and has to keep track of the promising links he has
observed.
To build intelligent web information systems we assume the WWW and its web pages
to be a large relational database, whose data and relations can be made available by the
definition and application of special extraction descriptions (token-templates) to its web
pages. A library of such descriptions may then offer various generic ways to retrieve facts
from one or more web pages. One basic problem we are confronted with is to provide means
to access and extract the information offered on arbitrary web pages, this task is well known
as the process of information extraction (IE). The general task of IE is to locate specific
pieces of text in a natural language document, in this context web pages. In the last few years
many techniques have been developed to solve this problem [1, 6, 10, 11, 15, 21, 30], where
wrappers and mediators fulfill the general process to retrieve and integrate information
from heterogeneous data sources into one information system.
We focus our work on a special class of wrappers, which extract information from web
pages and map it into a relational representation. This is of fundamental interest because
it offers a wide variety of possible integrations into various fields, like relational databases,
spreadsheet applications or logic programs. We call this information extraction process
fact-retrieval, due to logic programming the extracted information is represented by ground
atoms. In this article we present a general framework for the fact retrieval based on our
special wrapper language, called token-templates. Our general aim was to develop a description
language for the IE from semi-structured documents, like web pages are. This
language incorporates the concepts of feature structures [25], regular expressions, unifi-
cation, recursion and code calls, to define templates for the extraction of facts from web
pages.
How does this contributes to logic programming? The key idea of using logic programs
for intelligent web browsing is as follows: Normally the user is guided by his own domain
specific knowledge when searching the web, manually extracting information and
comparing the found facts. It is very obvious that these user processes involve inference
mechanisms like reasoning about the contents of web pages, deducing relations between
web pages and using domain specific background knowledge.Therefore he uses deduction,
based on a set of rules, e.g. which pages to visit and how to extract facts. We use logic
programs in conjunction with token-templates to reason about the contents of web pages,
to search and collect relevant facts and to derive new facts from various web pages. The
logic programming paradigm allows us to model a background knowledge to guide the web
search and the application of the extraction templates. Furthermore the extracted facts in
union with additional program clauses correspond to the concept of deductive databases
and therefore provide the possibility to derive new facts from several web pages. In the
context of wrappers and mediators [30], token-templates are used to construct special wrappers
to retrieve facts from web pages. Logic programs offer a powerful basis to construct
mediators, they normalize the retrieved information, reason about it and depending on the
search task to fulfill, deduce facts or initialize new sub searching processes. By merging
token-templates and logic programs we gain a powerful inference mechanism that allows us
to search the web with deductive methods. We emphasize the well defined theoretical background
for this integration, which is given by theory reasoning [2] [26] in logical calculi,
whereas token-templates are interpreted as theories.
This article is organized as follows: in Section 2 we describe the language of token-
templates for the fact-retrieval from web pages. In Section 3 the integration of token-
templates into logic programs and the underlying T T Calculus is explained. Section 4
describes how logic programming techniques can be used to enhance the fact-retrieval
process with deductive techniques. A practical application of our developed methods, a
LogicRobot to search private advertisements, is briefly presented in Section 5. Related
approaches and conclusions are given in Section 6.
2. A Wrapper Language for Web Documents
In this section we describe our information extraction language, the token-templates. We
assume the reader to be familiar with the concepts of feature structures [25] and unification.
2.1. The Fact Retrieval
We split the process of fact-retrieval into several steps, the first of which is the preprocessing
of the web page to be analyzed. We transform a web page as shown in Figure 1
into a list of tokens which we will explain in detail in section 2.2. In our existing system
(Section 5) this is done by the lexical analyzer FLEX [17] (by the definition of a FLEX
grammar to build tokens in extended term notation). We want to emphasize that we are not
bound to a special lexical analyzer generator tool like FLEX, any arbitrary tool can be used
as long as it meets the definition of a token.
Figure 1. An advertisement web page
This method allows us to apply our techniques not only to HTML-documents, but also
to any kind of semi-structured text documents. Because we can construct arbitrary tokens,
our wrapper language is very flexible to be used in different contexts.
After the source code transformation of the web document, the matching and extraction
process takes place. Extraction templates built from tokens and special operators are applied
to the tokenized documents. According to the successful matching of these extraction
templates the relevant information is extracted by means of unification techniques and
mapped into a relational representation. We will now explain the basic element of our web
wrapper language, the token.
2.2. The Token
A token describes a grouping of symbols in a document. For example the text Pentium
90 may be written as the list of tokens:
[token(type=html,tag=b), token(type=word,txt='Pentium'),
token(type=whitespace,val=blank), token(type=int,val=90),
token(type=html end,tag=b)]
We call a feature structure (simple and acyclic feature structure) a token, if and only if it
has a feature named type and no feature value that consists of another feature. A feature
value may consist of any constant or variable. We write variables in capital letters and
constants quoted if they start with a capital letter. Furthermore, we choose a term notation
for feature structures (token), that is different from that proposed by Carpenter in [4].
We do not code the features to a fixed argument position; instead we extend the arguments of the annotated term by the feature=value pairs themselves, which offers us more flexibility in the handling of features. Figure 2 shows the graph notation of a token and our extended term notation of it. In the following we denote a token in extended term notation simply as token.
Figure 2. Token notations: graph notation and extended term notation, e.g. token(href=X,type=html,tag=a)
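To make this representation concrete, the following is a minimal Python sketch of a token as a feature/value mapping with a mandatory type feature. It is purely illustrative (the actual system described later is implemented in ECLiPSe-Prolog), and the helper name make_token is our own.

# Illustrative sketch only: a token as a feature/value mapping with a mandatory 'type'.
def make_token(type_, **features):
    """Build a token; every token must carry a 'type' feature."""
    token = {'type': type_}
    token.update(features)
    return token

# The text "<b>Pentium 90" from the example above could be tokenized as:
tokens = [
    make_token('html', tag='b'),
    make_token('word', txt='Pentium'),
    make_token('whitespace', val='blank'),
    make_token('int', val=90),
    make_token('html_end', tag='b'),
]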
2.3. Token Matching
In the following let us assume, that an arbitrary web page transformed into a token list
is given. The key idea is now to recognize a token or a token sequence in this token
list. Therefore we need techniques to match a token description with a token. For feature
structures a special unification, the feature unification was defined in [23]. For our purposes
we need a modified version of this unification, the token-unification.
Definition 1 (Token Unification). Let T1 and T2 be tokens. Let A1 and A2 be the feature-value sets of the tokens T1 and T2, that is A1 = {f1 = v1, ..., fn = vn} and A2 = {g1 = w1, ..., gm = wm}. Further let F be the set of features T1 and T2 have in common. The terms T'1 and T'2 are defined as the restrictions of T1 and T2 to the features in F.
The tokens T1 and T2 are token-unifiable iff the following two conditions hold:
(1) every feature of T1 is also a feature of T2, and
(2) T'1 is unifiable with T'2 wrt. the usual definition [16, 13].
The most general unifier (mgU) of T1 and T2 is the mgU of T'1 and T'2. If (1) or (2) does not hold, we call T1 and T2 not token-unifiable. We say that T1 is token-unifiable with T2 with mgU s, where s is the mgU of the unification of T'1 and T'2.
The motivation for this directed unification 1 is to interpret the left token to be a pattern to
match the right token. This allows us to set up feature constraints in a easy way, by simply
adding a feature to the left token. On the other hand we can match a whole class of tokens,
if we decrease the feature set of the left token to consist only of the type feature.
Example:
token(type=word) is token-unifiable with token(type=word, txt='Pentium'), with the empty mgU.
token(type=word, int=X) is not token-unifiable with token(type=word, txt='Pentium').
token(type=html, href=Y) is token-unifiable with token(type=html, tag=a, href='http://www.bmw.de') with mgU {Y/'http://www.bmw.de'}.
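A small Python sketch of this directed token unification may help to fix the idea; it is a simplification (repeated variables and full unification are not handled), the Var class and function names are our own, and the real system realizes this by Prolog unification.

class Var:
    """A logic variable; unbound until a binding is recorded in the substitution."""
    def __init__(self, name):
        self.name = name

def token_unify(left, right):
    """Directed unification: 'left' is a pattern, 'right' the token to match.
    Returns a substitution (dict) on success, or None if not token-unifiable."""
    # Condition (1): every feature of the left token must occur in the right token.
    if not set(left) <= set(right):
        return None
    subst = {}
    # Condition (2): the common features must unify value by value.
    for feature, lval in left.items():
        rval = right[feature]
        if isinstance(lval, Var):
            subst[lval.name] = rval          # bind the variable
        elif lval != rval:
            return None                      # constant clash
    return subst

# Examples mirroring the ones above:
print(token_unify({'type': 'word'}, {'type': 'word', 'txt': 'Pentium'}))           # {}
print(token_unify({'type': 'word', 'int': Var('X')},
                  {'type': 'word', 'txt': 'Pentium'}))                             # None
print(token_unify({'type': 'html', 'href': Var('Y')},
                  {'type': 'html', 'tag': 'a', 'href': 'http://www.bmw.de'}))      # {'Y': 'http://www.bmw.de'}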
For ease of notation we introduce an alternative notation t_k for a token of type k (the feature type has value k), that is given by k or k(f1 = v1, ..., fn = vn), where f1, ..., fn and v1, ..., vn are the features and values of t_k and n is the number of features of t_k. We call this notation term-pattern and define a transformation V on term-patterns such that V transforms the term-pattern into the corresponding token. For example, V(html(href=X)) gives us the token token(type=html, href=X). This transformation exchanges the functor of the term-pattern from the type k to token and adds the argument type=k to the arguments. Now we can define the basic match operation on a term-pattern and a token:
Definition 2 (Term-Match). Let t_k be a term-pattern and T a token in extended term notation. The term-pattern t_k matches the token T, written t_k < T, iff V(t_k) is token-unifiable with T; the substitution of the term-match is the mgU of this token unification.
Example: For the demonstration of the term-match operation, consider the three examples mentioned above and the following modification:
word < token(type=word, txt='Pentium')
word(int=X) does not match token(type=word, txt='Pentium')
html(href=Y) < token(type=html, tag=a, href='http://www.bmw.de') with {Y/'http://www.bmw.de'}
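Continuing the earlier Python sketch (and reusing token_unify and Var from it), the transformation V and the term-match operation can be rendered as follows; this is an illustrative rendering only, not the original Prolog implementation.

def V(term_pattern_type, **features):
    """Transform a term-pattern k(f1=v1, ...) into the corresponding token:
    the functor becomes a token mapping and type=k is added to the features."""
    token = {'type': term_pattern_type}
    token.update(features)
    return token

def term_match(term_pattern_type, features, token):
    """t_k < T  iff  V(t_k) is token-unifiable with T; returns the substitution or None."""
    return token_unify(V(term_pattern_type, **features), token)

# word < token(type=word, txt='Pentium')
print(term_match('word', {}, {'type': 'word', 'txt': 'Pentium'}))
# html(href=Y) < token(type=html, tag=a, href='http://www.bmw.de')
print(term_match('html', {'href': Var('Y')},
                 {'type': 'html', 'tag': 'a', 'href': 'http://www.bmw.de'}))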
2.4. Token Pattern
If we interpret a token to be a special representation of text pieces, the definition of term-matching
allows us to recognize certain pieces of text and extract them by the process of
unification. This means the found substitution contains our extracted information.
But yet we are not able to match sequences of tokens in a tokenized web page. Therefore
we define the syntax of token-pattern, which will build our language to define templates for
the information extraction from web documents.
The language of token-patterns is built on a concept similar to regular expressions. The difference is that the various iteration operators are defined on tokens. Besides these basic operators we distinguish greedy and moderate operators; these two operator classes determine the enumeration order of matches. Greedy operators are ?, + and *; moderate operators are !, # and the moderate counterpart of +. For example, a token-pattern like *word matches zero or arbitrarily many tokens of type word, but the first match will try to match as many tokens of type word as possible (greedy), whereas the pattern #word will in its first attempt try to match as few tokens as possible (moderate). This makes sense if a pattern is just a part of a larger conjunction of patterns. Another advantage is given by the use of unification, which, together with the concept of recursive token-templates described later, allows us to recognize context-sensitive languages.
In Figure 3 we give an informal definition 2 for the semantics of token-patterns. Assume that a tokenized document D is given. A match of a token-pattern p on D returns a set of triples (MS, RS, s), where MS is the matched token sequence, RS is the rest sequence of D and s is the mgU of the token unifications applied during the matching process.
We emphasize that we compute all matches and do not stop after we have found one
successful match, though this can be achieved by the use of the once operator.
Example: Let us have a closer look at the source code of the advertisement web page (D) shown in Figure 4 and the corresponding token-pattern (p) given in Figure 5. This token-pattern extracts the item name of the offered object (Item) and the description (Description) of the item. For this small example the set of matches contains two elements; we leave out the matched sequence MS and the rest sequence RS, because we are only interested in the substitutions of each match.
2.5. Token-Templates
A token-template defines a relation between a tokenized document, extraction variables and
a token-pattern. Extraction variables are those variables used in a token-pattern, which are of
interest due to their instantiation wrt. the substitutions obtained from a successful match.
pattern : semantics
t : If the term-pattern t matches D(1), then the matched sequence is the list containing exactly one element D(1), and RS is D without the first element. D(n) denotes the n-th element of the sequence D.
?p1 : Matches the pattern p1 once or never; first the match of p1, then the empty match sequence is enumerated.
!p1 : Matches the pattern p1 once or never; first the empty match sequence, then the match of p1 is enumerated.
+p1 : Matches the pattern p1 arbitrarily often but at least once; uses a decreasing enumeration order of the matches according to their length, starting with the longest possible match.
(moderate +)p1 : Matches the pattern p1 arbitrarily often but at least once; uses an increasing enumeration order of the matches according to their length, starting with the shortest possible match.
*p1 : Matches the pattern p1 arbitrarily often; uses a decreasing enumeration order of the matches according to their length, starting with the longest possible match.
#p1 : Matches the pattern p1 arbitrarily often; uses an increasing enumeration order of the matches according to their length, starting with the shortest possible match.
not(t1, ..., tn) : The not operator matches exactly one token t in D if no ti exists such that ti < t holds. Tokens matching t1, ..., tn are thus excluded from the match.
times(n, t) : Matches exactly n tokens t.
any : Matches an arbitrary token.
once(p1) : The once operator 'cuts' the set of matches of p1 down to the first match of p1; useful if we are interested only in the first match and not in all alternative matches defined by p1.
X = p1 : Unification of X and the matched sequence of p1; only successful if p1 is successful and if MS of p1 is unifiable with X.
p1 and p2 : Succeeds only if p1 and p2 both match successfully. The matched sequence of p1 and p2 is the concatenation of MS of p1 and MS of p2.
p1 or p2 : Succeeds if one of the patterns p1 or p2 is successfully matched. The matched sequence is either the matched sequence of p1 or of p2. The and operator has higher priority than the or operator (e.g. a and b or c is read as (a and b) or c).
Extended Token Patterns
[t1, ..., tn] : Calls to the token-templates t1 to tn (see Section 2.5.1).
{c1, ..., cn} : Code calls to the functions c1 to cn (see Section 2.5.1).
Figure 3. Language of Token-Patterns
<IMG SRC=img/bmp_priv.gif> Pentium 90 48 MB RAM,
Soundblaster AWE 64, DM 650,-. Tel.: 06743/ 1582
Figure 4. HTML source code of an online advertisement
#any and html(tag=img) and html(tag=b) and Item = ... and Description = ...
Figure 5. Token-pattern for advertisement information extraction
Extraction variables hold the extracted information obtained by the matching process of the
token-pattern on the tokenized document.
Definition 3 (Token-Template). Let p be a token-pattern, D an arbitrary tokenized document and ~v = (v1, ..., vn) a tuple of variables occurring in p. For every successful match of p on D with substitution s we obtain an extraction tuple by applying the substitution s to ~v. A token-template r is the relation between D, these extraction tuples and p.
Template definitions are written as: template r(D, ~v) := p, where r is called the template name, ~v is the extraction tuple and v1, ..., vn are called extraction variables.
Example: Consider the case where we want to extract all links from a web page. Therefore
we define the following token-template:
template link(D, Link, Desc) := #any and html(tag=a, href=Link) and Desc = ...
The first sub pattern #any will ignore all tokens as long as the next token is of type html and
meets the required features tag = a and href . After the following subexpression matched
and a substitution is found, Link and Desc hold the extracted information. Now further
alternative matches are checked, for example the #any expression reads up more tokens
until the rest expression of the template matches again.
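To illustrate how such a template traverses a token list, here is a deliberately simplified Python sketch that only supports the combination "#any followed by a single term-pattern", enumerating the shortest skip first; it reuses term_match and Var from the earlier sketches, and the full pattern language of Figure 3 is of course much richer.

def match_hash_any_then(pattern_type, pattern_features, tokens):
    """Minimal sketch of '#any and t': skip as few tokens as possible, then
    term-match the next token; enumerate every alternative match position."""
    for i, tok in enumerate(tokens):
        subst = term_match(pattern_type, pattern_features, tok)
        if subst is not None:
            matched = tokens[:i + 1]      # MS: skipped tokens plus the matched one
            rest = tokens[i + 1:]         # RS: remaining tokens
            yield matched, rest, subst

doc = [
    {'type': 'word', 'txt': 'see'},
    {'type': 'html', 'tag': 'a', 'href': 'http://www.uni-koblenz.de'},
    {'type': 'word', 'txt': 'Uni'},
]
for ms, rs, s in match_hash_any_then('html', {'href': Var('Link')}, doc):
    print(s)   # {'Link': 'http://www.uni-koblenz.de'}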
2.5.1. Extending Token-Templates
To be able to match more sophisticated syntactic structures, we extend the token-templates
with the three major concepts of Template Alternatives, Code Calls and Recursion
Template Alternatives: To gain more readability for template definitions we enhanced
the use of the or operator. Instead of using the or operator in a token-pattern like in the
template t(D, ~v) := p1 or p2, we can alternatively define two templates:
template t(D, ~v) := p1
template t(D, ~v) := p2
where both template definitions t1 and t2 have the same name. In fact this does not influence the calculation of the extraction tuples, because we can easily construct this set by the union of the tuples of t1 and t2.
Code Calls: A very powerful extension of the token-pattern language is the integration
of function/procedure calls within the matching process. We named this extension, code
calls. A code call may be any arbitrary boolean calculation procedure that can be invoked
with instantiated token-pattern variables or unbound variables that will be instantiated by
the calculation procedure. The following example demonstrates the use of a code call to
a database interface function db, that will check if the extracted Name can be found in the database. On success it will return true and instantiate Birth to the birthday of the person, otherwise the match fails. In this example we leave out the token-patterns for the recognition of the other extraction variables and simply name them p1 to pn:
template person(D, Name, Birth) := p1 and ... and pn and {db(Name, Birth)}
Especially the use of logic programs as code to be called during the matching process can guide the information extraction with additional deduced knowledge. For example, this can be achieved by a given background theory and facts extracted by preceding sub-patterns.
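A code call can be pictured as a Boolean hook invoked with the bindings collected so far. The following Python sketch shows the idea with a hypothetical db lookup; the dictionary and its contents merely stand in for a real database interface and are not from the paper.

# Hypothetical stand-in for the database interface function db/2.
BIRTHDAYS = {'Alan Turing': '1912-06-23'}

def code_call_db(bindings):
    """Code call: succeeds iff the extracted Name is known; on success it
    instantiates Birth, otherwise the enclosing match fails."""
    name = bindings.get('Name')
    if name in BIRTHDAYS:
        bindings['Birth'] = BIRTHDAYS[name]
        return True
    return False

bindings = {'Name': 'Alan Turing'}
if code_call_db(bindings):
    print(bindings['Birth'])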
Template Calls & Recursive Templates: To recognize hierarchical syntactic structures in
text documents (e.g. tables embedded in tables) it is obvious to use recursive techniques.
Quite often the same sub-pattern has to be used in a template definition, therefore we
extended the token-pattern by template calls. A template call may be interpreted as an
inclusion of the token-pattern associated with the template to be called. For example the
first example template below matches an HTML table row consisting of 3 columns, where the first two
are text columns and the third contains price information. The terms set in squared brackets
function as template calls. Repeated application of this pattern, caused by the sub pattern
#any, gives us all table entries. The second template demonstrates a recursive template
call that matches correct groupings of parentheses. For a more detailed description of the
token-template language the reader is referred to [27].
template table row(D, Medium, Label, Price) := ( #any
and [text col(Medium), text col(Label), price col(Price)]
and html end(tag=tr) )
template correct paren(D, _) := ( paren open
and ( word or ( word and [correct paren(_)] and word ) )
and paren close )
3. Logic Programs and Token-Templates
In this section we will explain how token-templates can be merged with logic programs
(LP's). The basic idea of the integration of token-templates and LP's is to extend a logic
program with a set of token-templates (extended LP's), that are interpreted as special program
clauses. The resulting logic program can then answer queries about the contents
of one or more web documents. Intuitively token-templates provide a set of facts to be
used in logic programs. Extended LP's offer the possibility to derive new facts based on
the extracted facts from the WWW. From the implementational point of view these token-
template predicates may be logical programs or modules that implement the downloading
of web pages and the token matching. From the theoretical point of view we consider these
template sets to be axiomatizations of a theory, where the calculation of the theory (the template ground atoms) is performed by a background reasoner.
In the following we refer to normal logic programs when we talk about logic programs.
In Section 3.1 we describe how token-templates are interpreted in the context of first order
predicate logic. The extension of a calculus with template theories, which will lead to the
TT Calculus, is defined in Section 3.2. Sections 4.1 and 4.2 will give some small examples
for the use of token-templates in logic programs.
We assume the reader to be familiar with the fields of logic programming [16] and theory
reasoning [2] [26].
3.1. Template Theories
In the context of first order predicate logic (PL1) we interpret a set of token-templates to be
an axiomatization of a theory. A token-template theory T T is the set of all template ground
atoms that we obtain by applying all templates in T. For example, consider the template set T = {t(D, v, p)}. Assume p to be an arbitrary token-pattern and v an extraction tuple. A template theory for T is given by TT = { t(D, vs, p) | s is the substitution of a successful match of p on D }.
This interpretation of token-templates associates a set of ground unit clauses with a given
set of token-templates. The formal definition is as follows:
Definition 4 (Template Theory, TT Interpretation, TT Model). Let T be a set of token-templates: T = {t1(D1, ~v1, p1), ..., tk(Dk, ~vk, pk)}. A token-template theory TT for T is defined as the union of the template theories of the ti.
Let P be a normal logic program with signature S, such that S is also a signature for TT.
An interpretation I is a TT Interpretation iff I is a model of TT.
A Herbrand TT Interpretation is a TT Interpretation that is also a Herbrand interpretation.
A TT Interpretation I is a TT Model for P iff I is a model of P.
Let X be a clause wrt. S. X is a logical TT Consequence from P, written P |=TT X, iff every TT Model of P is a model of X.
Example: Consider a token-template advertise with the token-pattern given in Figure 5
and the extraction variables Item and Description for the example web source code shown
in Figure 1. The corresponding template theory for the template advertise then contains one ground atom per advertisement on that page, for instance advertise(D, 'Pentium 90', '48 MB RAM, Soundblaster AWE 64, DM 650,-. Tel.: 06743/ 1582', p).
3.2. The TT Calculus
So far we have shown when a formula is a logical consequence from a logic program and
a template theory. This does not state how to calculate or check if a formula is a logical
consequence from an extended logic program. Therefore we have to define a calculus for
extended logic programs. But instead of defining a particular calculus we show that any
sound and answer-complete calculus for normal logic programs can serve as calculus for
extended logic programs.
Let K be a sound and answer-complete calculus for normal logic programs and let |- be the derivation relation defined by K. Let P be a normal logic program, T a set of token-templates and TT the template theory for T. A query Q with calculated substitution s is TT derivable from P, written P |-TT Qs, iff Qs is derivable from the union of P and TT in the calculus K. K together with the TT Derivation is called a TT Calculus.
Theorem 1 (Soundness) Let K be a TT Calculus and |-TT the derivation relation defined by K. Let T be a set of token-templates, TT the template-theory for T and Q a query for a normal program P. Further let s be a substitution calculated by |-TT. Then s is a correct answer, i.e. Qs is a logical TT Consequence of P.
Theorem 2 (Completeness) Let K be a TT Calculus and |-TT the derivation relation defined by K. Let T be a set of token-templates, TT the template-theory for T and Q a query for a normal program P. Let t be a correct answer for Q. Then there are a calculated answer s and a substitution g such that Qt = Qsg.
Soundness and Completeness: Let K be a TT Calculus and |-TT the derivation relation defined by K. Let T be a set of templates, TT the template theory of T and Q a query on the normal program P. Let s be a substitution calculated for Q by |-TT. By definition Qs is derived from the union of P and TT with the sound and answer-complete calculus K; hence soundness and completeness of |-TT follow from the soundness and answer-completeness of |- applied to this union, together with the definition of logical TT Consequence.
Example: In Figure 6 an example T T Derivation based on the SLD-Calculus [12] is
shown. The calculation of the template theory is done by a theory box [2]; this may be any arbitrary calculation procedure that implements the techniques needed for token-templates. Furthermore this theory box has to decide if a template predicate, like institute('http://www.uni-
koblenz.de',Z,P) can be satisfied by the calculated theory. Let us have a closer look at the
logic program P given in Figure 6:
b(uni, 'http://www.uni-koblenz.de'): a given web page containing some information about a university.
a(X,Z) :- b(X,Y), institute(Y,Z,P): an institution X has a department Z if there exists a web page Y describing X and we are able to extract department names Z from Y.
With this given logic program and the template definition T we can find a SLD T T Derivation,
assuming our template theory is not empty. What this example shows is that modeling knowledge
about web pages by logic programs and combining this with token-templates allows
us to query web pages.
Figure 6. An example SLD TT-Derivation for a fact-retrieval goal: the logic program P, the template definition T (template institute(Document,Z,P) := tokenpattern), the template theory computed by the theory box, the applied template and the resulting answer.
4. Deductive Techniques for Intelligent Web Search
Logic programming and deduction in general offer a wide variety of ways to guide the web search
and fact-retrieval process with intelligent methods and inference processes. This section
describes some of these techniques.
4.1. Deductive Web Databases
Assume we know two web pages of shoe suppliers, whose product descriptions we want
to use as facts in a deductive database. Additionally we are interested in some information
about the producer of the product, his address and telephone number that can be retrieved
from an additional web page. Therefore we define two token-templates, price list and
address. To simplify notation we leave out the exact token-pattern definitions and only state the template signatures.
The following small deductive database allows us to ask for articles and to derive new
facts that provide us with information about the product and the producer. We achieve this
by the two rules article and product, which extract the articles offered at the web pages and
will derive new facts about the article and the producer.
template price list(Document, Article, Price, ProducerUrl, Pattern) := ...
template address(Document, Producer, Address, Telefon) := ...
web page('ABC Schuhe', '...')
web page('Schuhland', '...')
article(Supplier, Article, Price, ProducerUrl) :-
web page(Supplier, Document),
price list(Document, Article, Price, ProducerUrl, Pattern)
product(Supplier, Article, Price, Producer, Address, Telefon) :-
article(Supplier, Article, Price, ProducerUrl),
address(ProducerUrl, Producer, Address, Telefon)
Example: Here are some example queries to demonstrate the use of the deductive web
database:
Select all products with article name "Doc Martens" that cost less than 100:
product(Supplier, 'Doc Martens', Price, Producer, Address, Telefon), Price < 100
Select all products offered by at least two suppliers:
product(S1, Article, P1, _, _, _), product(S2, Article, P2, _, _, _), S1 \= S2
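The rules above can be read operationally as a join over extracted facts. The following Python sketch mirrors the article and product rules; the sample facts and the exact tuple layout are our own illustrative choices, since the full argument lists are not shown in the paper.

# Sample extracted facts; the tuple layout is an illustrative assumption.
web_page = [('ABC Schuhe', 'shoes-abc.html'), ('Schuhland', 'schuhland.html')]
price_list = {  # document -> list of (Article, Price, ProducerUrl)
    'shoes-abc.html': [('Doc Martens', 89, 'producer-a.html')],
    'schuhland.html': [('Doc Martens', 95, 'producer-a.html')],
}
address = {'producer-a.html': ('AirWair', 'Wollaston, UK', '+44 ...')}

def article():
    """article(Supplier, Article, Price, ProducerUrl) :- web_page(Supplier, Doc), price_list(Doc, ...)."""
    for supplier, doc in web_page:
        for art, price, producer_url in price_list.get(doc, []):
            yield supplier, art, price, producer_url

def product():
    """product(...) :- article(...), address(ProducerUrl, Producer, Addr, Tel)."""
    for supplier, art, price, producer_url in article():
        if producer_url in address:
            producer, addr, tel = address[producer_url]
            yield supplier, art, price, producer, addr, tel

# "Doc Martens cheaper than 100":
print([p for p in product() if p[1] == 'Doc Martens' and p[2] < 100])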
4.2. Optimizing Web Retrieval
The following example shows how a query optimization technique proposed by Levy [19]
can be implemented and used in extended logic programs. To avoid the fetching of senseless
web pages and starting a fact-retrieval process we know for certain to fail, Levy suggests
the use of source descriptions. For the fact retrieval from the WWW this might offer a great
speed up, because due to the network load the fetching of web documents is often very time
intensive. In the context of extended logic programs, we can easily apply these methods,
by the definition of rules, whose body literals define constraints on the head arguments
expressing our knowledge about the content of the web pages. The following example
illustrates these methods:
offer(Price, Country, ...) :- Price > 20000, Price < 40000, ..., cars(WebPage, Price, Country)
offer(Price, Country, ...) :- Price > 40000, Price < 60000, ..., cars(WebPage, Price, Country)
template cars(WebPage, Price, Country) := ...
Assuming we are interested in American cars that cost 50000 dollars, a query to the rule offer will retrieve the according offers. Because of the additional constraints
on the price and the country given in the body of the rule offer, the irrelevant web page
with german car offers is left out. By simple methods, provided by the logic programming
paradigm for free, we are able to guide the search and fact retrieval in the world wide web
based on knowledge representation techniques [3] and we are able to speed up the search
for relevant information.
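The effect of such source descriptions is simply to discard sources whose declared constraints are incompatible with the query before any page is fetched. A compact Python sketch of this pruning step follows; the page names and price bounds are invented for illustration.

# Hypothetical source descriptions: page -> (min_price, max_price, country).
SOURCES = {
    'german-cars.html':   (20000, 40000, 'german'),
    'american-cars.html': (40000, 60000, 'american'),
}

def relevant_sources(price, country):
    """Return only the pages whose declared constraints admit the query,
    so irrelevant pages are never fetched or tokenized."""
    return [page for page, (lo, hi, c) in SOURCES.items()
            if lo < price < hi and c == country]

print(relevant_sources(50000, 'american'))   # ['american-cars.html']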
4.3. Conceptual Reasoning
Many information systems lack the ability to use conceptual background knowledge
to include related topics of interest to their search. Consider the case that a user is interested
in computer systems that cost less than 1000 DM. It is obvious that the system
should know the common computer types and descriptions (e.g. IBM, Toshiba, Compaq,
Pentium, Notebook, Laptop) and how they are conceptually related to each other. Such
knowledge will assist a system in performing a successful search. One way to represent
such knowledge is by concept description languages or in general by means of knowledge
representation techniques. In the last few years it has become apparent that logic is a well-suited analytic tool to represent and reason about such knowledge. Many formalisms have
been implemented using logic programming systems, for example PROLOG.
For example a simple relation is_a can be used to represent conceptual hierarchies to
guide the search for information. Consider the following small knowledge base:
is_a(notebook, computer)
is_a(desktop, computer)
is_a(X, notebook) :- notebook(X)
relevant(Q, Z) :- is_a(Z, Q)
relevant(Q, Z) :- is_a(Y, Q), relevant(Y, Z)
Assume our general query to search for computers for less than 1000 DM is split into a sub query like relevant(computer, X) to our small example knowledge base. The query computes a set of answers containing notebook, desktop and all known notebook instances.
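The relevant/2 relation is just the transitive closure of is_a. A short Python sketch of this closure computation follows; the is_a facts are taken from the knowledge base above, while the instance fact for 'ThinkPad' is added for illustration.

IS_A = [('notebook', 'computer'), ('desktop', 'computer'), ('ThinkPad', 'notebook')]

def relevant(concept):
    """All Z with relevant(concept, Z): direct sub-concepts/instances and,
    recursively, everything reachable through is_a."""
    result = set()
    frontier = {concept}
    while frontier:
        current = frontier.pop()
        for child, parent in IS_A:
            if parent == current and child not in result:
                result.add(child)
                frontier.add(child)
    return result

print(relevant('computer'))   # {'notebook', 'desktop', 'ThinkPad'}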
This additional inferred query information can be used in two different ways:
1. The derived conceptual information is used to search for new web pages, e.g. by
querying standard search engines with elements of as search keywords. On the
returned candidate pages further extended logic programs can be applied to extract
facts.
2. The information extraction process itself is enhanced with the derived information by
reducing the constraints on special token features in the token-templates to be applied.
Consider the case where only a single keyword Q is used with a token pattern to
constrain the matching process e.g. word(txt=Q). We can reduce this constrainedness
by constructing a more general (disjunctive) pattern, adding simple term-patterns whose feature values consist of the deduced knowledge, e.g. word(txt=Q) or word(txt=Z1) or ... or word(txt=Zn) with the Zi taken from the computed answer set. That means we include in the search all sub-concepts (e.g. notebook, desktop) and instances (e.g. 'ThinkPad') of the query concept.
5. The LogicRobot
This section will give a short overview of an application we implemented using the described
methods and techniques. A brief explanation of a domain dependent search engine for an
online advertisement newspaper is given.
5.1. The Problem
Very often web pages are organized by a chain of links the user has to follow to finally reach
the page he is interested in. Or the information the user wants to retrieve is split into many
pages. In both cases the user has to visit many pages to finally reach the intended page or to
collect data from them. To do this manually is a very exhausting and time consuming work
and furthermore it is very difficult for the user to take comparisons between the information
offered on the various web pages. Therefore an automatic tool to follow all links, to collect
the data and to provide the possibility to compare the retrieved information is needed to
free the user from this annoying work.
We call web information systems based on logic programs and token-templates LogicRobots. Similar to physical robots they navigate autonomously through their environment, the web. According to their ability to analyze and reason about web page contents and the incorporation of knowledge bases they are able to perceive their environment, namely what is on a web page. Due to the underlying logic program and the used AI techniques, e.g. knowledge representation, default reasoning etc., they act by collecting facts or following up more promising links. The problem we focused on was to build a LogicRobot for a web vendor offering private advertisements. Some of the offered columns force the user to follow about 80 to 100 links to see all advertisements, which is of course not very user-friendly.
A more elegant way would be to offer a web form where the user can specify the column
or columns to be searched either by entering a specific name or a keyword for a column
name, a description of the item he is searching for, a price constraint like less, greater or
equal to and finally a pattern of a telephone number to restrict the geographical area to be
searched. Figure 7 depicts the three main templates used for extracting information about
advertisements similar to those shown in Figure 4. The LogicRobot web interface for this
special task is presented in Figure 8 and a sample result page is given in Figure 9.
template price(Price) := once( #any and ?(word(value='VB') or word(value='FP'))
and word(value='DM') and ( (int(value=P) and {cfloat(P,Price)})
or (float(value=P) and {price ...}) ) )
template telefon(T) := #any and word(value='Tel') and ?punct(value='.')
and ?punct(value=':') and ( ... and ?(punct or op) and +int )
template product description(Article,Desc) := #any and html(tag=img) and html(tag=b)
and ...
and ...
Figure 7. Templates used for telephone number, price and advertised product extraction
Figure 8. The LogicRobot web interface
5.2. Implementational Notes
The LogicRobot for the search of advertisements is based on the logic programming library
TXW3 [28] that implements the techniques presented in this paper using ECLIPSE-Prolog
[8]. ECLIPSE supports modularized logic programming, so we modularized the architecture
of the LogicRobot into two main modules, the first module containing all needed
token-template definitions and the second the prolog program implementing the appropriate
template calls, the evaluation of the price constraints and further control operations. This
prolog module is executed by the CGI mechanism and communicates with the local http
daemon via stdin/stdout ports. So there is no additional server programming or network
software needed to setup a search engine based on extended logic programs.
With only 5 template definitions and approx. 200 lines of Prolog code we implemented this LogicRobot. The tests we carried out with our application are very promising. For example the query answering time, which includes fetching, tokenizing, extracting and comparing, is below 2 minutes for 100 web pages of advertisements (Figure 1). We think this is a very promising approach for a domain-specific search tool: it can be easily extended by AI methods, it offers flexible and fast configurability by means of declarative definitions (e.g. using PROLOG) and, most importantly, the concept of LogicRobots can be applied to various information domains on the World Wide Web.
6. Related Work and Conclusion
We presented the token-template language for the IE from semistructured documents, especially
from web pages. We showed how our wrapper language can be merged with logic
programs and gave a formal definition for the extension of an arbitrary answer complete first
order logical calculus with template theories. In conjunction with the area of logic programming
and deductive databases we can use these wrapper techniques to obtain inferences or
new deductively derived facts based on information extracted from the WWW. Furthermore
these methods can be used to build intelligent web information systems, like LogicRobots,
that gain from the closely related areas like deductive databases, knowledge representation
or logic programming based AI methods. We also showed how already developed
query optimization techniques (Section 4.2), can easily be integrated into our approach.
Our methods have been successfully integrated and used in the heterogeneous information
system GLUE [20] to access web data and integrate it into analytical and reasoning processes
among heterogeneous data sources (e.g. relational databases, spreadsheets, etc.). In
addition to our theoretical work we also implemented a logic programming library TXW3
that provides the language of token-templates and various other logic modules to program
LogicRobots for the WWW.
Several web information systems have been developed in the last few years. One class
of applications called Softbots, which are domain specific automated search tools for the
WWW, searching autonomously for relevant web pages and user requested information,
are similar to our concept of a LogicRobot. But such existing systems like Ahoy! [24] or
Shopbot use either tailored extraction techniques (Ahoy!) that are very domain specific or
their extraction techniques are based on highly restrictive assumptions about the syntactical
structure of a web page (Shopbot). Both systems do not follow the concept of a general
Figure 9. Query result page
purpose extraction language like token-templates are. Token-templates are applicable to
any kind of semistructured text documents, and hence not restricted to a specific domain.
Systems like IM [18] or W3QS [14] also provide means to query web information sources.
Though Levy et al. also choose a relational data model to reason about data, and show
several techniques for source descriptions or constructing query plans, they leave the problem
of information extraction undiscussed in their work. We showed solutions for both the
extraction of facts and reasoning by extended logic programs. The W3QS system uses a
special web query language similar to the relational database query language SQL. W3QS
uses enhanced standard SQL commands, e.g. by additional external unix program calls or
HTML related commands. Though an additional construction kit for information extrac-
tion processes is given, this seems to be focused only on the detection of hyper links and
their descriptions. The concept of database views for web pages is also introduced, but
no information about recursive views is provided, whereas extended logic programs offer
these abilities.
Heterogeneous information systems, like DISCO [29], GLUE [20], HERMES [22], Infomaster
[9] or TSIMMIS [5] all use special mediator techniques to access web information
sources among other data sources. These systems use their own mediator model (language)
to interface with the special data source wrappers. The system HERMES for example is
based on a declarative logical mediator language and therefore is similar to our approach
using extended logic programs as mediators and token-templates as special wrapper lan-
guage. The advantage of our presented approach is simply that the above named systems
except TSIMMIS and GLUE do not incorporate a general purpose wrapper language for
text documents. Additionally work on the expressive power of the mediator languages and
the used wrapper techniques of the other systems is of interest.
Different from the template based extraction languages described in [11] and [6] or the
underlying language used in the wrapper construction tool by Gruser et al. [10], token-
templates incorporate the powerful concepts of recursion and code calls. These concepts allow
the recognition and extraction of arbitrary hierarchical syntactic structures and extends the
matching process by additional control procedures invoked by code calls. Especially logic
programs used as code calls can guide the extraction process with a manifold of AI methods
in general.
Notes
1. In the sense that the feature set of the left token must be a subset of the feature set of the right token.
2. see [27] for a detailed formal definition.
--R
Wrapper generation for semistructured internet sources.
Theory Reasoning in Connection Calculi and the Linearizing Completion Approach.
Principles of Knowledge Representation.
Typed Feature Structures: an Extension of First-order Terms
The TSIMMIS project: Integration of heterogeneous information sources.
Relational Learning of Pattern-Match Rules for Information Extraction
A scalable comparison-shoppingagent for the world-wide web
International Computers Limited and IC-Parc
An Information Integration System.
A wrapper generation toolkit to specify and construct wrappers for web accesible data.
Extracting semistructured information from the web.
Linear Resolution with Selection Function.
A multidisciplinary survey.
A query system for the world-wide web
Wrapper Induction for Information Ex- traction
Foundations of Logic Programming.
Querying HeterogeneousInformation Sources Using Source Descriptions.
A flexible meta-wrapper interface for autonomous distributed information sources
HERMES: Reasoning and Mediator System
An Introduction to Unification-Based Approaches to Grammar
Dynamic reference sifting: A case study in the homepage domain.
Records for Logic Programming.
Automated Deduction by Theory Resolution.
The txw3-module
Scaling Heterogeneous Databases and the Design of Disco.
Mediators in the architecture of future information systems.
--TR
--CTR
Steffen Lange , Gunter Grieser , Klaus P. Jantke, Advanced elementary formal systems, Theoretical Computer Science, v.298 n.1, p.51-70, 4 April | logic robots;softbots;information extraction;template based wrappers;logic programming;mediators;deductive web databases;theory reasoning |
343534 | Clock synchronization with faults and recoveries (extended abstract). | We present a convergence-function based clock synchronization algorithm, which is simple, efficient and fault-tolerant. The algorithm is tolerant of failures and allows recoveries, as long as less than a third of the processors are faulty 'at the same time'. Arbitrary (Byzantine) faults are tolerated, without requiring awareness of failure or recovery. In contrast, previous clock synchronization algorithms limited the total number of faults throughout the execution, which is not realistic, or assumed fault detection. The use of our algorithm ensures secure and reliable time services, a requirement of many distributed systems and algorithms. In particular, secure time is a fundamental assumption of proactive secure mechanisms, which are also designed to allow recovery from (arbitrary) faults. Therefore, our work is crucial to realize these mechanisms securely. | INTRODUCTION
Accurate and synchronized clocks are extremely useful to
coordinate activities between cooperating processors, and
therefore essential to many distributed algorithms and sys-
tems. Although computers usually contain some hardware-based
clock, most of these are imprecise and have substantial
drift, as highly precise clocks are expensive and cumbersome.
Furthermore, even hardware-based clocks are prone to faults
and/or malicious resetting. Hence, a clock synchronization
algorithm is needed, that lets processors adjust their clocks
to overcome the effects of drifts and failures. Such algorithm
maintains in each processor a logical clock, based on the local
physical clock and on messages exchanged with other
processors. The algorithm must deal with communication
delay uncertainties, clock imprecision and drift, as well as
link and processor faults.
In many systems, the main need for clock synchronization is
to deal with faults rather than with drift, as drift rates are
quite small (some works e.g. [11, 12] actually ignore drifts).
It should be noted, however, that clock synchronization is
an on-going task that never terminates, so it is not realistic
to limit the total number of faults during the system's
lifetime. The contribution of this work is the ability to tolerate
unbounded number of faults during the execution, as
long as 'not too many processors are faulty at once'. This
is done by allowing processors which are no longer faulty to
synchronize their clocks with those of the operational processors
Our protocol withstands arbitrary (or Byzantine) faults,
where affected processors may deviate from their specified
algorithm in an arbitrary manner, potentially cooperating
maliciously to disrupt the goal of the algorithm or system.
It is obviously critical to tolerate such faults if the system
is to be secure against attackers, i.e. for the design of secure
systems. Indeed, many secure systems assume the use
of synchronized clocks, and while usually the effect of drifts
can be ignored, this assumption may become a weak spot
exploited by an attacker who maliciously changes clocks.
Therefore, solutions frequently try not to rely on synchronized
clocks, e.g. use instead freshness in authentication
protocols (e.g. Kerberos [22]). However, this is not always
achievable as often synchronized clocks are essential for efficiency
or functionality. In fact, some security tasks require
securely synchronized clocks by their very definition, for example
time-stamping [14] and e-commerce applications such
as payments and bids with expiration dates. Therefore, secure
time services are an integral part of secure systems such
as DCE [25], and there is on-going work to standardize a secure
version of the Internet's Network Time Protocol in the
IETF [28]. (Note that existing 'secure time' protocols simply
authenticate clock synchronization messages, and it is
easy to see that they may not withstand a malicious attack,
even if the authentication is secure.)
The original motivation for this work came from the need to
implement secure clock synchronization for a proactive security
toolkit [1]: Proactive security allows arbitrary faults
in any processor - as long as no more than f processors are
faulty during any (fixed length) period. Namely, proactive
security makes use of processors which were faulty and later
recovered. It is important to notice that in some settings it
may be possible for a malicious attacker to avoid detection,
so a solution is needed that works even when there is no
indication that a processor failed or recovered. To achieve
that, algorithms for proactive security periodically perform
some 'corrective/maintenance' action. For example, they
may replace secret keys which may have been exposed to
the attacker. Clearly, the security and reliability of such periodical
protocols depend on securely synchronized clocks,
to ensure that the maintenance protocols are indeed performed
periodically. There is substantial amount of research
on proactive security, including basic services such as agreement
[24], secret sharing [23, 17], signatures e.g. [16] and
pseudo-randomness [4, 5]; see survey in [3]. However, all of
the results so far assumed that clocks are synchronized. Our
work therefore provides a missing foundation to these and
future proactive security works.
1.1 Relations to prior work
There is a very large body of research on clock synchroniza-
tion, much of it focusing on fault-tolerance. Below we focus
on the most relevant works.
A number of works focus on handling processor faults, but
ignore drifts. Dolev and Welch [11, 12] analyzed clock synchronization
under a hybrid faults model, with recovery from
arbitrary initial state of all processors (self stabilization) as
well as napping (stop) failures in any number of processors
in [11] or Byzantine faults in up to third of the processors
in [12]. Both works assume a synchronous model, and synchronize
logical clocks - the goal is that all clocks will have
the same number at each pulse. Our results are not directly
comparable, since it is not clear if our algorithm is
self stabilizing. (In our analysis we assume that the system
is initialized correctly.) On the other hand, we allow Byzantine
faults in third of the processors during any period, work
in asynchronous setting, allow drift and synchronize to real
time.
The model of time-adaptive self stabilization as suggested
by Kutten and Patt-Shamir [18] is closer to ours; there, the
goal is to recover from arbitrary faults at f processors in
time which is a function of f . We notice this is a weaker
model than ours in the sense that it assumes periods of no
faults. A time-adaptive, self-stabilizing clock synchronization
protocol, under asynchronous model, was presented by
Herman [15]. This protocol is not comparable to ours as it
does not allow drifts and does not synchronize to real time.
Among the works dealing with both processor faults and
drifts, most assume that once a processor failed, it never re-
covers, and that there is a bound f on the number of failed
processors throughout the lifetime of the system. Many such
works are based on local convergence functions. An early
overview of this approach can be found in Schneider's report
[26]. A very partial list of results along this line includes [13,
7, 8, 9, 21, 2, 20]. The Network Time Protocol, designed by
Mills [21], allows recoveries, but without analysis and proof.
Furthermore, while authenticated versions of [21] were pro-
posed, so far these do not attempt recovery from malicious
faults.
Our algorithm uses a convergence function similar to that
of Fetzer and Cristian [9] (which, in turn, is a refinement
of that of Welch and Lynch [20]). However, it seems that
one of the design goals of the solution in [9] is incompatible
with processor recoveries. Specifically, [9] try to minimize
the change made to the clocks in each synchronization oper-
ation. Using such small correction may delay the recovery of
a processor with a clock very far from the correct one (with
[9] such recovery may never complete). This problem accounts
for the difference between our convergence function
and the one in [9]. In the choice between small maximum
correction value, and fast recovery time, we chose the latter.
Another aspect in which [9] is optimal, is the maximum
logical drift (see Definition 3 in Section 2.3). In their so-
lution, the logical drift equals the hardware drift, whereas
in our solution there is an additive factor of O(2^-K), where
K is the number of synchronization operations performed
in every time period. (Roughly, we assume that less than a
third of the processors are faulty in each time period, and
require that several synchronization operations take place
in each such period.) As our model approaches that of [9]
(i.e., as the length of the time period approaches infinity),
this added factor to the logical drift approaches zero. We
conjecture that 'optimal' logical drift can not be achieved in
the mobile faults model.
Another difference between our algorithm and several traditional
convergence function based clock synchronization
algorithms, is that many such solutions proceed in rounds,
where correct processes approximately agree on the time
when a round starts. At the end of each round each processor
reads all the clocks and adjusts its own clock accordingly.
In contrast to this, our protocol (and also NTP [21]) does not
proceed in rounds. We believe that implementing round
synchronization across a large network such as the Internet
could be difficult.
A previous work to address faults and recoveries for clock
synchronization is due to Dolev, Halpern, Simons and Strong
[10]. In that work it is assumed that faults are detected. In
practice, faults are often undetected - especially malicious
faults, where the attacker makes every effort to avoid detection
of attack. Handling undetected faults and recoveries is
critical for (proactive) security, and is not trivial, as a recovering
processor may have its clock set to a value 'just a
bit' outside the permitted range. The solution in [10] rely
on signatures rather than authenticated links, and therefore
also limit the power of the attacker by assuming it cannot
collect too many 'bad' signatures (assumption A4 in [10]).
The algorithm of Dolev et al. [10] is based on broadcast,
and require that all processors sign and forward messages
from all other processors. This has several practical disadvantages
compared to local convergence function based
algorithms such as in the present paper. Some of these dis-
advantages, which mostly result from its 'global' nature, are
discussed by Fetzer and Cristian in [13]. Additional practical
disadvantages of broadcast-based algorithms include
sensitivity to transient delays, inability to take advantage of
realistic knowledge regarding delays, and the overhead and
delay resulting from depending on broadcasts reaching the
network (e.g. the Internet). On the other hand, being
a broadcast-based algorithm, Dolev et al. [10] require
only a majority of the processors to be correct (we need
two thirds). Also [10] only requires that the subnet of non
faulty processors be connected, rather than demanding a
direct link between any two processors. (But implementing
the broadcast used by [10] has substantial overhead and requires
two-third of processors to be correct and connected.)
1.2 Informal Statement of the Requirements
A clock synchronization algorithm which handles faults and
recoveries should satisfy:
Synchronization Guarantee that at all times, the clock
values of the non-faulty processors are close to each
other. 1
Accuracy Guarantee that the clock rates of non-faulty processors
are close to that of the real-time clock. One
reason for this requirement is that in practice, the set
of processors is not an island and will sometimes need
to communicate and coordinate with processors from
the "outside world".
Recovery Guarantee that once a processor is no longer
faulty, this processor recovers the correct clock value
and rejoins the "good processors" within a fixed amount
of time.
We present a formalization of this model and goals, and a
simple algorithm which satisfies these requirements. We analyze
our algorithm in a model where an attacker can temporarily
corrupt processors, but not communication links.
It may be possible to refine our analysis to show that the
same algorithm can be used even if an attacker can corrupt
both processors and links, as long as not too many of either
are corrupted "at the same time".
2. FORMAL MODEL
2.1 Network and Clocks
Our network model is a fully connected communication graph
of n processors, where each processor has its own local clock.
We denote the processor names by 1, ..., n and assume that
1 Note that a trivial solution of setting all local clocks to a
constant value achieves the synchronization goal. The accuracy
requirement prevents this from happening.
each processor knows its name and the names of all its neigh-
bors. In addition to the processors, the network model also
contains an adversary, who may occasionally corrupt processors
in the network for a limited time. Throughout the
discussion below we assume some bound ρ on the clock drift between "good processors", and a bound Δ on the time that it takes to send a message between two good processors. We refer to ρ as the drift bound and to Δ as the message delivery bound.
We envision the network in an environment with real time.
A convenient way of thinking about the real time is as just
another clock, which also ticks more or less at the same rate
as the processors' clocks. For the purpose of analysis, it is
convenient to view the local clock of a processor p as consisting
of two components. One is an unresettable hardware
clock Hp , and the other is an adjustment variable adj p , which
can be reset by the processor. The clock value of p at real
time τ, denoted Cp(τ), is the sum of its hardware clock and adjustment factor at this time, Cp(τ) = Hp(τ) + adj_p (these are the same notations as in [26]). 2 We stress that Hp and adj_p are merely a mathematical convenience, and that the processors (and adversary) do not really have access to their values. Formally, the only operations that processor p can perform on Hp and adj_p are reading the value Hp(τ) + adj_p and adding an arbitrary factor to adj_p. Other than these changes, the value of Hp changes continuously with τ (and the value of adj_p remains fixed).
Definition 1 (Clocks). The hardware clock of a processor p is a smooth, monotonically increasing function, denoted Hp(τ). The adjustment factor of p is a discrete function adj_p(τ) (which only changes when p adds a value to its adjustment variable). The local clock of p is defined as Cp(τ) = Hp(τ) + adj_p(τ).
We assume an upper bound ρ on the drift rate between processors' hardware clocks and the real time. Namely, for any τ1 < τ2, and for every processor p in the network, it holds that
(1 - ρ)(τ2 - τ1) <= Hp(τ2) - Hp(τ1) <= (1 + ρ)(τ2 - τ1). (2)
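For concreteness, the decomposition Cp = Hp + adj_p can be pictured with the following small Python sketch; time.monotonic merely stands in for the drifting, unresettable hardware clock, and the class is purely illustrative, not part of the protocol.

import time

class LogicalClock:
    """Local clock C_p = H_p + adj_p: the hardware clock is never reset,
    only the adjustment variable is changed."""
    def __init__(self):
        self._adj = 0.0                       # adj_p, settable by the processor

    def hardware(self):
        return time.monotonic()               # stands in for H_p (unresettable, drifting)

    def read(self):
        return self.hardware() + self._adj    # C_p(tau) = H_p(tau) + adj_p

    def adjust(self, delta):
        self._adj += delta                    # the only allowed write operation

clock = LogicalClock()
clock.adjust(+0.5)                            # a synchronization step adds to adj_p
print(clock.read())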
We note that in practice, ρ is usually fairly small (on the
2.2 Adversary Model
As we said above, our network model comes with an adver-
sary, who can occasionally break into a processor, resetting
its local clock to an arbitrary value. After a while, the adversary
may choose to leave that processor, and then we
would like this processor to recover its clock value.
We envision an adversary who can see (but not modify) all
the communication in the network, and can also break into
processors and leave them at wish. When breaking into a
In general adj p does not have to be a discrete variable, and
it could also depend on τ. We don't use that generality in
this paper, though.
processor p, the adversary learns the current internal state
of that processor. Furthermore, from this point and until it
leaves p, the adversary may send messages for p, and may
also modify the internal state of p, including its adjustment
variable adj p . Once the adversary leaves a processor p, it has
no more access to p's internal state. We say that p is faulty
(or controlled by the adversary), if the adversary broke into p
and did not leave it yet.
We assume reliable and authenticated communication between
processors p and q that are not faulty. More pre-
cisely, let Δ denote the message delivery bound. Then for any processors p and q not faulty during [τ, τ + Δ], if p sends a message to q at time τ, then q receives exactly the same message from p during [τ, τ + Δ]. Furthermore, if a non-faulty processor q receives a message from processor p at time τ, then either p has sent exactly this message to q during [τ - Δ, τ], or else it was faulty at some time during this interval. 3
The power of the adversary in this model is measured by
the number of processors that it can control within a time
interval of a certain length. This limitation is reasonable
because otherwise, even an adversary that can control only
one processor at a time, can corrupt all the clocks in the
system by moving fast enough from processor to processor.
Definition 2 (Limited Adversary). Let Π > 0 and f in {1, ..., n} be fixed. An adversary is f-limited (with respect to Π) if during any time interval [τ, τ + Π] it controls at most f processors.
We refer to Π as the time period and to f as the number of faulty processors.
Notice that Definition 2 implies in particular that an f -
limited adversary who controls f processors and wants to
break into another one, must leave one of its current processors
at least Π time units before it can break into the new one. In the rest of the paper we assume that n >= 3f + 1.
2.3 Clock Synchronization Protocols
Intuitively, the purpose of a clock synchronization algorithm
is to ensure that processors' local clocks remain close to each
other and close to the real time, and that faulty processors
become synchronized again quickly after the adversary leave
them. It is clear, however, that no protocol can achieve
instantaneous recovery, and we must allow processors some
time to recover. Typically we want this recovery time to be
no more than ', so by the time the adversary breaks into the
new processors, the ones that it left are already recovered.
Definition 3 (Clock Synchronization). Consider a clock synchronization protocol that is executed in a network with drift rate ρ and message delivery bound Δ, and in the presence of an f-limited adversary with respect to time period Π.
3 This formulation of "good links" does not completely rule
out replay of old messages. This does not pose a problem
for our application, however.
i. We say that the protocol ensures synchronization with maximal deviation δ, if at any time τ and for any two processors p and q not faulty during [τ - Π, τ], it holds that |Cp(τ) - Cq(τ)| <= δ.
ii. We say that the protocol ensures accuracy with maximal drift ρ' and maximal discontinuity λ, if whenever p is not faulty during an interval [τ1, τ2], it holds that
(1 - ρ')(τ2 - τ1) - λ <= Cp(τ2) - Cp(τ1) <= (1 + ρ')(τ2 - τ1) + λ.
3. A CLOCK SYNCHRONIZATION PROTOCOL
As in most (practical) clock synchronization protocols, the
most basic operation in our protocol is the estimation by a
processor of its peers' clocks. We therefore begin in Sub-section
3.1 by discussing the requirements from a clock estimation
procedure and describing a simple (known) procedure
for doing that. Then, in Subsection 3.2 we describe
the clock synchronization protocol itself. In this description
we abstract the clock estimation procedure, and view it as a
"black box" that provides only the properties that were discussed
before. Finally, in Subsection 3.3 we elaborate on
some aspects of our protocol, and compare it with similar
synchronization protocols for other models.
3.1 Clock Estimation
Our protocol's basic building block is a subroutine in which
a processor p estimates the clock value of another processor
q. The (natural) requirements from such a procedure are:
Accuracy. The value returned from this procedure
should not be too far from the actual clock value of
processor q.
Bounded error. Along with the estimated clock value,
p also gets some upper bound on the error of that
estimation.
For technical reasons it is also more convenient to have this
procedure return the distance between the local clocks of p
and q, rather than the clock value of q itself. Hence we define
a clock estimation procedure as a two-party protocol, such
that when a processor p invokes this protocol, trying to estimate
the clock value of another processor q, the protocol returns
two values (dq ; aq ) (for distance and accuracy). These
values should be interpreted as "since the procedure was in-
voked, there was a point in which the difference Cq \Gamma Cp was
about dq , up to an error of aq ". Formally, we have
Definition 4. We say that a clock estimation routine
has reading error ε and timeout MaxWait if, whenever a processor
p is non-faulty during the time interval [τ, τ + MaxWait]
and it calls this routine at time τ to estimate the clock of q,
then the routine returns at some time τ' ≤ τ + MaxWait, with some
values (d, a). Moreover, if q was also non-faulty during the
interval [τ, τ'], then the values (d, a) satisfy the following:
• a ≤ ε, and
• there was a time τ'' ∈ [τ, τ'] at which |C_q(τ'') - C_p(τ'') - d| ≤ a.
We now describe a simple clock estimation algorithm. The
requestor p sends a message to q, who returns a reply to p
containing the time according to the clock of q (when
sending the reply). If p does not receive a reply within a
timeout derived from the message delivery bound, p aborts the
estimation and sets its output accordingly. Otherwise, if p sends
its "ping" message to q at local time S, and receives an answer C
at local time R, it sets d_q = C - (R+S)/2 and a_q = (R-S)/2.
Intuitively, p estimates that at its local time (R+S)/2, q's time
was C. If the network is totally symmetric, that is, the time
for the message to arrive was identical on the way from p to
q and on the way back from q to p, and p's clock progressed
between S and R at a constant rate, then the estimation would
be totally accurate. In any case, if q returned an answer C,
then at some time between p's local time S and p's local time
R, q had the value C, so the estimation of the offset can't
miss by more than (R-S)/2.
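To make the round-trip computation concrete, here is a minimal Python sketch of this procedure; the local_time, send_ping and wait_for_reply callables and the max_wait timeout are hypothetical interfaces introduced only for illustration, not names from the paper.

```python
def estimate_offset(q, local_time, send_ping, wait_for_reply, max_wait):
    """Estimate the clock offset d ~ C_q - C_p together with an error bound a.

    local_time():          the caller's current local clock value
    send_ping(q):          sends a ping message to processor q
    wait_for_reply(q, t):  blocks until a reply from q arrives or t time units
                           elapse on the local clock; returns the remote clock
                           value C or None on timeout
    """
    S = local_time()                  # local send time
    send_ping(q)
    C = wait_for_reply(q, max_wait)   # remote clock value carried in the reply
    if C is None:                     # no reply within the timeout
        return None                   # abort the estimation for q
    R = local_time()                  # local receive time
    d = C - (R + S) / 2.0             # estimated offset C_q - C_p
    a = (R - S) / 2.0                 # cannot be off by more than half the RTT
    return d, a
```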
This simple procedure can be "optimized" in several ways.
A common method, which is used in practice to decrease the
error in estimating the peer's clock (at the expense of worse
timeliness), is to repeatedly ping the other processor and
choose the estimate obtained from the ping with the smallest
round-trip time. This is used, for example, in the NTP
protocol [21].
Also to reduce network load it may be possible to piggyback
clock querying messages on other messages, or to perform
them in a different thread which will spread them across a
time interval. Of course, if we implement the latter idea in
the mobile adversary setting, a clock synchronization protocol
should periodically check that this thread exists and
restart it otherwise (to protect against the adversary killing
that thread). We note that when implemented this way, we
cannot guarantee the conditions of Definition 4 anymore,
since the separate thread may return an old cached value
which was measured before the call to the clock estimation
procedure. (Hence, the analysis in this paper cannot be
applied "right out of the box" to the case where the time
estimation is done in a separate thread.)
3.2 The Protocol
Sync is our clock synchronization protocol. It uses a clock
estimation procedure such as the one described in Section
3.1, which we denote by estimateOffset, with the time-out
bound denoted by MaxWait and maximal error -. Other
parameters in this protocol are the (local) time SyncInt between
two executions of the synchronization protocol, and a
parameter WayOff , which is used by a processor to gauge the
distance between its clock and the clocks of the other proces-
sors. These parameters are (approximately) computed from
the network model parameters ae; - and '. The constraints
that these parameters should satisfy are:
SyncInt - 2MaxWait - 4-
is the maximum deviation we
want to achieve (and we have
These settings are further discussed in the analysis (Sec-
tion 4.2) and in Section 3.3.
The Sync protocol is described in Figure 1. The basic idea is
that each processor p uses estimateOffset to get an estimate
for the clocks of its peers. Then p eliminates the f smallest
and f largest values, and uses the remaining values to adjust
its own clock. Roughly, p computes a "low value" C_m, which
is the f + 1'st smallest estimate, and a "high value" C_M,
which is the f + 1'st largest estimate. If p's own clock C_p is
more than WayOff away from the interval [C_m, C_M], then p
knows that its clock is too far from the clocks of the "good
processors", so it ignores its own clock and resets it to
(C_m + C_M)/2. Otherwise, p's clock is "not too far" from the other
processors, so we would like to limit the change to it. In
this case, instead of completely ignoring the old clock value,
p resets its clock to (min(C_m, C_p) + max(C_M, C_p))/2. (That is,
if p's clock was below C_m or above C_M, it will only move
half-way towards these values.)
The details of the Sync protocol are slightly different, though,
specifically in the way that the "low value" and "high value"
are chosen. Processor p first uses the error bounds to generate
overestimates and underestimates for these clock values,
and then computes the "low value" C m as the f +1'st smallest
overestimate, and the "high value" the f largest
underestimate. In the analysis we also assume that all the
clock estimations are done in parallel, and that the time that
it takes to make the local computations is negligible, so a
run of Sync takes at most MaxWait time on the local clock.
(This is not really crucial, but it saves the introduction of
an extra parameter in the analysis.)
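The core of sync() described above can be sketched as follows in Python, working with the relative offsets d_q rather than absolute clock values; the handling of timed-out peers and the exact form of the WayOff test are illustrative assumptions, not taken verbatim from the paper.

```python
import math

def sync_step(adj_p, peers, f, way_off, estimate):
    """One execution of sync() for processor p -- an illustrative sketch.

    adj_p:    current adjustment variable (logical clock = hardware clock + adj_p)
    estimate: callable q -> (d_q, a_q) with d_q ~ C_q - C_p, or None on timeout
    Returns the new adjustment variable.
    """
    over, under = [], []
    for q in peers:
        r = estimate(q)
        if r is None:                      # timed-out peer: treat its clock as
            d, a = 0.0, math.inf           # arbitrarily far away (assumption)
        else:
            d, a = r
        over.append(d + a)                 # overestimate of C_q - C_p
        under.append(d - a)                # underestimate of C_q - C_p
    d_m = sorted(over)[f]                  # "low" value: (f+1)'st smallest overestimate
    d_M = sorted(under)[-(f + 1)]          # "high" value: (f+1)'st largest underestimate
    if d_m > way_off or d_M < -way_off:    # own clock is way off the good range
        return adj_p + (d_m + d_M) / 2.0   # jump to the middle of [C_m, C_M]
    return adj_p + (min(d_m, 0.0) + max(d_M, 0.0)) / 2.0  # move at most half-way
```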
3.3 Discussion
Our Sync protocol follows the general framework of "conver-
gence function synchronization algorithms" (see [26]), where
the next clock value of a processor p is computed from its
estimates for the clock values of other processors, using a
fixed, simple convergence function. 4
rounds. As mentioned in Section 1.1, one notable difference
between our protocol and other protocols that have
been proposed in the literature is that many convergence
function protocols (for example [8, 9]) proceed in rounds,
where each processor keeps a different logical clock for each
round. (A round is the time between two consecutive synchronization
protocols.) In these protocols, if a processor is
asked for a "round-i" clock when this processor is already
in its i + 1'st round, it would return the value of its clock
"as if it didn't do the last synchronization protocol".
In contrast, in our Sync protocol a processor p always responds
with its current clock value. This makes the analysis
of the protocol a little more complicated, but it greatly
simplifies the implementation, especially in the mobile adversary
setting (since variables such as the current round
4 In the current algorithm and analysis, a processor needs to
estimate the clocks of all other processors; we expect that
this can be improved, so that a processor will only need to
estimate the clocks of its local neighbors.
Figure 1: Algorithm Sync for processor p
Parameters: SyncInt // time between synchronizations
            WayOff  // bound for clocks which are very far from the rest
1.  Every SyncInt time units call sync()
3.  function sync () {
4.    For each q in {1, ..., n} do
5.      (d_q, a_q) <- estimateOffset(q)
6.      d+_q <- d_q + a_q   // overestimate of C_q - C_p
7.      d-_q <- d_q - a_q   // underestimate of C_q - C_p
8.    d_m <- the f + 1'st smallest d+_q
9.    d_M <- the f + 1'st largest d-_q
10.   If d_m > WayOff or d_M < -WayOff
11.     then adj_p <- adj_p + (d_m + d_M)/2
12.     else adj_p <- adj_p + (min(d_m, 0) + max(d_M, 0))/2
13. }
last round's clock, and the time to begin the next
round have to be recovered after a break-in).
Known values. Another practical advantage of our protocol
is that it does not require knowledge of the values of parameters
such as the message delivery bound, the hardware
drift ρ, or the maximum deviation δ, which may be hard to
measure in practice (in fact they may even change during
the course of the execution). We only use these values in
the analysis of the protocol. In practice, all the algorithm
parameters which do depend on these values (like MaxWait,
SyncInt and WayOff) may overestimate them by a multiplicative
factor without much harm (i.e. without introducing
such a factor to the maximum deviation, logical drift or
recovery time actually achieved).
When to perform Sync? In our protocol, a processor
executes the Sync protocol every SyncInt time units of local
time, and we do not make any assumptions about the
relative times of Sync executions in different processors. A
common way to implement this is to set up an alarm at the
end of each execution, and to start the new execution when
this alarm goes off. In the mobile adversary setting, one
must make sure that this alarm is recovered after a break-in
We note that our analysis does not depend on the processors
executing a Sync exactly every SyncInt time units. Rather,
all we need is that during a time interval of (1 + ρ)(SyncInt +
MaxWait) real time, each processor completes at least one
and at most two Sync's.
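As a concrete illustration of this scheduling, a small Python sketch that re-arms a local-clock alarm after every execution; the local_time and sync callables, the polling granularity, and the use of a background thread are assumptions made for illustration only.

```python
import threading
import time

def start_sync_loop(sync, local_time, sync_int, poll=0.01):
    """Run sync() every sync_int units of *local* time (illustrative sketch).

    The alarm is re-armed at the end of every execution; in the mobile
    adversary setting, the recovery code must also restore next_due (and
    this thread) after a break-in, which is not shown here.
    """
    def run():
        next_due = local_time() + sync_int
        while True:
            if local_time() >= next_due:    # the local-clock alarm went off
                sync()
                next_due = local_time() + sync_int
            time.sleep(poll)                # real-time polling granularity
    threading.Thread(target=run, daemon=True).start()
```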
4. ANALYSIS
Let T denote some value such that every non-faulty processor
completes at least one and at most two full Sync's
during any interval of length T. Specifically, setting
T = (1 + ρ)(SyncInt + MaxWait) is appropriate for this purpose
(where SyncInt is the time that is specified in the protocol,
MaxWait is a bound on the execution time of a single Sync,
and ρ is the drift rate).
4.1 Main Theorem
The following main theorem characterizes the performance
achieved by our protocol:
Theorem 5. Let T be as defined above, let K
and assume that K ≥ 5. Then
i. The Sync protocol fulfills the synchronization requirement
with maximum deviation
ii. The Sync protocol fulfills the accuracy requirement with
logical drift ~
and discontinuity
We note that the theorem shows a tradeoff between the rate
at which the Sync protocol is performed (as a function of
') and how optimal its performance is. That is, if we choose
T to be small compared to ' (for instance
is very small and so we get almost perfect accuracy (~ae - ae)
and the significant term in the maximum deviation bound
is 16-.
4.2 Clock Bias
For the purpose of analysis, it will be more convenient to
consider the bias of the clocks, rather than the clock values
themselves. The bias of processor p at time τ is the difference
between its logical clock and the real time, and is
denoted by B_p(τ). Namely, B_p(τ) = C_p(τ) - τ.
When the real time τ is implied by the context, we often omit
it from the notation and write just B_p instead of B_p(τ).
In the analysis below we view the protocol Sync as affecting
the biases of processors, rather than their clock values. In
particular, in an execution of Sync by processor p, we can
view d_q as an estimate for B_q - B_p rather than an estimate
for C_q - C_p, and we can view the modification of adj_p in the
last step as a modification of B_p. We can therefore re-write
the protocol in terms of biases rather than clock values, as in Figure 2.
Figure 2: Algorithm Sync for processor p: bias formulation
Parameters: SyncInt // time between synchronizations
            WayOff  // bound for clocks which are very far from the rest
1.  Every SyncInt time units call sync()
3.  function sync () {
4.    For each q in {1, ..., n} do
5.      (d_q, a_q) <- estimateOffset(q)
6.      B+_q <- B_p + d_q + a_q   // overestimate of B_q
7.      B-_q <- B_p + d_q - a_q   // underestimate of B_q
8.    B^(m)_p <- the f + 1'st smallest B+_q
9.    B^(M)_p <- the f + 1'st largest B-_q
10.   If B_p < B^(m)_p - WayOff or B_p > B^(M)_p + WayOff
11.     then B_p <- (B^(m)_p + B^(M)_p)/2
12.     else B_p <- (min(B^(m)_p, B_p) + max(B^(M)_p, B_p))/2
13. }
We note that by referencing B_p in the protocol, we mean
B_p(τ) where τ is the real time at which this reference takes
place. We stress that the protocol cannot be implemented
as it is described in Figure 2, since a processor p does not
know its bias Bp . Rather, the above description is just an
alternative view of the "real protocol" that is described in
Figure
1.
4.3 Proof Overview of the Main Theorem
Below we provide only an informal overview of the proof.
A few more details (including a useful piece of syntax and
statements of the technical lemmas) can be found in Appendix
A. A complete proof will be included in the full
version of the paper. For simplicity, in this overview we
only look at the case with no drifts and no clock-reading
errors. (Note that in this case we always have a_q = 0, so in
Steps 6-7 of the protocol the overestimates and underestimates coincide.)
The analysis looks at consecutive time intervals I_0, I_1, I_2, ...,
each of length T, and proceeds by induction over these
intervals. For each interval I_i we prove "in spirit" the following
claims:
i. The bias values of the "good processors" get closer
together: If they were at distance δ from each other
at the beginning of I_i, they will be at distance 7δ/8 at
the end of it.
ii. The bias values of the "recovering processors" gets
(much) closer to those of the good processors. If a recovering
processor was at distance - from the "range
of good processors" at the beginning of I i , it would be
at distance at most -=2 from that range at the end of
I i .
It therefore follows that after a few such intervals, the bias of
a "recovering processor" will be at most ffi away from those
of the "good processors".
To prove the above claims, our main technical lemma considers
a given interval I i , and assumes that there is a set G
of at least n - f processors, which are all non-faulty throughout
I_i, and all have bias values in some small range at the
beginning of I_i (w.l.o.g., this can be the range [-D, D]).
Then, we prove the following three properties:
Property 1. We first show that the biases of the processors
in G remain in the range [-D, D] throughout the interval
I_i. This is so because in every execution of Sync, a
processor p in G gets biases in that range from
all other processors in G. Since G contains more than
2f processors, both B_p^(m) and B_p^(M) are in that
range, and so p's bias remains in that range also after
it completes the Sync protocol.
Also, it follows from the same argument that the test in
Step 10 never succeeds for p, so processor p never ignores
its own current bias in Step 11 of the protocol.
Property 2. Next we consider processors p in G whose initial
bias values are low (say, below the median for G). Since
p executes at most two Sync's during the time
interval I_i, and in each Sync it takes the average of its
own current bias and another bias below D, the bias
of p remains bounded strictly below D. Specifically,
one can show that the resulting bias values cannot be
larger than (Z + 3D)/4 (where Z is the initial median
value).
Similarly, for the processors q in G with high initial
bias values, the bias values remain bounded strictly
above -D, specifically at least (Z - 3D)/4.
Property 3. Last, we use the result of the previous steps to
show that at the end of the interval, the bias of every
processor in G is between (Z - 7D)/8 and (Z + 7D)/8.
(Hence, in the case of no errors or drifts, the size of
the interval that includes all the processors in G shrinks
from 2D to 7D/4.)
To see this, recall that by the result of the previous
step, whenever a processor p in G executes a Sync, it
gets bias values which are bounded by (Z + 3D)/4 from
all the processors with low initial biases, and so its low
estimate B_p^(m) must also be smaller than (Z + 3D)/4.
Similarly, it gets bias values which are bounded from below
by (Z - 3D)/4 from all the processors with high initial
biases, and so its high estimate B_p^(M) must be larger
than (Z - 3D)/4. Since the bias of p after its Sync
protocol is computed as (min(B_p^(m), B_p) + max(B_p^(M), B_p))/2,
and since B_p is in the range [-D, D] by the
result of the first step, the result of this step follows.
Moreover, a similar argument can be applied even to
a processor outside G whose initial bias is not in the
range [-D, D]. Specifically, we can show that if at the
beginning of interval I_i a non-faulty processor p has a
bias that exceeds D by some amount, then at the end of
the interval the bias of p exceeds (Z + 7D)/8 by at most
half of that amount. Hence, the distance between p and
the "good range" shrinks by at least a factor of two.
A formal analysis, including the effects of drifts and reading
errors, will be included in the full version of the paper.
5. FUTURE DIRECTIONS
Our results require that at most a third of the processors
are faulty during each period. Previous clock synchronization
protocols assuming authenticated channels (as we do)
were able to require only a majority of non-faulty processors
[19, 27]. It is interesting to close this gap. In [10] there is
another, weaker requirement: only that the subnetwork containing
the non-faulty processors remains connected (but [10] also
assumes signatures). It may be possible to prove a variant
of this for our protocol, in particular it would be interesting
to show that it is sufficient that the non-faulty processors
form a sufficiently connected subgraph. If this holds, it will
also justify limiting the clock synchronization links to a limited
number of neighbors for each processor, which is one
of the practical advantages of convergence-based clock synchronization.
(It should be noted that (3f + 1)-connectivity is not sufficient
for our protocol. One can construct a graph on 6f +2 nodes
which is (3f + 1)-connected, and yet our protocol does not
work for it. This graph consists of two cliques of 3f+1 nodes,
and in addition the i'th node of one clique is connected to
the i'th node of the other. Now, this graph is clearly
(3f + 1)-connected, but our protocol cannot guarantee that the
clocks in one clique do not drift apart from those in the
other.)
Additional work will be required to explore the practical
potential of our protocol. In particular, practical protocols
such as the Network Time Protocol [21] involve many mechanisms
which may provide better results in typical cases, such
as feedback to estimate and compensate for clock drift. Such
improvements may need to be added to our protocol (while making
sure to retain security!), as well as other refinements in the
protocol or analysis to provide better bounds and results in
typical scenarios.
The Synchronization and Accuracy requirements we defined
only talk about the behavior of the protocol when the adversary
is suitably limited. It may also be interesting to ask
what happens with stronger adversaries. Specifically, what
happens if the adversary was "too powerful" for a while, and
now it is back to being f-limited. An alternative way of asking
the same question is what happens when the adversary is
limited, but the initial clock values of the processors are ar-
bitrary. Along the lines of [11, 12], it is desirable to improve
the protocol and/or analysis to also guarantee self stabiliza-
tion, which means that the network eventually converges to
a state where the non-faulty processors are synchronized.
6. REFERENCES
--R
The
implicit rejection and average function for fault-tolerant physical clock synchronization
Maintaining Security in the Presence of Transient Faults
A proactive pseudo-random generator
Maintaining authenticated communication in the presence of break-ins
Probabilistic Clock Synchronization
Probabilistic Internal Clock Synchronization
An Optimal Internal Clock Synchronization Algorithm
Dynamic Fault-Tolerant Clock Synchronization
Lower bounds for convergence function based clock synchronization
Phase Clocks for Transient Fault Repair
Proactive public key and signature systems
Sharing, or: How to cope with perpetual leakage
Synchronizing clocks in the presence of faults
A new fault-tolerant algorithm for clock synchronization
the Network Time Protocol.
Kerberos: An Authentication Service for Computer Networks
How to withstand mobile virus attacks
A new solution to the byzantine generals problem
Chapter 7: DCE Time Service: Synchronizing Network Time
Understanding Protocols for Byzantine Clock Synchronization Technical Report TR87-859
--TR
Synchronizing clocks in the presence of faults
A new solution for the byzantine generals problem
Optimal clock synchronization
A new fault-tolerant algorithm for clock synchronization
How to withstand mobile virus attacks (extended abstract)
Understanding DCE
Wait-free clock synchronization
Dynamic fault-tolerant clock synchronization
Lower bounds for convergence function based clock synchronization
Maintaining authenticated communication in the presence of break-ins
Time-adaptive self stabilization
Proactive public key and signature systems
The proactive security toolkit and applications
Maintaining Security in the Presence of Transient Faults
Proactive Secret Sharing Or
Understanding Protocols for Byzantine Clock Synchronization
--CTR
Michael Backes , Christian Cachin , Reto Strobl, Proactive secure message transmission in asynchronous networks, Proceedings of the twenty-second annual symposium on Principles of distributed computing, p.223-232, July 13-16, 2003, Boston, Massachusetts
Kun Sun , Peng Ning , Cliff Wang, Fault-Tolerant Cluster-Wise Clock Synchronization for Wireless Sensor Networks, IEEE Transactions on Dependable and Secure Computing, v.2 n.3, p.177-189, July 2005
Hermann Kopetz , Astrit Ademaj , Alexander Hanzlik, Combination of clock-state and clock-rate correction in fault-tolerant distributed systems, Real-Time Systems, v.33 n.1-3, p.139-173, July 2006
Kun Sun , Peng Ning , Cliff Wang, TinySeRSync: secure and resilient time synchronization in wireless sensor networks, Proceedings of the 13th ACM conference on Computer and communications security, October 30-November 03, 2006, Alexandria, Virginia, USA | clock synchronization;proactive systems;mobile adversary |
343562 | A New Convergence Proof for Finite Volume Schemes Using the Kinetic Formulation of Conservation Laws. | We give a new convergence proof for finite volume schemes approximating scalar conservation laws. The main ingredients of the proof are the kinetic formulation of scalar conservation laws, a discrete entropy inequality, and the velocity averaging technique. | Introduction
. We consider the Cauchy problem for nonlinear hyperbolic
scalar conservation laws in several space dimensions.
#t
on the slab # := [0, T
for compactly supported initial data
(R d-1 ). We assume the
flux function f in C 1,1
loc (R) and As is well known, solutions of nonlinear
conservation laws may become discontinuous in finite time, so weak solutions must
be considered, i.e., functions
# L #) such that
#t
for all # D(#), where
x). As usual, we require an entropy condition (cf. Lax
[La'73]). For any entropy U # C 2 (R) we define the entropy flux
ds.
Then the entropy condition reads as follows: For all convex U and # D(# 0,
#t
A function
# L #) such that (1.2) and (1.4) hold for all convex entropies U
will be called a weak entropy solution of the Cauchy problem (1.1).
We are concerned with the convergence of approximations of
u by finite volume
schemes. This question has a history going back to the 1950s. Let us point out two
modern developments: The first is Kuznetsov's [Kz'76] approximation theory, which
was generalized by Vila [Vi'94] to first-order finite volume methods on unstructured
# Received by the editors September 30, 1997; accepted for publication (in revised form) February
26, 1999; published electronically February 1, 2000. This work was supported by Deutsche
Forschungsgemeinschaft, SFB 256 at Bonn University.
http://www.siam.org/journals/sinum/37-3/32806.html
Institut fur Angewandte Mathematik, Wegelerstrasse 10, 53115 Bonn, Germany (mwest@
iam.uni-bonn.de, noelle@iam.uni-bonn.de).
grids and by Cockburn, Coquel, and LeFloch [CCL'94] to higher-order schemes. Further
generalizations can be found in Cockburn and Gremaud [CG'96] and Noelle
[No'96]. The second approach is based on a uniqueness result for measure-valued
solutions due to DiPerna [Di'85], which was first applied to the analysis of numerical
schemes by Szepessy [Sz'89] and Coquel and LeFloch [CL'91, CL'93]. Cockburn,
Coquel, and LeFloch [CCL'95] and Kroner and Rokyta [KR'94] applied this theory
to first-order finite volume schemes, and Kroner, Noelle, and Rokyta [KNR'95] to
higher-order schemes. Noelle [No'95] extended these results to irregular grids, where
cells may become flat as h # 0, and to general E-fluxes, which include Godunov's
flux. Both Kuznetsov's and DiPerna's approaches rely on Kruzkov's existence and
uniqueness result [Kr'70].
In this paper, we give a convergence proof for finite volume schemes which does not
rely on [Kr'70]. Instead, our approach is built upon the recent kinetic formulation for
scalar conservation laws and the velocity averaging technique. As in all convergence
proofs, a discrete entropy inequality plays a crucial role.
The kinetic formulation was introduced by Lions, Perthame, and Tadmor [LPT'94].
They show that there is a one-to-one correspondence between the entropy solutions
of a scalar conservation law and solutions of a linear transport equation for which
a certain nonlinear constraint holds true. More precisely, one considers functions
depending on space-time and an additional v # R that solve the following equation:
#R
#t
R
for all # D(# R). Here
m is a bounded nonnegative measure defined on #R
and # 0 are the initial data. This equation is supplemented with an assumption on the
structure of #. If the function # is defined by
for all
should have the form
for some scalar function
u defined on #. (An analogous statement should hold for the
initial data.) Then we have the following equivalence (shown in [LPT'94]).
Theorem 1.1.
(i) Let
u be a weak entropy solution of problem (1.1). Then there is a bounded
nonnegative measure
m such that
solves the transport equation (1.5) for appropriate initial data. The measure
m is supported in # [-
. Furthermore, we
have
Conversely, let
and a bounded nonnegative
measure
m be given, that solve the transport problem (1.5). Assume that
# can
be written as in (1.7) for some function u. Then
u is a weak entropy solution
of the Cauchy problem (1.1).
Note that by definition
for su#ciently smooth #.
The second important ingredient of our proof is a discrete entropy inequality (cf.
Theorem 2.6 below). Here, we estimate the rate of entropy dissipation over each cell
in terms of the local oscillation of the numerical flux function. We refer to [KNR'95]
and [No'95]. It turns out that this result fits very neatly into the kinetic formulation
stated above.
Finally, our analysis relies on so-called velocity averaging lemmas first introduced
by Golse, Lions, Perthame, and Sentis [GLPS'88] and further developed by DiPerna,
Lions, and Meyer [DLM'91] and others. We refer to the survey article of Bouchut
[Bo'98] for more references and recent results.
The velocity averaging technique allows us to prove the strong compactness of a
sequence of approximate solutions u h of problems (1.2)-(1.4). The principal idea is
that the macroscopic quantity u has more regularity than
# whose v-average it is.
The following result is a variant of Theorem B in [LPT'94].
Theorem 1.2. Let 1 < p # 2 and 0 < # < 1. Choose some test function
# D(R) and define # := spt . Assume there are sequences (# h ), (m h ), and
uniformly bounded in L p (R d
-# (R d )), respectively,
such that
#t
(R d
R).
If the following nondegeneracy condition now holds:
sup (#R d meas #
then the sequence z h := # R
belongs to a compact subset of L 1
loc (R d ).
Remark 1.3. Here stands for the space of strongly measurable, integrable
functions on # taking values in X, where X is some Banach space (cf. [DU'77]),
M(R d ) is the space of bounded Radon measures, and B 1,1
-# (R d ) is a Besov space
below). The assumptions on (# h ), (m h ), and are precisely
adapted to the estimates which we will derive in section 3.
Note that the nondegeneracy condition (1.10) (which we will assume throughout)
restricts the class of admissible flux functions: f should be nonlinear. Theorem 1.2 is
another instance of the fact that the nonlinearity of a problem can have a regularizing
effect on the solutions. Think of the transport operator as a directional
derivative along the vector (1, f # ). Then the partial regularity information contained
in (1.9) is transformed into compactness of the moments of # h , that is, of z h , as long
as a condition on the distribution of the directions (1, f # ) holds true. This is the heart
of the matter.
Condition (1.10) or some variant appears in many papers dealing with averaging
lemmas (see [Ger'90, LPT'94, Li'95, Bo'98] and the references therein). In our context,
it can be seen as a generalization of an assumption formulated by Tartar [Tar'83] in
his existence proof for scalar conservation laws in one spatial dimension.
The outline of the paper is as follows. In the next section we define a class of finite
volume schemes for the scalar conservation law (1.1) and state the main convergence
theorem. This theorem is proved in section 3. In the last section we outline the proof
of the velocity averaging result, Theorem 1.2.
2. A class of finite volume schemes. Let I be a countable index set and (T i ),
a family of closed convex polygons T i # R d-1 . We assume that the T i cover the
whole space, and that the intersection of two different polygons consists of common
faces and vertices only. Define the mesh parameter h as sup i diam T i . Let (S ij ) be
the faces of be the corresponding outer unit normal vectors, and J i be their
number. Then we have
By definition, for every T i there is exactly one T k with T We denote
that polygon by T ij . Next choose
. Now the family of space-time prisms T n
I gives an unstructured mesh on #. We write S n
for faces normal to the time direction, while faces in spatial directions are called
ij .
Finally, we denote the polygon neighboring T n
i at the face S n
ij by T n
ij .
The finite volume approximation u h of the entropy solution u will be piecewise
constant on the cells of an unstructured mesh with mesh parameter h. To keep the
notation simpler, we omit the index h in what follows. We write u(x) =: u n
almost all
ij ). The update formula is given by
for approximate fluxes g n
ij to be defined in a moment. The numbers
are taken as numerical initial data. It is well known that in this case the sequence of
approximate initial data converges strongly in L 1
loc (R d-1 ) to
The class of approximate fluxes, to which the convergence result given below
applies, is the class of so-called E-fluxes as introduced by Osher [Os'84]. An E-flux is
a family (g ij
-# R such that
(i) (consistency). For all v # R
(ii) (conservativity). If
(iii) (Osher's condition). For
One example of an E-flux is Godunov's flux
min
another one is the Lax-Friedrichs flux. Every E-flux can be obtained from these two as
a convex combination (cf. Tadmor [Tad'84]). We will restrict ourselves to Godunov's
flux in all that follows. Godunov's flux can be rewritten
is a family of piecewise continuous functions
-# R such that
R. Note that
Lipschitz-continuous and monotone, i.e.,
nondecreasing in the first and nonincreasing in the second argument.
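For illustration, a minimal Python sketch of a scalar Godunov flux across one cell face; the sampled minimization over the intermediate states and the callable fn(w) = f(w)·n are implementation shortcuts assumed here, not notation taken from the paper.

```python
def godunov_flux(u, v, fn, samples=64):
    """Godunov numerical flux g(u, v) for a scalar conservation law.

    u, v : cell value on the inside and on the outside of the face
    fn   : callable w -> f(w) . n, the flux function in the direction of
           the outer unit normal of the face
    The min/max over the intermediate states is approximated by sampling
    the interval [min(u, v), max(u, v)]; for convex fluxes a closed-form
    expression could be used instead.
    """
    lo, hi = min(u, v), max(u, v)
    ws = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    values = [fn(w) for w in ws]
    return min(values) if u <= v else max(values)
```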
In case of a first-order scheme, the approximate flux is now given by
It is also possible to consider higher-order schemes, but we will not do this here. The
approximate entropy flux corresponding to the entropy U is defined as
for all v 1 , v 2 # R. Obviously, G ij is consistent and conservative, too. Moreover, we
have the compatibility relation
Here # k stands for the partial derivative with respect to the kth argument, 2.
We use the notation
Update formula (2.1) can be recast in a somewhat different form. We assume that
we are given numbers #x n
0 such that
Then we define
ij and # n
Now we can write
ij
ij , with u n+1
We refer to [No'95] for the optimal choice of the numbers # n
ij .
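The resulting explicit update of the cell averages can be sketched as follows; the mesh data layout (cell areas, face list with neighbor indices) is an assumption made for illustration, while the update itself is the standard conservative finite volume form.

```python
def fv_update(u, dt, cell_area, faces, flux):
    """One explicit first-order step
        u_i^{n+1} = u_i^n - dt/|T_i| * sum_j |S_ij| g_ij.

    u         : list of cell averages u_i^n
    cell_area : list of cell areas |T_i^n|
    faces     : list of (i, j, length, normal), with j the index of the
                neighbor across the face, or j = -1 on the outer boundary
    flux      : callable (u_in, u_out, normal) -> numerical flux g_ij
    """
    unew = list(u)
    for (i, j, length, normal) in faces:
        u_out = u[j] if j >= 0 else 0.0          # compactly supported data
        g = flux(u[i], u_out, normal)
        unew[i] -= dt / cell_area[i] * length * g
        if j >= 0:                               # conservativity: neighbor gets -g
            unew[j] += dt / cell_area[j] * length * g
    return unew
```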
Theorem 2.1. Let (u h ) be a sequence of approximate solutions of (1.1) built
from the finite volume scheme described above. Assume that
(i) The sequence (u h ) is uniformly bounded for sufficiently small h
and has uniformly compact support in #.
(ii) There exists s # 1
such that for #t := inf n #t n
lim
(iii) There exists a constant # > 0 such that for #h/ #t and all i, j, n
(iv) The nondegeneracy condition (1.10) holds.
Then a subsequence of (u h ) converges strongly in L 1
loc (#) to a weak entropy solution
of the Cauchy problem (1.1).
Remark 2.2. We first remark that convergence can be shown not only for Go-
dunov's scheme but for the whole class of E-schemes. It is also possible to treat
higher-order schemes (see [NW'97]). Higher-order means that on each cell a polynomial
reconstruction of the solution is built using the values u n
i at a given time level.
In order to avoid oscillations near discontinuities, the reconstructions are stabilized
using h-dependent limiters. Then the values of these reconstructions at fixed quadrature
points on the cell faces are used in definition (2.4) of the approximate fluxes.
This approach goes back to the monotone upstream-centered scheme for conservation
laws (MUSCL) schemes of van Leer [VL'77]. It is shown in [CHS'90, KNR'95, Gei'93]
that such schemes may indeed be higher-order accurate in space.
Remark 2.3. Uniform boundedness (for spatially higher-order schemes) was shown
in Cockburn, Hou, and Shu [CHS'90] and Geiben [Gei'93]. We will not reprove this
here. Since we assume compactly supported initial data, u h will live on a bounded
set for all schemes with a finite speed of propagation, e.g., for standard finite volume
schemes with #t # Ch for some constant C not depending on h.
Remark 2.4. Theorem 2.1 contains no explicit assumption on the regularity of the
triangulation. Combining conditions (2.10) and (2.11) one obtains a mild restriction
on the geometry of the cells, which allows them to become flat in the limit as h tends
to zero. For a detailed analysis, see [No'95, No'96].
Remark 2.5. The convergence result stated in Theorem 2.1 is not new. A similar,
somewhat more general theorem was shown in [No'95] using DiPerna's theory of measure-valued
solution. Compare also [Sz'89, CL'91, CL'93, Vi'94, CCL'94, CCL'95,
KR'94, KNR'95, No'96] for related results. What is new and presented here is the
proof given below, using the kinetic formulation of conservation laws. Note that the
nonlinearity of the flux, in the form of assumption (1.10), is required.
In the proof of Theorem 2.1 the following discrete entropy inequality, which holds
for Godunov's flux as well as for other E-schemes, plays a prominent role in the
following theorem.
Theorem 2.6. For all convex entropies U # C 2 (R) and all i, j, n
Here # := min-M#v#M U # (v).
This entropy inequality was derived in [No'95]. Analogous estimates can be found
in [KR'94] for the Lax-Friedrichs and Engquist-Osher schemes and in [KNR'95, No'95]
for (spatially) higher-order schemes.
3. Proof of Theorem 2.1 (convergence). The proof consists of two steps.
First we construct an approximate distribution function # h from the numerical solution
u h and apply the transport operator to it. We split the resulting term into three
parts and give bounds for them in various norms. In the second step we use the velocity
averaging result (1.2) to show strong compactness of the approximate solution
u h and complete the proof.
3.1. Some estimates. Let us start with the definition of the distribution func-
tion. To simplify the notation we omit the index h most of the time. Extending u by
zero to R d we have
for almost all From the Gauss-Green theorem we get
#t
where
#(|0, u N
Here dS n
i is the (d-1)-dimensional Hausdor# measure restricted to S n
dH d-1
(same for S n
Note that in our notation the contribution from some cell face S n
is counted twice: S n
ij is the jth face of the cell T n
but also the lth face of some
neighboring cell T n
. We compensated that by the factor one half in (3.3). Now we
split R into three parts:
#(|0, u N
are defined in (2.8) and
To prove the identity R only have to check that2 # i#I
ij .
CONVERGENCE OF FINITE VOLUME SCHEMES 749
This follows easily from the properties of # and For fixed ij, let kl be the unique
index pair defined by S n
kl and i #= k. Then u
kl , u n
k , and
2 and definition (1.6)) we have
Using almost everywhere (a.e.) for all
we arrive at
This proves our claim. Let us take a closer look at the three parts of R. We
have R 0 because we extended u from # to R d . Note that the first summand in (3.4)
contains the numerical initial data. The second term R 1 is a measure for the entropy
production in the scalar conservation law. It corresponds to the right-hand side (RHS)
of (1.5). Finally R 2 is the residual. It measures the numerical error. In the following,
we will write # := [-M,M ].
Lemma 3.1. The R h
are uniformly bounded in L 1 (#, M(R d )).
Proof. Measurability follows from the tensor product structure of R h
and the boundedness is immediate from our assumptions on u h , e.g.,
Lemma 3.2. R h
1 can be written as
R h
in D # (R d
for some nonnegative uniformly bounded measure m h .
Proof. We suppress the mesh index h. Clearly, to obtain (3.5) we may simply
integrate R 1 in the kinetic variable. Using overbars to indicate primitives, as in
-#
we arrive at the representation
ij .
Note that R 1 vanishes outside the interval [-M,M ]. Therefore, m n+1
we have (using (1.8) and (2.3)-(2.4))
ds
which vanishes again because of (2.9). Note that # J i
1. We conclude that
m is compactly supported in R d
[-M,M ]. Now let us fix i, n for a moment. We
choose a test function U # C 2 (R) which is convex on [-M,M ] (a convex entropy)
and apply its second derivative to m n
. Integrating by parts and using compatibility
relation (1.3) (and (1.8) again) we find
(Remember that m n+1
i has compact support.) This quantity can be controlled using
the discrete entropy inequality in Theorem 2.6. In fact, from representation (2.9) and
Jensen's inequality we obtain
So, if we choose a sequence of convex entropies U k with
a.e.,
we find from the dominated convergence theorem
(v) dv
Since this holds for all i, n we conclude that m is a nonnegative measure as claimed.
To show the boundedness of m, we use (3.6) with U(v) := 1v 2 to obtain
But for all index pairs such that
|S kl | G n
kl
because the approximate entropy flux is conservative. Hence
Furthermore, we have
Therefore, the j-sum in (3.7) drops out if we sum over all cells. The remaining
)-terms, however, appear twice with alternating signs and therefore cancel out,
too, except for those with . Since the entropy U is nonnegative we
finally arrive at
(We used (2.2) and Jensen's inequality.) The lemma is proved.
Definition 3.3. Let # 0 # D(R d ) be a nonnegative radially symmetric test function
which equals 1 on the ball B(0, 1) and vanishes outside B(0, 2). Define
for 2. Introduce the dyadic operators
#}. Then the Besov
space B p,q
s (R d ) with s # R and 1 # p, q # contains all tempered distributions on
R d such that the norm
s (R d
(R d )
(modified if more details, see Triebel [Tr'83]).
Lemma 3.4. Let
lim
-# (R d
Remark 3.5. Note that the Besov space B 1,1
-# (R d ) can be identified with the
topological dual of the closure of D(R d ) in C # (R d ) (the space of Holder continuous
Proof. Again we suppress the index h. First we show that for all i, j, n
-# (R d
We apply a test function # D(R d ) and obtain by definition of # n
with # n+1
ij the averages of # over the cell faces S n+1
i and S n
ij . Then
(R d ) ,
where
ij is the evaluation of # in the center of mass of S n
ij .
Next, we must control the L 1 -norm of # n
ij . For an arbitrary U # C 1 (R) we have
The first identity follows as above from the compatibility relation (1.3) and (1.8)
(consult also (2.5) and (2.7)). For the second we used the consistency and Lipschitz-
continuity of the approximate entropy flux G ij . To proceed we now replace the derivative
of G ij by (2.6). Since Godunov's flux is nonincreasing in the second argument,
the derivative of g ij has a sign and we can estimate
#
using the consistency of g ij and (2.4). Note that we do not assume convexity for U .
Since the measurability of # n
ij is obvious we learn that for all indices i, j, n
Now the norm of R 2 can be bounded by
-# (R d
and further, using the Cauchy-Schwarz inequality, by
i is the characteristic function of the set of indices i, n for which u n
i is nonva-
nishing. Note that by assumption, the support of the numerical solution is uniformly
bounded. These terms can be handled easily: First we have
#t
Moreover, from Theorem 2.6 with U(v) := 1v 2 we find
By definition,
Therefore (cf. (2.9))
ij
which is Jensen's inequality. We proceed as in the proof of Lemma 3.2 (cf. (3.7)) and
arrive at
-# (R d
#t#
(R d-1 ) .
Note that 1/# explodes as h # 0. But
#t#
finally we obtain
-# (R d
#t
Using assumption (2.10) we are finished.
Remark 3.6. We stop here for a moment to summarize what we have shown so
far. Since (# h ) is uniformly bounded in L # a subsequence converges weak* to some
function
#. Associated with there is a sequence (R h ) as defined above. Given
# D(#) and U # D(R) we have
#t
R h ,
The first term on the RHS goes to
For the second, we have shown in Lemma 3.2 that (m h ) is uniformly bounded and
nonnegative in the sense of measures. Extracting another subsequence, if necessary, we
have
The third term finally goes to zero in distributional
sense (even in a somewhat stronger topology) as shown in Lemma 3.4. Therefore the
m) solves the transport equation (1.5).
So far, the strategy of proof is similar to that of the first statement of Theorem
1.1 (see [LPT'94]), where it is shown that an entropy solution u leads to a nonnegative
bounded measure
m such that
# defined by (1.7) satisfies the transport equation (1.5).
What remains to be done is to prove that the nonlinear constraint (1.7) holds for the
limit of the sequence (# h ). In this case, the limit defines a function u which, according
to the second part of Theorem 1.1, is a weak entropy solution. For this we use the
velocity averaging technique and show that (some subsequence of)
strongly in L 1
loc .
3.2. End of proof. To apply Theorem 1.2, we choose a test function # which
equals 1 on the interval [-M,M ]. Then we have z using (1.8) and (3.1). Define
. Because of Lemmas 3.1 and 3.4, and since the space of measures
is continuously embedded into
-# (R d ) (cf. [Tr'83]), # h satisfies the assumptions of
Theorem 1.2. Moreover,
(R d #u h
which is uniformly bounded, too. But then Theorem 1.2 shows that u h belongs to a
compact subset of L 1
loc (R d ). Since
|#
- u h2
|,
the approximate distribution function # h converges strongly in L 1
loc
(R d
R) (up to
a subsequence). Hence, the nonlinear constraint (1.7) holds for the limit
#. From
Theorem 1.1 we conclude that u is a weak entropy solution.
Remark 3.7. One classical approach to proving strong compactness for sequences
of approximate solutions consists in establishing a uniform bound on the total variation
and then making use of Helly's theorem. For the more modern approach relying
on measure-valued solutions, as introduced by DiPerna, no such control is neces-
sary. Once one has shown consistency with the entropy condition, the L 1 -contraction
ensures compactness. The result presented in this paper lies somewhere in between
these two cases. In fact, we do need some control over the residual, but this bound is
comparatively easy to obtain, since we can choose a very weak topology.
4. Proof of Theorem 1.2 (velocity averaging). For the sake of completeness,
we would like to give an outline of proof for Theorem 1.2. We will skip most details
since the arguments are technically involved and can be found in other papers on
velocity averaging.
Let us fix some test function # D(R) and denote the RHS of (1.9) by R h . Then
we can recover # h from R h (formally) by inverting the transport operator
#)
# for all v # R, #,
(the Fourier transform is taken with respect to space-time only). But now we face
the problem that the symbol -i(#
#) becomes unbounded. We will need
a splitting. Let # D(R) be a nonnegative even test function, vanishing outside the
interval [-2, 2], with we define two operators:
|#|
#, v) (v) dv #
for some parameter # (0, #), and
|#|
#, v)
#)
(v) dv # .
Note that the inverse symbol
#) -1 appears in (4.1), but because of the
cut-o# function # it is e#ective only in the region
#,
CONVERGENCE OF FINITE VOLUME SCHEMES 755
i.e., outside a neighborhood around the singular set. Therefore, it is reasonable to
expect that B # has nice properties. Let # := spt #. Then we have the following
lemma.
Lemma 4.1. There exists a constant C not depending on # (0, #) such that
(R d
(R d
#)
for all # L p (R d
1. The function # is given by
#) := sup #,
# .
Remark 4.2. We assumed that the nondegeneracy condition (1.10) holds. It is
easy to show that in that case # 0 as # 0 [Ger'90]. As a consequence, the
function A #) for suitable # becomes small in L p -norm if we let # go to zero.
Definition 4.3. The generalized (fractional) Sobolev space H p
s (R d ) is defined
as the space of all tempered distributions such that the norm
s (R d ) := #(Id - #) s/2 # L p (R d )
stays finite. For more details consult [Tr'83].
Lemma 4.4. Let 1 < p # 2, # (0, #). Then we have for all # L p (R d
#)
(R d (R d # C # L 1 (#,L p (R d )) .
C # grows as # 0. The same estimate holds for the operator
These two lemmas are shown as in [DLM'91] (compare (4.2) and (4.3) with the estimates
(22) and (23) in that paper) with the modifications explained in the appendix
of [LPT'94]. Note that the operators B # , B #
are smoothing. We gain one derivative.
We will now prove Theorem 1.2 from these two results. First we note that the
dyadic operators S j (consult Definition 3.3) commute both with B # , B # and with
(Id - #) 1/2 . It is then a simple application of Minkowski's inequality to rewrite (4.3)
using Besov norms:
s (R d
s (R d
for all 1 < p # 2, 1 # q #, and s # R. But the operator (Id - # x ) 1/2 defines an
isomorphism (a lifting) between Besov spaces of different regularity [Tr'83, 2.3.8]:
s (R d
1+s (R d ) .
So we conclude that B # maps
s (R d
1+s (R d ). The same holds for
# . Now for any # (0, #) we have the splitting
z
Denote by z #,h
0 the first term on the RHS of (4.6) and by z #,h
1 the terms in brackets.
As already pointed out in Remark 4.2
z #,h
0 can be made arbitrarily small in L 1
loc (R d ) uniformly
with respect to h by choosing # small enough.
Moreover, we have
z #,h
1 is strongly compact in L 1
loc (R d ) for all #.
To see this, we choose a p > 1, p near 1 such that the number #
than 1 (which is always possible since # < 1), and use the continuous embedding
-# (R d
-# (R d )
(cf. [Tr'83, 2.3.2 and 2.7.1]) to show that
are uniformly bounded in
-# (R d )).
We conclude from (4.4) and (4.5) that z #,h
1 is uniformly bounded in some Besov space
with strictly positive regularity and therefore relatively compact in L 1
loc (R d ). But
then the same is true for the sequence (z h ). This proves our claim.
--R
"Equation Cinetiques,"
An error estimate for finite Volume
Convergence of the finite Volume
estimates for finite element methods for scalar conservation laws
The Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws IV: The multidimensional case
Convergence of finite di
Convergence of finite di
Measure valued solutions to conservation laws
regularity of velocity averages
Convergence of MUSCL-Type Upwind Finite Volume Schemes on Unstructured Triangular Grids
Regularity of the moments of the solution of a transport equation
Convergence of upwind finite Volume
Convergence of higher order upwind finite Volume
First order quasilinear equations with several independent variables
Accuracy of some approximate methods for computing the weak solutions of a first-order quasi-linear equation
Hyperbolic systems of conservation laws and the mathematical theory of shock waves
Kinetic formulation of multidimensional scalar conservation laws and related equations
Convergence of higher order finite Volume
A note on entropy inequalities and error estimates for higher order accurate finite Volume
A New Convergence Proof for Finite Volume
the entropy condition
Convergence of a shock capturing streamline di
Numerical viscosity and the entropy condition for conservative di
The compensated compactness method applied to systems of conservation laws
Theory of Function Spaces
Convergence and error estimates in finite Volume
--TR | conservation laws;finite volume scheme;velocity averaging;kinetic formulation;convergence;entropy solutions;discrete entropy inequality |
343587 | Multigrid for the Mortar Finite Element Method. | A multigrid technique for uniformly preconditioning linear systems arising from a mortar finite element discretization of second order elliptic boundary value problems is described and analyzed. These problems are posed on domains partitioned into subdomains, each of which is independently triangulated in a multilevel fashion. The multilevel mortar finite element spaces based on such triangulations (which need not align across subdomain interfaces) are in general not nested. Suitable grid transfer operators and smoothers are developed which lead to a variable V-cycle preconditioner resulting in a uniformly preconditioned algebraic system. Computational results illustrating the theory are also presented. | Introduction
The mortar finite element method is a non-conforming domain decomposition
technique tailored to handle problems posed on domains that are partitioned into
independently triangulated subdomains. The meshes on different subdomains need
not align across subdomain interfaces. The flexibility this technique offers by allowing
sub-structures of a complicated domain to be meshed independently of each
other is well recognized. In this paper we consider preconditioned iteration for the
solution of the resulting algebraic system. Our preconditioner is a non-variational
multigrid procedure.
The mortar finite element discretization is a discontinuous Galerkin approxima-
tion. The functions in the approximation subspaces have jumps across subdomain
interfaces and are standard finite element functions when restricted to the sub-
domains. The jumps across subdomain interfaces are constrained by conditions
associated with one of the two neighboring meshes. Bernardi, Maday and Patera
(see [2, 3]) proved the coercivity of the associated bilinear form on the mortar finite
element space, thus implying existence and uniqueness of solutions to the discrete
problem. They also showed that the mortar finite element method is as accurate
as the usual finite element method. Recently, stability and convergence estimates
for an hp version of the mortar finite element method were proved [16].
When each subdomain has a multilevel mesh, preconditioners for the linear system
arising from the mortar discretization can be developed by multilevel tech-
niques. A hierarchical preconditioner with conditioning which grows like the square
1991 Mathematics Subject Classification. 65F10, 65N55, 65N30.
Key words and phrases. mortar, finite element method, multigrid, V-cycle, preconditioning,
domain decomposition.
This work was supported by the National Science Foundation under grant DMS 9626567, the
Environmental Protection Agency under grant R 825207 and the State of Texas under ARP/ATP
grant 010366-168.
c
fl0000 American Mathematical Society
of the number of levels is described in [8]. In this paper, we show that a variable
V-cycle may be used to develop a preconditioned system whose condition number
remains bounded independently of the number of levels.
One of the difficulties in constructing a multigrid preconditioner for the mortar
finite element method arises due to the fact that the multilevel mortar finite
element spaces are, in general, not nested. Multigrid theory for nonnested spaces
[5] may be employed to construct a variable V-cycle preconditioner, provided a
suitable prolongation operator can be designed. We construct such a prolongation
operator and prove that it satisfies the "regularity and approximation" property
(Condition (C.2)) required for application of the multigrid theory.
The next difficulty is in the design of a smoother. Our smoother is based on the
point Jacobi method. Its analysis is nonstandard since the constraints at subdomain
interface gives rise to mortar basis functions with non-local support. We prove that
these basis functions decay exponentially away from their nodal vertex. This leads
to a strengthened Cauchy-Schwarz inequality which is used to verify the smoothing
hypothesis (Condition (C.1)).
The remainder of the paper is organized as follows. Section 2 introduces most of
the notation in the paper. Section 3 describes the multilevel mortar finite element
spaces. In Section 4 the variable V-cycle multigrid algorithm is given and the
main result (Theorem 4.1) is stated and proved. Section 5 provides proofs of some
technical lemmas. Implementation issues are considered in Section 6 while the
results of numerical experiments illustrating the theory are given in Section 7.
2. Preliminaries
In this section, we provide some preliminaries and notation which will be used in
the remainder of the paper. In addition, we describe the continuous problem and
impose an assumption on the regularity of its solution.
Let\Omega be an open subset of the plane. For non-negative integers s; the Sobolev
space H s
(see [7, 11]) is the set of functions in L
with distributional derivatives
up to order s also in L
If s is a positive real number between non-negative
integers
s(\Omega\Gamma is the space obtained by interpolation (by the real
method [13]) between H
The Sobolev norm on H
s(\Omega\Gamma is denoted
by k\Deltak
s;\Omega and the corresponding Sobolev seminorm is denoted by j\Deltaj
and a segment fl contained
in\Omega ; the trace of OE on fl is denoted by OEj
We will often write kOEk r;fl and jOEj r;fl for the H r (fl) norm and seminorm respectively,
of the trace OEj
Assume
that\Omega is connected and that its boundary, @ is polygonal. Let
@\Omega be
split into
@\Omega D and
@\Omega N such that
[@\Omega D and
@\Omega D is empty and
assume that
@\Omega D has nonzero measure. Denote by V the subspace of the Sobolev
space H 1
(\Omega\Gamma consisting of functions in H
whose trace on
@\Omega D is zero. Denote
by V 0 the dual of the normed linear space V : The dual norm k\Deltak
\Gamma1;\Omega is defined by
kuk
1;\Omega
denotes the duality pairing. Note that L
2(\Omega\Gamma is contained in V 0 if we
identify the functional ! v; OE ?= (v; OE), for all v 2 L
2(\Omega\Gamma6 Here (\Delta; \Delta) denotes the
inner product in L
s;\Omega is the norm on the space defined
by interpolation between V 0 and L
We seek an approximate solution to the problem
where A(\Delta; \Delta) is bilinear form on V \Theta V defined by
Z
\Omega
and F is a given continuous linear functional on H
This problem has a unique
solution. For the mortar finite element method, we restrict our attention to F of
the form
Z
\Omega
This is the variational form of the boundary value problem
@U
Although our results are stated for this model problem, extension to more general
second order elliptic partial differential equations with more general boundary
conditions are straightforward.
We will need to assume some regularity for solutions of Problem (2.1). We
formalize it here into Assumption (A.1).
There exists a fi in the interval (1=2; 1] for which
\Gamma1+fi;\Omega holds for solutions U to the problem (2.1).
This is known to hold for wide class of domains [11, 12]. Note that we do not
require full elliptic regularity
3. The Mortar Finite Element Method
In this section, we first provide notation for sub-domains and triangulations.
Next multilevel mortar finite element spaces are introduced and the mortar finite
element problem is defined.
Partition $\Omega$ into non-overlapping polygonal subdomains $\Omega_1, \ldots, \Omega_K$. The
interface $\Gamma = \bigcup_i \partial\Omega_i \setminus \partial\Omega$ is broken into a set of disjoint open straight line segments
$\gamma_k$, each of which is contained in $\partial\Omega_i \cap \partial\Omega_j$ for some $i$ and $j$. The collection of these
edges will be denoted by $Z$.
Each $\Omega_i$ is triangulated to produce a quasi-uniform mesh $T^i_1$ of size $h_1$. The
triangulations generally do not align at the subdomain interfaces. We assume that
the endpoints of each interface segment in $Z$ are vertices of $T^p_1$ and $T^q_1$, where $p$
and $q$ are such that $\gamma \subset \partial\Omega_p \cap \partial\Omega_q$.
Denote the global mesh $\bigcup_i T^i_k$ by $T_k$. To set
up the multigrid algorithm, we need a sequence of refinements of $T_1$. We refine the
triangulation $T_1$ to produce $T_2$ by splitting each triangle of $T_1$ into four triangles
by joining the mid-points of the edges of the triangle. The triangulation $T_2$ is
then quasi-uniform of size $h_2 = h_1/2$. Repeating this process, we get a sequence of
triangulations $T_1, \ldots, T_J$, each quasi-uniform of size $h_k = 2^{1-k} h_1$.
We next define the mortar finite element spaces following [1, 2, 3, 16] (our notation
is close to that in [16]). First, we define spaces $\tilde V$ and $\tilde M_k$ by
$$\tilde V = \{ v \in L^2(\Omega) : v|_{\Omega_i} \in H^1(\Omega_i) \text{ for each } i, \ v = 0 \text{ on } \partial\Omega_D \}$$
and
$$\tilde M_k = \{ v \in \tilde V : v|_{\Omega_i} \text{ is linear on each triangle of } T^i_k \}.$$
Throughout this paper we will use piecewise linear finite element spaces for convenience
of notation. The results extend to higher order finite elements without
difficulty [10].
For every straight line segment $\gamma \in Z$, there is an $i$ and $j$ such that $\gamma \subset \partial\Omega_i \cap \partial\Omega_j$.
Assign one of $i$ and $j$ to be the mortar index, $M(\gamma)$, and the other then is the non-mortar
index, $NM(\gamma)$. Let $\Omega_{M(\gamma)}$ denote the mortar domain of $\gamma$ and $\Omega_{NM(\gamma)}$ be
the non-mortar domain of $\gamma$. For every $u \in \tilde V$, define $u^M_\gamma$ and $u^{NM}_\gamma$ to be the trace
of $u|_{\Omega_{M(\gamma)}}$ on $\gamma$ and the trace of $u|_{\Omega_{NM(\gamma)}}$ on $\gamma$, respectively.
We now define two discrete spaces $S_k(\gamma)$ and $W_k(\gamma)$ on an interface segment
$\gamma$. Every $\gamma \in Z$ can be divided into sub-intervals in two ways: by the vertices of
the mesh in the mortar domain of $\gamma$ and by those of the non-mortar domain of $\gamma$.
Consider $\gamma$ as partitioned into sub-intervals by the vertices of the triangulation on the
non-mortar side. Let these vertices be denoted by $x^i_{k,\gamma}$, $i = 0, \ldots, N$. Denote the sub-intervals
$[x^{i-1}_{k,\gamma}, x^i_{k,\gamma}]$ by $\omega_{k,i}$; $\omega_{k,1}$ and $\omega_{k,N}$ are the sub-intervals
that are at the ends of $\gamma$. The discrete space $S_k(\gamma)$ is defined as follows:
$$S_k(\gamma) = \{ v : v \text{ is linear on each } \omega_{k,i},\ v \text{ is constant on } \omega_{k,1} \text{ and on } \omega_{k,N},\ \text{and } v \text{ is continuous on } \gamma \}.$$
We also define the space $W_k(\gamma)$ by
$$W_k(\gamma) = \{ v : v \text{ is linear on each } \omega_{k,i},\ v \text{ vanishes at the end-points of } \gamma,\ \text{namely } x^0_{k,\gamma} \text{ and } x^N_{k,\gamma},\ \text{and } v \text{ is continuous on } \gamma \}.$$
The multilevel mortar finite element spaces are now defined by
$$M_k = \Big\{ v \in \tilde M_k : \int_\gamma (v^M_\gamma - v^{NM}_\gamma)\,\lambda\, ds = 0 \ \text{for all } \lambda \in S_k(\gamma), \text{ on each } \gamma \in Z \Big\}.$$
The "mortaring" is done by constraining the jump across interfaces by the integral
equality above. We will call this constraint the weak continuity of functions in $M_k$.
Note that though the spaces $\{\tilde M_k\}$ are nested,
$\tilde M_k \subset \tilde M_{k+1}$,
the multilevel spaces $\{M_k\}$ are generally non-nested.
We next state the error estimates for the mortar finite element method. The
mortar finite element approximation of the solution $U$ of Problem (2.1) (with $F$
given by (2.2)) is the function $U_k \in M_k$ satisfying
$$\tilde A(U_k, \phi) = \int_\Omega f \phi \, dx \quad \text{for all } \phi \in M_k, \qquad (3.3)$$
where $\tilde A(u, v)$ is the bilinear form on $\tilde V \times \tilde V$ defined by
$$\tilde A(u, v) = \sum_i \int_{\Omega_i} \nabla u \cdot \nabla v \, dx.$$
It is shown in [2] that $|||u|||^2 \le C\,\tilde A(u, u)$ for all $u \in M_k$, where $|||v|||^2 = \sum_i \|v\|^2_{1,\Omega_i}$.
Here and in the remainder of this paper, we will use
$C$ to denote a generic constant independent of $h_k$ which can be different at different
occurrences. It follows that (3.3) has a unique solution. It is also known (see [2])
that the mortar finite element approximation satisfies
$$|||U - U_k||| \le C h_k^{\beta} \|U\|_{1+\beta,\Omega}. \qquad (3.4)$$
We now define a projection, $\Pi_{k,\gamma} : L^2(\gamma) \to W_k(\gamma)$, which will be very useful
in our analysis. For $u \in L^2(\gamma)$, it can be shown [3] that there exists a unique $v \in W_k(\gamma)$ satisfying
$$\int_\gamma v \lambda \, ds = \int_\gamma u \lambda \, ds \quad \text{for all } \lambda \in S_k(\gamma).$$
We define $\Pi_{k,\gamma} u$ to be $v$. This projection is known to be stable in $L^2(\gamma)$ and $H^1(\gamma)$,
i.e.,
$$\|\Pi_{k,\gamma} u\|_{s,\gamma} \le C \|u\|_{s,\gamma}, \quad s = 0, 1, \qquad (3.6)$$
under some weak assumptions on meshes (see [16]) which hold for the meshes
defined above.
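
The projection above is defined by a small square linear system on each interface segment. The following is a minimal numpy sketch of that computation for one segment, not taken from the paper: the node locations, function names, and the two-point Gauss rule are illustrative assumptions (the rule integrates products of piecewise linears exactly; the right-hand side integral of a general $u$ is only approximated).

```python
import numpy as np

def hat(x, nodes, i):
    """Standard piecewise-linear hat function associated with nodes[i]."""
    x = np.asarray(x, dtype=float)
    left = nodes[i - 1] if i > 0 else nodes[i]
    right = nodes[i + 1] if i < len(nodes) - 1 else nodes[i]
    y = np.zeros_like(x)
    if i > 0:
        m = (x >= left) & (x <= nodes[i])
        y[m] = (x[m] - left) / (nodes[i] - left)
    if i < len(nodes) - 1:
        m = (x >= nodes[i]) & (x <= right)
        y[m] = (right - x[m]) / (right - nodes[i])
    return y

def trial_basis(x, nodes, i):
    # W_k(gamma): hats at interior nodes, vanishing at the segment end-points.
    return hat(x, nodes, i)

def test_basis(x, nodes, i):
    # S_k(gamma): like the hats, but constant (= 1) on the first and the last
    # sub-interval for the two basis functions nearest the end-points.
    y = hat(x, nodes, i)
    if i == 1:
        y = np.where(x <= nodes[1], 1.0, y)
    if i == len(nodes) - 2:
        y = np.where(x >= nodes[-2], 1.0, y)
    return y

def mortar_projection(u, nodes):
    """Return the interior nodal values of Pi_{k,gamma} u in W_k(gamma)."""
    n = len(nodes) - 2                       # number of interior nodes
    gp = np.array([-1.0, 1.0]) / np.sqrt(3)  # 2-point Gauss rule per sub-interval
    B = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(len(nodes) - 1):
        a, c = nodes[e], nodes[e + 1]
        xq = 0.5 * (a + c) + 0.5 * (c - a) * gp
        wq = 0.5 * (c - a) * np.ones_like(xq)
        for i in range(1, n + 1):
            ti = test_basis(xq, nodes, i)
            b[i - 1] += np.sum(wq * ti * u(xq))
            for j in range(1, n + 1):
                B[i - 1, j - 1] += np.sum(wq * ti * trial_basis(xq, nodes, j))
    return np.linalg.solve(B, b)

nodes = np.linspace(0.0, 1.0, 9)            # non-mortar vertices on gamma
coeffs = mortar_projection(np.sin, nodes)   # projection of sin onto W_k(gamma)
```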
The projector $\Pi_{k,\gamma}$ is clearly related to the weak continuity condition. Let $\{y^j_k\}$
denote the nodes of $T_k$ and let the operator $E_k : \tilde M_k \to M_k$ be defined as follows: $E_k\tilde u$
has the same nodal values as $\tilde u$ at every node that is not an interior non-mortar node of
some $\gamma \in Z$, while at the interior non-mortar nodes of each $\gamma \in Z$ its values are those of
$\Pi_{k,\gamma}\tilde u^M_\gamma$ (3.8) (also see Figures 1-4).
It is easy to see that if $\tilde u$ is in $\tilde M_k$, then $E_k\tilde u$ is an element of $M_k$.
We next define a basis for $M_k$. Let $\{\tilde\phi^i_k\}$ be the nodal basis for $\tilde M_k$.
There is more than one basis element associated with a node which appears
in multiple subdomains. The basis for $M_k$ consists of functions of the form
$$\phi^i_k = E_k \tilde\phi^i_k. \qquad (3.9)$$
For every vertex $y^l_k$ located in the open segment $\gamma \in Z$ and belonging to the non-mortar
side mesh, the corresponding $\phi^l_k$ as defined above is zero. Every remaining
vertex $y^l_k$ leads to a nonzero $\phi^l_k$, since $\phi^l_k$ and $\tilde\phi^l_k$ have the same nonzero value at
$y^l_k$. Also, the values of $\phi^l_k$ and $\tilde\phi^l_k$ at all nodes which are not nodes from the non-mortar
mesh lying in the interior of some $\gamma \in Z$ are the same. This implies that the nonzero
functions in $\{\phi^i_k\}$ are linearly independent. It is not difficult to check that these
also form a basis for $M_k$. Since the value of $\phi^l_k$ at $y^l_k$ is one and all other $\phi^i_k$
are zero there, these functions, in fact, form a nodal basis. Denote by $N_k$ the total number of nonzero
functions in $\{\phi^i_k\}$.
[Figures 1-4, illustrating the action of $E_k$. Figure 1: two subdomains with meshes that do not align at the interface. Figure 2: a discontinuous $\tilde u$ which is 1 on a mortar node and 0 on the remaining nodes. Figure 3: the thick line shows $\tilde u^M_\gamma$, the thin line shows $\Pi_{k,\gamma}\tilde u^M_\gamma$. Figure 4: the plot shows $E_k\tilde u$, obtained by extending $\Pi_{k,\gamma}\tilde u^M_\gamma$ as described by (3.8).]
We now re-index $\{\tilde\phi^i_k\}$ in such a way that every nonzero $\phi^i_k$ is indexed by
$i = 1, \ldots, N_k$, and we denote the corresponding vertices by $\{y^i_k\}$ in this new ordering.
Now that we have a nodal basis for $M_k$, we may speak of the corresponding
vertices of $T_k$ as degrees of freedom for $M_k$. Consider an interface segment $\gamma \in Z$.
All vertices on $\gamma$ are degrees of freedom except: (i) those on $\partial\Omega_D$; and (ii) those
that lie in the interior of $\gamma$ and are from the non-mortar mesh. These are the vertices
excluded in the re-indexing above.
4. Multigrid algorithm for the Mortar FEM
We will apply multigrid theory for non-nested spaces [5] to construct a variable V-cycle
preconditioner. Before giving the algorithm, we define a prolongation operator
and smoother. Later in this section, we will prove that our algorithm gives a
preconditioner which results in a preconditioned system with uniformly bounded
condition number.
First let us establish some notation: $A_k$ will denote the operator on $M_k$ generated
by the form $\tilde A(\cdot,\cdot)$, i.e., $A_k$ is defined by
$$(A_k u, v) = \tilde A(u, v) \quad \text{for all } u, v \in M_k.$$
The largest eigenvalue of $A_k$ is denoted by $\lambda_k$. For each basis element $\phi^i_k$, we define
$M^i_k$ to be the one dimensional subspace of $M_k$ spanned by $\phi^i_k$. Then $M_k = \sum_i M^i_k$
provides a direct sum decomposition of $M_k$.
4.1. Smoothing and Prolongation operators. We will use a smoother $R_k$ given
by a scaled Jacobi method, i.e.,
$$R_k = \alpha \sum_i (A^i_k)^{-1} Q^i_k, \qquad (4.1)$$
where $\alpha$ is a positive constant to be chosen later. Here, $A^i_k$ and $Q^i_k$
are defined by $(A^i_k u, v) = \tilde A(u, v)$ for all $u, v \in M^i_k$
and by the $L^2$-orthogonal projection onto $M^i_k$,
respectively. $R_k$ is symmetric in the $(\cdot,\cdot)$ inner-product.
It will be proved in Section 5 that
(C.1): There exists a positive number $C_R$ independent of $k$ such that
$$\frac{\|u\|^2_{0,\Omega}}{\lambda_k} \le C_R\,(R_k u, u) \quad \text{for all } u \in M_k. \qquad (4.2)$$
In addition, $I - R_k A_k$ is non-negative.
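
In the nodal basis, the scaled Jacobi smoother of (4.1) acts on the vector of values $(r, \phi^i_k)$ by a weighted division with the diagonal stiffness entries $\tilde A(\phi^i_k, \phi^i_k)$. The sketch below is an illustrative rendering of that matrix action, not the authors' code; the argument and variable names are assumptions.

```python
import numpy as np

def scaled_jacobi_smoother(A_matrix, alpha):
    """Matrix action of R_k: coefficients = alpha * (r, phi_i) / A~(phi_i, phi_i)."""
    d = np.asarray(A_matrix.diagonal(), dtype=float).copy()
    def apply(r_dual):            # r_dual holds the values (r, phi_i)
        return alpha * r_dual / d
    return apply

# usage sketch: smooth = scaled_jacobi_smoother(Ak, alpha=0.5)
#               correction = smooth(residual_dual_vector)
```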
We now define "prolongation operators" I
Clearly, I k u needs to satisfy the weak continuity constraint (see Definition 3.2).
We define I k u by:
I k
In the next section we show that I k satisfies:
There exists a constant C fi independent of k such that
for all u in
Here P k is the e
A-adjoint of I k ; i.e.,
e
A(u; I k+1 OE) for all
Condition (C.2) is verified using the regularity of the underlying partial differential
equation.
4.2. The algorithm. Let $m(k)$, $k = 1, \ldots, J$, be positive integers depending on $k$.
The variable V-cycle preconditioner $B_k$ for $A_k$ is defined as follows:
Algorithm 4.2:
1. For $k = 1$, set $B_1 = A_1^{-1}$.
2. For $k > 1$, $B_k g$ is defined recursively by:
(a) Set $x^0 = 0$.
(b) Define $x^l$, for $l = 1, \ldots, m(k)$, by $x^l = x^{l-1} + R_k(g - A_k x^{l-1})$.
(c) Set $y^{m(k)} = x^{m(k)} + I_k q$, where $q$ is given by $q = B_{k-1}\big[I^t_k(g - A_k x^{m(k)})\big]$, with $I^t_k$ the adjoint of $I_k$ with respect to the $(\cdot,\cdot)$ inner products.
(d) Define $y^l$ for $l = m(k)+1, \ldots, 2m(k)$ by $y^l = y^{l-1} + R_k(g - A_k y^{l-1})$, and set $B_k g = y^{2m(k)}$.
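
A schematic recursive implementation of the variable V-cycle above is sketched below. It is an illustration only, not the authors' code: the operator applications are passed in as callables, and `restrict[k]` stands for the restriction used in step (c).

```python
import numpy as np

def vcycle(k, g, apply_A, smoother, prolong, restrict, coarse_solve, m):
    """Return B_k g for the variable V-cycle preconditioner of Algorithm 4.2."""
    if k == 1:
        return coarse_solve(g)                 # B_1 = A_1^{-1}
    x = np.zeros_like(g)
    for _ in range(m(k)):                      # steps (a)-(b): pre-smoothing
        x = x + smoother[k](g - apply_A[k](x))
    r = restrict[k](g - apply_A[k](x))         # step (c): coarse-grid correction
    q = vcycle(k - 1, r, apply_A, smoother, prolong, restrict, coarse_solve, m)
    y = x + prolong[k](q)
    for _ in range(m(k)):                      # step (d): post-smoothing
        y = y + smoother[k](g - apply_A[k](y))
    return y

# usage sketch, with J levels and m(k) = 2**(J - k):
#   u = vcycle(J, g, apply_A, smoother, prolong, restrict, coarse_solve,
#              m=lambda k: 2 ** (J - k))
```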
We make the usual assumption on $m(k)$ (cf. [5]):
(A.2): The number of smoothings $m(k)$ increases as $k$ decreases in such a way that
$$\beta_0\, m(k) \le m(k-1) \le \beta_1\, m(k)$$
holds with $1 < \beta_0 \le \beta_1$.
Typically $\beta_1$ is chosen so that the total work required for a multigrid cycle is no
greater than the work required for application of the stiffness matrix on the finest
level. This condition is satisfied, if for instance, $m(k) = 2^{J-k}$.
The following theorem is the main result of this paper.
Theorem 4.1. Assume that (A.1) and (A.2) hold. There exist an $\alpha$ and $M > 0$
independent of $J$ such that
$$\eta_0\, \tilde A(u,u) \le \tilde A(B_J A_J u, u) \le \eta_1\, \tilde A(u,u) \quad \text{for all } u \in M_J,$$
where $\eta_0 = M/(M + m(J)^{\beta/2})$ and $\eta_1 = (M + m(J)^{\beta/2})/m(J)^{\beta/2}$.
The theorem shows that $B_J$ is a uniform preconditioner for the linear system
arising from mortar finite element discretization using $M_J$, even if $m(J) = 1$. Increasing
$m(J)$ gives a somewhat better rate of convergence but increases the cost
of applying $B_J$. It suffices to choose $\alpha$ above so that $\alpha \le 1/C_1$, where $C_1$ is as in
Lemma 4.4.
We use the following lemmas to prove Theorem 4.1. Their proofs will be given
in Section 5. First we state a lemma that is a consequence of regularity which will
be used in the proof of Condition (C.2).
Lemma 4.1. If (A.1) holds, then
0;\Omega
e
holds for all u in M
The next three lemmas are useful in analyzing the smoothing operator. We begin
with a lemma from the theory of additive preconditioners.
Lemma 4.2. Let the space V be a sum of subspaces
be a symmetric positive definite operator on V i and Q i be the L 2 projection
holds for all u in V:
Lemma 4.2 may be found stated in a different form in [14, Chapter 4] and we
do not prove it here. The following two lemmas are used in the proof of Condition
(C.1).
Lemma 4.3. For $R_k$ defined by (4.1), there exists a constant $C_R$ independent
of $k$ such that (4.2) holds for all $u$ in $M_k$.
Lemma 4.4. For all $u$ in $M_k$, there is a number $C_1$ not depending on $J$ such that
$$\sum_i \tilde A(u_i, u_i) \le C_1\, \tilde A(u, u),$$
where $u = \sum_i u_i$, $u_i \in M^i_k$, is the nodal basis decomposition.
We now prove the theorem.
Proof of Theorem 4.1: We apply the theorem for variable V-cycle in [4, Theorem
4.6]. This requires verification of Conditions (C.1) and (C.2).
Because of Lemma 4.3, (C.1) follows if we show that I \Gamma R k A k is non-negative,
i.e., for all
This is equivalent to showing that for all
(R
Fix
k be its nodal basis decomposition. Applying
Lemma 4.2 gives
(R
ff
ff
e
The non-negativity of I \Gamma R k A k follows provided that ff is taken to be less than or
equal to 1=C 1 where C 1 is as in Lemma 4.4.
Condition (C.2) is immediately seen to hold from Lemma 4.1. Indeed,
e
e
Here we have used the fact that $\lambda_k \le C h_k^{-2}$. This proves (C.2) and thus completes
the proof of the theorem. 2
5. Proof of the lemmas
As a first step in proving Lemma 4.1, we prove that the operators fI k g are
bounded operators with bound independent of k. After proving Lemma 4.1, we
state and prove two lemmas used in the proof of Lemmas 4.3 and 4.4.
Lemma 5.1. There exists a constant C independent of k such that
for all
Proof: Fix . By definition, I k
on every interior vertex of the mesh
The above sum is taken over the vertices y i
k of
the\Omega NM(fl) mesh that lie on fl:
Here and elsewhere - denotes equivalence with constants independent of h k and
denotes the L 2 (fl) norm of the nonmortar trace of E k;fl u: By the L 2
stability of \Pi k;fl ,
Since u is in M denoting u M
by e; we have
where (\Delta; \Delta) fl denotes the L 2 (fl) inner-product. Applying the Cauchy-Schwarz inequality
to the right hand side, we have
where the last inequality follows from the approximation properties of S
Thus,
Applying the triangle inequality, an inverse inequality, and a trace theorem yields
That I k is bounded now follows by the triangle inequality, (5.1) and (5.5). 2
Proof of Lemma 4.1: The proof is broken into two parts. First, we prove that
holds for all u in M k\Gamma1 . Next, we show that
holds for all u in M k : Clearly the lemma follows using (5.7) to bound the first term
on the right hand side of (5.6) and the fact that - k - Ch \Gamma2
k .
Fix u in M k and set
e
be the solution of
Now u is the mortar finite element approximation to w from M k and hence by (3.4),
By the triangle inequality,
To estimate the second term of (5.10), we start by writing P
solves
e
The remainder v 2 satisfies
e
Here I denotes the identity operator. Then, by Lemma 5.1 and (3.4),
For the last term in (5.12), we proceed as in the proof of Lemma 5.1 (see (5.1))
to get
Setting
we have as in (5.3),
Let Q denote the L 2 projection into S Because of the approximation properties
of S Trivially, we also have that
Now since w is in H
1=2;@\Omega M(fl)
1=2;@\Omega NM(fl)
Since restriction to boundary is a continuous operator this becomes
Thus,
where we have used (3.4) in the last step. This gives (recall (5.13))
1+fi;\Omega which estimates the last term in (5.12).
For the middle term in (5.12), we find from (5.11) that
As in Lemma 5.1 (see (5.2) through (5.5)), we get that
This proves that jjjv 2 jjj - Ch k kA k uk
Combining the above estimates gives
Using this in (5.10) and applying Assumption (A.1) proves (5.6).
We next prove (5.7). Fix u in M k . Since k\Deltak
\Gamma1+fi;\Omega is the norm on the space in
the interpolation scale between V 0 and L
\Gamma1;\Omega
Thus it suffices to prove that
Given / in V , we will construct /
and
Assuming such a / k exists, we have
k/k
k/k
1;\Omega
k/k
1;\Omega
Inequality (5.15) then follows from
\Gamma1;\Omega - sup
0;\Omega k/k
1;\Omega
e
k/k
1;\Omega k/k
1;\Omega
e
To complete the proof, we need only construct / k satisfying (5.16) and (5.17).
For
M k be the L 2 projection of / into f
k . This projection is local
on\Omega i and satisfies (see [6]),
and
0;\Omega
To construct / k , we modify e
so that the result is in M k , i,e.,
We will now show that / k defined above satisfies (5.16). We start with
e
Using (5.18) on the first term on right hand side and using (5.1) on the remaining,
we get
e
Note that
e
by (5.2). Since / is in H
its trace on fl is in L 2 (fl): Moreover, / M
are equal. Hence,
e
where in the last step we have used a trace inequality. Using (5.18) and (5.19), we
then have,
e
k/k
Combining (5.21) and (5.20) gives (5.16).
It now remains only to prove (5.17). By the triangle inequality,
0;\Omega
0;\Omega
The first term on the right hand side is readily bounded as required by (5.19). For
the second term, as in (5.1),
0;\Omega
e
Inequality (5.17) now follows immediately from (5.21). This completes the proof of
Lemma 4.1. 2
We are left to prove the lemmas involving the smoother R k . A critical ingredient
in this analysis involves the decay properties of the projector \Pi k;fl away from the
support of the data. Specifically, we use the following lemma:
Lemma 5.2. Let $v \in L^2(\gamma)$ be supported on $\sigma \subset \gamma$. Then there is a constant $c$ such
that for any set $\tau \subset \gamma$ disjoint from $\sigma$,
$$\|\Pi_{k,\gamma} v\|_{0,\tau} \le C\, e^{-c\,\mathrm{dist}(\tau,\sigma)/h_k}\, \|v\|_{0,\gamma},$$
where $\mathrm{dist}(\tau,\sigma)$ is the distance between the sets $\tau$ and $\sigma$.
Remark 5.1 Estimates similar to those in the above lemma for the L 2 -orthogonal
projection were given by Descloux [9]. Note that \Pi k;fl is not an L 2 -orthogonal
projection. For completeness, we include a proof for our case which is a modification
of one given in [18, Chapter 5].
Proof: Recall that a fl 2 Z is partitioned into sub-intervals ! k;i by the vertices
of the mesh
as the union of those
sub-intervals which intersect the support of v: Following the presentation in [18],
define r recursively, by letting r m be the union of those sub-intervals
of fl that are not in [l!m r l and which are neighbors of the sub-intervals of this set
(see
Figure
5). Further, let
We will now show that the L 2 norm of \Pi k;fl v on dm can be bounded by a constant
times its L 2 norm on r m : For all - 2 S k (fl) with support of - disjoint from r 0 ; we
have
Let -m 2 S k (fl), for m - 1, be defined by
ae
holds with -m in place of
-: Moreover, -m it vanishes on fl n
Z
dmn"
ds
Z
dm ""
ds
Z
rm
-m \Pi k;fl v ds:
Note that on each sub-interval of dm " "; -m is constant, and it takes the value
of \Pi k;fl v at the interior endpoint. Also, on the sub-intervals of r m ; -m is either
identically zero (if that sub-interval is part of r m " ") or takes the value of \Pi k;fl v on
[Figure 5. An interface segment.]
one endpoint and zero on the other endpoint. From these observations, it is easy
to conclude that Z
dm ""
-m \Pi k;fl v ds - C k\Pi k;fl vk 2
and Z
rm
ds - C k\Pi k;fl vk 2
Thus,
Z
dmn"
ds
Z
dm ""
-m \Pi k;fl v ds
Z
rm
-m \Pi k;fl v ds - C k\Pi k;fl vk 2
Letting
0;dm ; the above inequality can be rewritten as q m -
It immediately follows that
'm
The lemma easily follows from (3.6) and the observation that the distance between
- and oe is O(mh). 2
Proof of Lemma 4.3: Fix
k be the nodal basis
decomposition. By Lemma 4.2,
(R
ff
l
ff
l
Note that the L 2 norm of every basis function OE i
k is O(h 2
Indeed, this is a standard
estimate for those basis functions that coincide with a usual finite element nodal
basis function on a subdomain. For the remaining basis functions, this follows from
the exponential decay given by Lemma 5.2. Thus,
(R
ff
On each
subdomain\Omega j we have that
k@
e
Combining the above inequalities gives
(R
The above inequality is equivalent to (4.2) and thus completes the proof of the
lemma.2
The proof of Lemma 4.4 requires a strengthened Cauchy-Schwarz inequality
which we provide in the next lemma. First, we introduce some notation. Define
the index sets e
k and N fl
k by
e
"\Omega NM(fl) g
Also denote the set [fN fl
Lemma 5.3. Let $\phi^i_k$ and $\phi^j_k$ be two basis functions of $M_k$, with $y^i_k$
and $y^j_k$ the corresponding vertices. Then,
$$\tilde A(\phi^i_k, \phi^j_k) \le C\, e^{-c\,|y^i_k - y^j_k|/h_k}\, \tilde A(\phi^i_k, \phi^i_k)^{1/2}\, \tilde A(\phi^j_k, \phi^j_k)^{1/2},$$
where $C$ and $c$ are constants independent of $k$.
Proof: First, consider the case when y i
k and y j
k are on a same open interface
segment denote the set of triangles that have at least one vertex on
fl and are contained
denote the set of triangles that
have at least one vertex on fl and are contained
e
The first sum obviously satisfies the required inequality, because this sum is zero
whenever y i
k and y j
k are not vertices of the same triangle in
Now consider a triangle - 2 Recall that fl was subdivided by the non-
mortar mesh into sub-intervals ! k;i the union of two
or more of these sub-intervals which have the vertices of - as an end-point (see
Figure
R
k and OE j
are zero at
least on one vertex of -;
Now, recall that OE i
k and OE j
are obtained from e
k and e
k respectively, as described
by (3.9). Denote by s i and s j the supports of e
respectively. Then by
[Figure 6. Illustrating the notation in the proof of Lemma 5.3: the mortar domain $\Omega_{M(\gamma)}$ and the non-mortar domain $\Omega_{NM(\gamma)}$; shaded triangles form $\Delta^{in}_{NM}$, unshaded triangles form $\Delta^{out}_{NM}$.]
Lemma 5.2,
denotes the length of may easily be seen that
Further, by quasi-uniformity,
Split the sum over - 2 \Delta NM in (5.24) into a sum over triangles which have a vertex
lying in between y i
k and y j
k on fl; and a sum over the remaining triangles in \Delta NM .
We denote the former set of triangles as \Delta in
NM and latter as \Delta out
Note that the
number of triangles in \Delta in
NM is bounded by Cjy i
We first consider triangles in \Delta in
The observations of the previous paragraph
yield
in
in
A
exp
\Gammac jy i
\Gammac jy i
Now, for the sum over triangles in \Delta out
observe that one of the distances,
out
\Gammac jy i
\Theta
out NM
exp
\Gammac
The sum on the right hand side can be bounded by a summable geometric series.
out
\Gammac jy i
Thus, (5.25), (5.26) and (5.24) give
e
\Gammac jy i
This with the coercivity of e
A(\Delta; \Delta) on M k \Theta M k proves the lemma when y i
k and y j
lie on the same fl: Note that all the above arguments go through when either y i
k or
k is an endpoint of fl:
To conclude the proof, it now suffices to consider the case when y i
k ) is zero unless there is a triangle -
in T k which has one of its edges contained in fl 1 and another contained in In the
latter case, defining s i and s j to be the supports of e
respectively,
and using similar arguments as before, it is easy to arrive at an analogue of (5.25).
Specifically, if d ij is the distance from y i
k to y j
k when traversed along the broken
line
e
from which the required inequality follows as d ij - jy i
Proof of Lemma 4.4: Split u into a function u 0 that vanishes on the interface \Gamma
and a function u \Gamma that is a linear combination of OE i
By the triangle
inequality,
e
On each triangle - in T k ;
c i(- ;j) OE i(- ;j)
k on -;
are the vertices of -: Applying the arithmetic-geometric
mean inequality gives
e
e
All that remains is to estimate e
We clearly have
e
e
Applying Lemma 5.3 gives
e
\Gammac jy i
e
e
Here M is the matrix with entries
\Gammac jy i
and
denotes the cardinality of N \Gamma
k and '\Delta' indicates the standard dot product
in R jN \Gamma
To conclude the proof, it suffices to show that kMk ' 2 is bounded by a constant
independent of h k : Note that kMk ' 2 is equal to the spectral radius of M and
consequently, can be bounded by any induced norm. So,
For every fixed i; the sum on the right hand side can be enlarged to run over all
vertices of the mesh T k ; and then one obtains
exp
\Gammac jy j
ZZ
Thus
6. Implementation
This section will describe some details of implementing the mortar method and
the preconditioner B J . Since we shall be using a preconditioned iteration, all that
is necessary is the implementation of the action of the stiffness matrix and that of
the preconditioner.
Let $\mathbf{A}_k$ denote the stiffness matrix for the mortar finite element method, i.e.,
$(\mathbf{A}_k)_{ij} = \tilde A(\phi^j_k, \phi^i_k)$. Let
$$v = \sum_i p_i\, \phi^i_k \qquad (6.1)$$
be an element of $M_k$. To apply $\mathbf{A}_k$ to $v$, we first expand $v$ in the
basis $\{\tilde\phi^i_k\}$, apply the stiffness matrix for $\tilde M_k$, and finally accumulate $\tilde A(v, \phi^i_k)$,
$i = 1, \ldots, N_k$. The application of the stiffness matrix corresponding to the space
$\tilde M_k$ with nodal basis $\{\tilde\phi^i_k\}$ is standard. As we shall see, the first and last steps are
closely related.
The first step above involves computing the nodal representation of a function $v$
with respect to the basis $\{\tilde\phi^i_k\}$, given the coefficients $\{p_i\}$ appearing in (6.1). Thus,
we seek the vector $\tilde p = \{\tilde p_i\}$ satisfying
$$v = \sum_i \tilde p_i\, \tilde\phi^i_k.$$
Note that $\tilde p_i = p_i$ for every index $i$ corresponding to a degree of freedom of $M_k$.
Thus, we only need to determine the values of $\tilde p_i$
for the remaining indices. These indices appear in some set $\tilde N^\gamma_k$ corresponding
to one of the interface segments. We define the transfer matrix $T_{k,\gamma}$ as the matrix of the
linear map taking the coefficients $\{p_j\}$ to the nodal values of $v$ at the non-mortar interface
nodes of $\gamma$, i.e.,
$$\tilde p_i = \sum_j (T_{k,\gamma})_{ij}\, p_j \quad \text{for } i \in \tilde N^\gamma_k.$$
The last step of accumulating $\tilde A(v, \phi^i_k)$ is also implemented in
terms of $T_{k,\gamma}$. Given the results of the stiffness matrix evaluation on $\tilde M_k$, i.e., the
vector of values $\tilde A(v, \tilde\phi^i_k)$, we need to compute $\tilde A(v, \phi^i_k)$. These agree for nodes
which are not on any of the interface segments, so we only need to compute $\tilde A(v, \phi^i_k)$
for nodes such that $i \in N^\gamma_k$ for some segment. This is given by
$$\tilde A(v, \phi^i_k) = \tilde A(v, \tilde\phi^i_k) + \sum_\gamma \sum_{j \in \tilde N^\gamma_k} (T_{k,\gamma})_{ji}\, \tilde A(v, \tilde\phi^j_k).$$
The sum on $\gamma$ above is over the segments with $i \in N^\gamma_k$.
For convenient notation, let us denote by $T_k$ the matrix of the linear process that
takes $\{p_i\}$ to $\{\tilde p_i\}$. Then, the matrix corresponding to the accumulation step, taking
$\{\tilde A(v, \tilde\phi^i_k)\}$ to $\{\tilde A(v, \phi^i_k)\}$, is the transpose $T^t_k$.
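
Putting the three steps together, the mortar stiffness matrix is applied in the factored form $T_k^t \tilde{\mathbf{A}}_k T_k$, where $\tilde{\mathbf{A}}_k$ is the standard sub-domain-wise stiffness matrix on $\tilde M_k$. A minimal sketch of this matrix-free application follows; the variable names are illustrative assumptions.

```python
def apply_mortar_stiffness(A_tilde, T_k, p):
    """Return the vector {A~(v, phi_i)} for v with mortar coefficients p."""
    p_tilde = T_k @ p            # expand: nodal values w.r.t. the basis of M~_k
    r_tilde = A_tilde @ p_tilde  # standard stiffness application on M~_k
    return T_k.T @ r_tilde       # accumulate onto the mortar basis

# T_k has identity rows for the degrees of freedom of M_k, plus the transfer
# blocks T_{k,gamma} in the rows belonging to non-mortar interface nodes.
```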
We now discuss the implementation of the preconditioner $B_J$. Specifically, we
need a procedure that will compute the coefficients of $B_k v$ (in the basis $\{\phi^i_k\}$) given
the values $(v, \phi^i_k)$, $i = 1, \ldots, N_k$. The corresponding matrix will be denoted by $\mathbf{B}_k$.
The matrix that takes a vector $\{(w, \phi^i_k)\}$ to the coefficients
of $R_k w$ with respect to $\{\phi^i_k\}$ will be denoted by $\mathbf{R}_k$. Finally, let $C_k$ be the matrix
associated with $I_k$, i.e.,
$$I_k \phi^i_{k-1} = \sum_j (C_k)_{ji}\, \phi^j_k.$$
Assuming $\mathbf{B}_{k-1}$ has been defined, we define $\mathbf{B}_k g$ for a $g \in \mathbb{R}^{N_k}$ by:
1. Compute $x^l$ for $l = 1, \ldots, m(k)$ by $x^l = x^{l-1} + \mathbf{R}_k(g - \mathbf{A}_k x^{l-1})$, with $x^0 = 0$.
2. Set $y^{m(k)} = x^{m(k)} + C_k q$, where $q$ is computed by $q = \mathbf{B}_{k-1} C^t_k (g - \mathbf{A}_k x^{m(k)})$.
3. Compute $y^l$ for $l = m(k)+1, \ldots, 2m(k)$ by $y^l = y^{l-1} + \mathbf{R}_k(g - \mathbf{A}_k y^{l-1})$.
4. Set $\mathbf{B}_k g = y^{2m(k)}$.
This algorithm is straightforward to implement as a recursive procedure provided
we have implementations of $\mathbf{R}_k$, $\mathbf{A}_k$, $C_k$, and $C^t_k$.
To compute $q_k = C_k q_{k-1}$, we first let $\tilde q_{k-1} = T_{k-1} q_{k-1}$. We then apply the
coarse to fine interpolation corresponding to the imbedding $\tilde M_{k-1} \subset \tilde M_k$. This
gives a vector which we denote by $\tilde q_k$. Then $q_k$ is given by the truncated vector,
i.e., by restricting $\tilde q_k$ to the degrees of freedom of $M_k$.
To compute the action of the transpose, $q_{k-1} = C^t_k q_k$, we start by defining $\tilde q_k$
to be the vector which extends $q_k$ by $\tilde q^i_k = 0$ at the remaining indices. Next we apply the adjoint
of the coarse to fine imbedding ($\tilde M_{k-1} \subset \tilde M_k$)
to define the vector $\tilde q_{k-1}$. Then $q_{k-1} = T^t_{k-1}\, \tilde q_{k-1}$.
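
The coarse-to-fine transfer and its transpose are thus compositions of three simple operations. The following is an illustrative sketch of those compositions; `interp`, `interp_t`, and `dof` (the indices of the $M_k$ degrees of freedom inside the $\tilde M_k$ node ordering) are assumed names, not the authors' API.

```python
import numpy as np

def apply_Ck(q_coarse, T_km1, interp, dof):
    q_tilde_coarse = T_km1 @ q_coarse       # expand in the basis of M~_{k-1}
    q_tilde_fine = interp(q_tilde_coarse)   # imbedding M~_{k-1} into M~_k
    return q_tilde_fine[dof]                # truncate to the M_k degrees of freedom

def apply_Ck_t(q_fine, T_km1, interp_t, n_tilde_fine, dof):
    q_tilde_fine = np.zeros(n_tilde_fine)
    q_tilde_fine[dof] = q_fine              # extend by zero (adjoint of truncation)
    q_tilde_coarse = interp_t(q_tilde_fine) # adjoint of the coarse-to-fine imbedding
    return T_km1.T @ q_tilde_coarse         # adjoint of the expansion step
```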
Since our codes do not assemble matrices, we use the alternative smoother
$$\bar R_k v = \bar\lambda_k^{-1} \sum_i (v, \phi^i_k)\, \phi^i_k,$$
where $\bar\lambda_k$ is the largest eigenvalue of $\mathbf{A}_k$. This avoids the computation of the diagonal
entries $\tilde A(\phi^i_k, \phi^i_k)$. The corresponding matrix operator $\bar{\mathbf{R}}_k$ is just multiplication
by $\bar\lambda_k^{-1}$.
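
The only scalar this smoother needs is an estimate of the largest eigenvalue, which can be obtained without assembling the matrix. The sketch below, an illustration under the stated matrix-free assumption rather than the authors' code, uses a few power iterations and then returns the resulting Richardson-type smoother.

```python
import numpy as np

def estimate_lambda_max(apply_Ak, n, iters=30, seed=0):
    """Power iteration estimate of the largest eigenvalue of the SPD matrix A_k."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    lam = 1.0
    for _ in range(iters):
        w = apply_Ak(v)
        lam = np.linalg.norm(w) / np.linalg.norm(v)
        v = w / np.linalg.norm(w)
    return lam

def richardson_smoother(apply_Ak, n):
    lam = estimate_lambda_max(apply_Ak, n)
    return lambda r: r / lam        # multiplication by an estimate of lambda_bar^{-1}
```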
We now show that this operator is a good smoother by showing that it satisfies
Condition (C.1). First, (4.2) holds for R k since by Lemma 4.3,
(R k v;
e
Now let v be in M k and p be as in (6.1). Then,
(R k A k v; A k
e
This shows that I \Gamma R k A k is non-negative and hence Condition (C.1) is satisfied.
7. Numerical Results
In this section we give the results of model computations which illustrate that
the condition numbers of the preconditioned system remain bounded as the number
of levels increase. The code takes as input general triangulations generated
independently on subdomains, recursively refines these triangulations by breaking
each triangle into four similar ones, solves a mortar finite element problem and
implements the mortar multigrid preconditioner.
We apply the mortar finite element approximation to the problem (7.1),
where $\Omega$ is the domain pictured in Figure 7 and $f$ is chosen so that the solution
of (7.1) is known in closed form. The domain $\Omega$ is decomposed into sub-domains
and the subdomains are triangulated to get a coarse level mesh as shown
in Figure 7. The triangulations were done using the mesh generator TRIANGLE
[17]. The smoother used was $\bar R_k$ defined in the previous section.
Estimates of the extreme eigenvalues of the operator $B_J A_J$ were given by those of the
Lanczos matrix (see [15]). Note that the eigenvalues of the matrix $\mathbf{B}_J \mathbf{A}_J$ coincide with those
of the operator $B_J A_J$. As can be seen from Table 7.1, the condition numbers remain bounded
independently of the number of levels, as predicted by the theory.
Table 7.1. Conditioning of $B_J A_J$ (columns: level $J$, minimum eigenvalue of $B_J A_J$, maximum eigenvalue of $B_J A_J$, condition number, degrees of freedom).
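
The eigenvalue estimates referred to above can be extracted from a preconditioned conjugate gradient run: the CG scalars define a Lanczos tridiagonal matrix whose extreme eigenvalues approximate those of the preconditioned operator. The following is a generic sketch of that procedure, not the authors' code; operator applications and tolerances are illustrative.

```python
import numpy as np

def pcg_with_eig_estimates(apply_A, apply_B, b, iters=50):
    """Run preconditioned CG and return (solution, min eig, max eig) estimates."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_B(r)
    p = z.copy()
    rz_old = r @ z
    alphas, betas = [], []
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rz_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = apply_B(r)
        rz_new = r @ z
        beta = rz_new / rz_old
        alphas.append(alpha)
        betas.append(beta)
        p = z + beta * p
        rz_old = rz_new
        if rz_new < 1e-28:
            break
    # Lanczos tridiagonal matrix assembled from the CG coefficients.
    k = len(alphas)
    T = np.zeros((k, k))
    for j in range(k):
        T[j, j] = 1.0 / alphas[j] + (betas[j - 1] / alphas[j - 1] if j > 0 else 0.0)
        if j + 1 < k:
            T[j, j + 1] = T[j + 1, j] = np.sqrt(betas[j]) / alphas[j]
    eigs = np.linalg.eigvalsh(T)
    return x, eigs.min(), eigs.max()
```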
--R
The mortar finite element method with Lagrange multipliers.
Domain decomposition by the mortar element method.
A new nonconforming approach to domain decomposition: the mortar element method.
Multigrid Methods.
The analysis of multigrid algorithms with nonnested spaces or noninherited quadratic forms.
Some estimates for a weighted l 2 projection.
The Mathematical Theory of Finite Element Methods.
A hierarchical preconditioner for the mortar finite element method.
On finite element matrices.
PhD thesis
Elliptic Problems in Nonsmooth domains.
On the Poisson equation with intersecting interfaces.
Multilevel Finite Element Approximation.
Iterative Methods for Sparse Linear Systems.
Uniform hp convergence results for the mortar finite element method.
Engineering a 2D Quality Mesh Generator and Delaunay Triangulator.
Galerkin Finite Element Methods for Parabolic Problems.
--TR | mortar;multigrid;finite element method;v-cycle;domain decomposition;preconditioning |
344497 | A Theory-Based Representation for Object-Oriented Domain Models. | AbstractFormal software specification has long been touted as a way to increase the quality and reliability of software; however, it remains an intricate, manually intensive activity. An alternative to using formal specifications directly is to translate graphically based, semiformal specifications into formal specifications. However, before this translation can take place, a formal definition of basic object-oriented concepts must be found. This paper presents an algebraic model of object-orientation that defines how object-oriented concepts can be represented algebraically using an object-oriented algebraic specification language O-Slang. O-Slang combines basic algebraic specification constructs with category theory operations to capture internal object class structure, as well as relationships between classes. | INTRODUCTION
As the field of software engineering continues to evolve toward a more traditional engineering dis-
cipline, a concept that is emerging as important to this evolution is the use of formal specifications,
the representation of software requirements by a formal language [1],[2]. Such a representation
has many potential benefits, ranging from improvement of the quality of the specification itself
to the automatic generation of executable code. While some impressive results have emerged
from the utilization of formal specifications [3],[4], the development of formal specifications to
represent a user's requirements is still a difficult task. This has restricted adoption of formal
specifications by practitioners.
On the other hand, an approach to requirements modeling that has been gaining acceptance
is the use of object-oriented methods. Initially introduced as a programming paradigm, its application
has been extended to the entire software lifecycle. This informal approach, consisting
of graphical representations and natural language descriptions, has many variations, but
Rumbaugh's Object Modeling Technique (OMT) is typical, and perhaps the most widely referenced [5].
In OMT, three models are combined to capture the essence of a software system. The object
model captures the structural aspects of the system by defining objects, their attributes, and the
relationships (associations) between them. The behavior of the system is captured by the other
two models. The dynamic model captures the control flow as a classical state-transition model,
or statechart, while the functional model represents the system calculations as hierarchical data
flow diagrams and process descriptions. All three models are needed to capture the software
system's requirements, although for a given system one or two of the models may be of lesser
importance, or even omitted.
While systems such as KIDS [3] and Specware [6] have been making progress in software
synthesis, research in the acquisition of formal specifications has not been keeping pace. Formal
specification of software remains an intricate, manually intensive activity. Problems associated
with specification acquisition include a lack of expertise in mathematical and logical concepts
among software developers, an inability to effectively communicate formal specifications with
end users to validate requirements, and the tendency of formal notations to restrict solution
creativity [7]. Fraser et al. suggest an approach to overcoming these problems via parallel
refinement of semi-formal and formal specifications. In a parallel refinement approach, designers
develop specifications using both semi-formal and formal representations, successively refining
both representations in parallel [7].
Fig. 1 shows our concept of a parallel refinement system for formal specification development.
In this system, a domain engineer would use a graphically-based object-oriented interface to
specify a domain model. This domain model would be automatically translated into formal
Class Theories stored in a library. A user knowledgeable in the domain would then use the
graphically-based object-oriented interface to refine the domain model into a problem specific
formal Functional Specification. Finally, a software engineer would map the Functional Specification
to an appropriate formal Architecture Theory, generating a specification capable of being
transformed to code by a system such as Specware.
[Figure 1: Parallel Refinement Specification Acquisition Mechanism. The diagram shows an Object-Oriented User Interface in front of a Specification Acquisition Mechanism consisting of a Domain Theory Composition Subsystem (Domain Engineering), a Specification Generation/Refinement Subsystem (Specification Generation), and an Architecture Matching Subsystem (Specification Structuring); a Theory Library (Abstract Types) holding Class Theories and Architecture Theories; inputs of Domain Knowledge, Problem Requirements, Design Decisions, and Functional and Non-Functional requirements; and outputs of Functional Specifications and, finally, System Specifications.]
A critical element for the success of such a system is the definition of a formal representation
that captures all important aspects of object-orientation, along with a formal represention of the
syntax and semantics of the informal model and a mapping for ensuring the full equivalence of the
informal and formal models. While formal representation of the informal model has been done
in bits and pieces [8],[9],[10], a full, consistent, integrated formal object model does not exist.
This paper describes a method for fully representing an object-oriented model using algebraic
theories [11]. An algebraic language, O-Slang, is defined as an extension of Kestrel Institute's
Slang [12]. O-Slang not only supports an algebraic representation of objects, but allows the
use of category theory operations such as morphisms and colimits to combine primitive object
specifications to form more complex aggregates, and to extend object specifications to capture
multiple inheritance [13]. Using this formal representation along with formal transformations
from the informal model, we have demonstrated the automatic generation of formal algebraic
specifications from commercially available object oriented CASE tools.
The remainder of the paper is organized as follows. Section 2 discusses related work and Section
3 presents basic algebraic and category theory concepts. Section 4 introduces the basic object
model while Sections 5 through 7 describe inheritance, aggregation, and object communication
in more detail. Finally, Section 8 discusses our contributions and plans for the future.
2 Related Work
There have been a number of efforts designed to incorporate object-oriented concepts into formal
specification languages. MooZ [14] and Object-Z [15] extend Z by adding object-oriented structures
while maintaining its model-based semantics. Z++ [16] and OOZE [17] also extend Z but
provide semantics based on algebra and category theory. Although these Z extensions provide
enhanced structuring techniques, they do not provide improved specification acquisition meth-
ods. FOOPS [18] is an algebraic, object-oriented specification language based on OBJ3 [19].
Both FOOPS and OBJ3 focus on prototyping, and provide little support for specification acqui-
sition. Some research has been directed toward improving specification acquisition by translating
object-oriented specifications into formal specifications [10]; however, these techniques are based
on Z and lack a strong notion of refinement from specification to code.
3 Theory Fundamentals
Theory-based algebraic specification is concerned with (1) modeling system behavior using algebras
(a collection of values and operations on those values) and axioms that characterize algebra
behavior, and (2) composition of larger specifications from smaller specifications. Composition
of specifications is accomplished via specification building operations defined by category theory
constructs [20]. A theory is the set of all assertions that can be logically proved from the axioms of
a given specification. Thus, a specification defines a theory and is termed a theory presentation.
In algebraic specifications, the structure of a specification is defined in terms of sorts, abstract
collections of values, and operations over those sorts. This structure is called a signature. A
signature describes the structure of a solution; however, a signature does not specify semantics.
To specify semantics, the definition of a signature is extended with axioms defining the intended
semantics of signature operations. A signature with associated axioms is called a specification.
An example of a specification is shown in Figure 2.
spec Array is
sorts E, I, A
operations
apply
axioms 8 E)
(i
Figure
2: Array specification
A specification allows us to formally define the internal structure of object classes (attributes
and operations); however, they do not provide the capability to reason about relationships between
object classes. To create theory-based algebraic specifications that parallel object-oriented
specifications, the ability to define and reason about relationships between theories, similar to
those used in object-oriented approaches (inheritance, aggregation, etc.), must be available. Category
theory is an abstract mathematical theory used to describe the external structure of various
mathematical systems [21] and is used here to describe relationships between specifications.
A category consists of a collection of C-objects and C-arrows between objects such that (1) there
is a C-arrow from each object to itself, (2) C-arrows are composable, and (3) arrow composition
is associative. An obvious example is the category Set where "C-objects" are sets and "C-
arrows" are functions between sets. However, of greater interest in our research is the category
Spec. Spec consists of specifications as the "C-objects" with specification morphisms as the "C-
arrows". A specification morphism, oe, is a pair of functions that map sorts (oe S
operations
spec Finite-Map is
sorts M, D, R
operations
apply
axioms 8
Figure
3: Finite map specification
($\sigma_\Omega$) from one specification to compatible sorts and operations of a second specification such
that the axioms of the first specification are theorems of the second specification. Intuitively,
specification morphisms define how one specification is embedded in another. An example of a
morphism from Array to Finite-Map (Figure 3) is shown below.
$$\sigma = \{\, A \mapsto M,\ I \mapsto D,\ E \mapsto R,\ \mathrm{apply} \mapsto \mathrm{apply} \,\}$$
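
For illustration only (this is not O-Slang or the paper's tooling), a signature morphism can be represented concretely as a pair of maps and checked for signature compatibility, i.e., every mapped operation must exist in the target with the translated profile. The example assumes the usual profiles apply : A, I -> E and apply : M, D -> R; checking that translated axioms are theorems of the target would additionally require a prover and is omitted.

```python
ARRAY = {"sorts": {"A", "I", "E"},
         "ops": {"apply": (("A", "I"), "E")}}
FINITE_MAP = {"sorts": {"M", "D", "R"},
              "ops": {"apply": (("M", "D"), "R")}}

sigma_sorts = {"A": "M", "I": "D", "E": "R"}
sigma_ops = {"apply": "apply"}

def is_signature_morphism(src, tgt, sort_map, op_map):
    """Syntactic check: sorts map to target sorts, op profiles translate correctly."""
    if not all(sort_map.get(s) in tgt["sorts"] for s in src["sorts"]):
        return False
    for name, (args, result) in src["ops"].items():
        image = tgt["ops"].get(op_map.get(name))
        if image is None:
            return False
        translated = (tuple(sort_map[a] for a in args), sort_map[result])
        if image != translated:
            return False
    return True

print(is_signature_morphism(ARRAY, FINITE_MAP, sigma_sorts, sigma_ops))  # True
```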
Specification morphisms comprise the basic tool for defining and refining specifications. Our
toolset can be extended to allow the creation of new specifications from a set of existing spec-
ifications. Often two specifications derived from a common ancestor specification need to be
combined. The desired combination consists of the unique parts of two specifications and some
"shared part" common to both specifications (the part defined in the shared ancestor specifica-
tion). This combining operation is a colimit.
Conceptually, the colimit is the "shared union" of a set of specifications based on the morphisms
between the specifications. These morphisms define equivalence classes of sorts and operations.
For example, if a morphism, $\sigma$, from specification A to specification B maps sort $\alpha$ to sort $\beta$,
then $\alpha$ and $\beta$ are in the same equivalence class and thus become a single sort in the colimit
specification of A, B, and $\sigma$. The colimit operation creates a new specification, the colimit
specification, and a specification morphism from each specification to the colimit specification.
An example showing the relationship between a colimit and multiple inheritance is provided in
Section 5.
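
As a small illustration (plain Python, not O-Slang), the sort identifications a colimit performs can be computed with a union-find over the images of the shared ancestor's sorts under the two morphisms. The names used are hypothetical.

```python
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def colimit_sorts(shared_sorts, sorts_b, sorts_c, sigma_b, sigma_c):
    """Equivalence classes of sorts in the colimit of B <- A -> C."""
    uf = UnionFind()
    for s in shared_sorts:                       # sorts of the ancestor A
        uf.union(("B", sigma_b[s]), ("C", sigma_c[s]))
    for s in sorts_b:
        uf.find(("B", s))
    for s in sorts_c:
        uf.find(("C", s))
    classes = {}
    for key in list(uf.parent):
        classes.setdefault(uf.find(key), set()).add(key)
    return list(classes.values())

# e.g. colimit_sorts({"Acct"}, {"Acct", "SAcct"}, {"Acct", "CAcct"},
#                    {"Acct": "Acct"}, {"Acct": "Acct"})
# identifies the two copies of Acct and keeps SAcct and CAcct distinct.
```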
From these basic tools (morphisms and colimits), we can construct specifications in a number of
ways [20]. We can (1) build a specification from a signature and a set of axioms, (2) form the union
of a set of specifications via a colimit, (3) rename sorts or operations via a specification morphism,
and (4) parameterize specifications. Many of these methods are useful in translating object-oriented
specifications into theory-based specifications. Detailed semantics of object-oriented
concepts using specifications and category theory constructs are presented next.
4 Object Classes
The building block of object-orientation is the object class which defines the structure of an object
and its response to external stimuli based on its current state. Formally defined in Section 4.1 as
a class type, a class is a template from which individual object instances can be created. Fig. 4
shows a specification of a banking account class in O-Slang.
4.1 Class Structure
In our theory-based object model, we capture the structure of a class as a theory presentation,
or algebraic specification, as follows.
Definition 1 Class Type - A class type, C, is a signature, $\Sigma = \langle S, \Omega \rangle$, and a set of
axioms, $\Phi$, over $\Sigma$ (i.e., a theory presentation, or specification) where
S denotes a set of sorts including the class sort,
$\Omega$ denotes a set of functions over S, and
$\Phi$ denotes a set of axioms over $\Sigma$.
Sorts in S are used to describe collections of data values used in the specification. In O-Slang
a distinguished sort, the class sort, is the set of all possible objects in the class. In an algebraic
sense, this is really the set of all possible abstract value representations of objects in the class.
Functions in $\Omega$ are classified in O-Slang syntax as attributes, methods, state-attributes, states,
class Acct is
import Amnt, Date
class sort Acct
sorts Acct-State
operations
attributes
date : Acct ! Date
bal : Acct ! Amnt
state-attributes
acct-state : Acct ! Acct-State
methods
create-acct : Date ! Acct
credit : Acct, Amnt ! Acct
debit : Acct, Amnt ! Acct
states
ok, overdrawn : ! Acct-State
events
new-acct : Date ! Acct
deposit : Acct, Amnt ! Acct
withdrawal : Acct, Amnt ! Acct
axioms
% state uniqueness and invariant axioms
ok ≠ overdrawn
8 (a: Acct) bal(a) >= 0 => acct-state(a) = ok
8 (a: Acct) bal(a) < 0 => acct-state(a) = overdrawn
% operation definitions
% method definitions
% event definitions
8 (a: Acct, x: Amnt) acct-state(a)=ok
8 (a: Acct, x: Amnt) acct-state(a)=overdrawn
end-class
Figure 4: Object Class
events, and operations. Attributes are defined implicitly by visible functions which return specific
data values. In Fig. 4, the functions date and bal are attributes. Methods are non-visible functions,
invoked via visible events, that modify an object's attribute values. A method's domain includes
an object, along with additional parameters, while the return value is always the modified object.
In Fig. 4, the functions create-acct, credit and debit are methods. The semantics of functions,
as well as invariants between class attribute values, are defined using first order predicate logic
axioms. In general, axioms define methods by describing their effects on attribute values, as in
the following example: bal(credit(a, x)) = bal(a) + x.
4.2 Class Behavior
States. In our model, a state is a partition of the cross-product of an object's attribute values.
For example, a bank account might be partitioned into an ok and an overdrawn state based on
a partitioning of its balance values. Formally, a class type has at least one state sort (multiple
state sorts allow modeling concurrent state models and substate models), a set of states which
are elements in a state sort (defined by nullary functions), a state attribute defined over each
state sort as a function which returns the current state of an object, and a set of state invariants,
axioms that describe constraints on class attributes that must hold true while in a given state.
In our object model, we separate state attributes from normal attributes to capture the notion
of an object's abstract state as might be defined in a statechart. The values of state attributes
define an object's abstract state while the values of normal attributes define an object's true
state. In Fig. 4, the class state sort is Acct-State, the class state attribute is acct-state, the state
constants are ok and overdrawn, and the state invariants are bal(a) ≥ 0 ⇒ acct-state(a) = ok and bal(a) < 0 ⇒ acct-state(a) = overdrawn.
These axioms state that when the balance of an account is greater than or equal to zero, the
account must be in the ok state; however, when the balance of the account becomes less than zero,
the state must become overdrawn. While it is tempting to replace the implication operators with
equivalence operators, doing so would unnecessarily restrict subclasses derived from this class as
defined in Section 5. Additionally, the axiom
ok ≠ overdrawn
ensures correct interpretation of the specification by stating that ok and overdrawn are distinct.
Events. Events are visible functions that allow objects to communicate with each other and
may directly modify state attributes. We present a more detailed discussion of the specification
of this communication between objects in Section 7. As a side effect, receipt of an event may
cause the invocation of methods or the generation of events sent to other objects. Events are
distinct from methods to separate control from execution. This separation keeps us from having
to embed state-based control information within methods. Each class has a new event which
triggers the create method and initializes the object's state attributes. In Fig. 4, the functions
new-acct, deposit and withdrawal are events. The effect of these events on the class behavior,
which can be represented by the statechart in Fig. 5, is defined by a set of axioms similar to the
following event-definition axiom from Fig. 4: acct-state(a) = ok ⇒ deposit(a, x) = credit(a, x).
[Figure 5: Account Statechart - states OK and overdrawn, with transitions labeled new-acct(d) and deposit(a,x)/credit(a,x).]
Class Operations. Operations are visible functions that are generally used to compute derived
attributes and may not directly modify attribute values. In Fig. 4, the function acct-attr-equal is
an operation. Similar to methods and events, the semantics of operations are defined using first
order predicate logic axioms.
5 Inheritance
Class inheritance plays an important role in object-orientation; however, the correct use of inheritance
is not uniformly agreed upon. In our work we have chosen to use a strict form of
inheritance that allows a subclass object to be freely substituted for its superclass in any situa-
tion. This subtype interpretation was selected to simplify reasoning about the class's properties
and to keep it closely related to software synthesis concepts [6]. We believe the advantages of
strict inheritance outweigh its disadvantages in our research since most arguments favoring a
less strict approach to inheritance - such as polymorphism and overloading - are much more
germane to implementation than to specification. Thus, as a subtype, a subclass may only extend
the features of its superclass. Liskov defines these desired effects as the "substitution property"
[22]:
If for each object $o_1$ of type S there is an object $o_2$ of type T such that for all programs P
defined in terms of T, the behavior of P is unchanged when $o_1$ is substituted for $o_2$, then S
is a subtype of T.
The only way to ensure the substitution property holds in all cases is to ensure that the
effects of all superclass operations performed on an object are equivalent in the subclass and the
superclass. To achieve this, inheritance must provide a mapping from the sorts, operations, and
attributes in the superclass to those in the subclass that preserve the semantics of the superclass.
This is the basic definition of a specification morphism (extended for O-Slang to map class-
sorts to class-sorts, attributes to attributes, methods to methods, etc.) and provides us a formal
definition of inheritance [13].
Specification morphisms map the sorts and operations of one algebraic specification into the
sorts and operations of a second specification such that the axioms in the first specification are
theorems in the second specification [13]. Thus, in essence, a specification morphism defines an
embedding of functionality from one specification into a second specification.
Definition 2 Inheritance - A class D is said to inherit from a class C if
there exists a specification morphism from C to D and the class sort of D is a sub-sort of the
class sort of C (i.e., $D_{cs} < C_{cs}$).
Definition 2 provides a concise, mathematically precise definition of inheritance and ensures
the preservation of the substitution property as stated in Theorem 1 [11].
Theorem 1 Given a specification morphism, $\sigma : C \to D$, between two internally consistent
classes C and D such that $D_{cs} < C_{cs}$, the substitution property holds between C and D.
Since we assume user defined specifications are initially consistent, we can ensure consistency
in a subclass as long as the user does not introduce new axioms in the subclass that redefine how
a method defined in the superclass affects an attribute also defined in the superclass.
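
As a toy illustration of the substitution property in ordinary Python (not O-Slang), a program written purely against the superclass behaves identically when given a subclass instance, because the subclass only extends the superclass. The class and method names follow the paper's banking example; the code itself is an assumption for illustration.

```python
class Acct:
    def __init__(self, bal=0):
        self.bal = bal
    def credit(self, x):
        self.bal += x
    def debit(self, x):
        self.bal -= x
    def state(self):
        return "ok" if self.bal >= 0 else "overdrawn"

class SAcct(Acct):                       # extends Acct; overrides nothing
    def __init__(self, bal=0, rate=0.02):
        super().__init__(bal)
        self.rate = rate
    def compute_interest(self):
        self.credit(self.bal * self.rate)

def program(acct):                       # written purely in terms of Acct
    acct.credit(100)
    acct.debit(150)
    return acct.state()

assert program(Acct()) == program(SAcct()) == "overdrawn"
```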
An example of single inheritance using a subclass of the ACCT class is shown in Fig. 6. The
import statement includes all the sorts, functions, and axioms declared in the ACCT class directly
into the new class while the class sort declaration SAcct ! Acct states that SAcct is a sub-sort
of Acct, and as such, all functions and axioms that apply to an Acct object apply to a SAcct
object as well. A statechart for SACCT is shown in Fig. 7. The import operation defines a
specification morphism between ACCT and SACCT while the sub-sort declaration completes
the requirements of Definition 2 for inheritance. Therefore, SACCT is a valid subclass of ACCT
and the substitution property holds.
5.1 Multiple Inheritance
Multiple inheritance requires a slight modification to the notion of inheritance as stated in
Definition 2. The set of superclasses must first be combined via a category theory colimit
operation and then used to "inherit from".
Based on specification morphisms, the colimit operation composes a set of existing specifications
to create a new colimit specification [21]. This new colimit specification contains all the
sorts and functions of the original set of specifications without duplicating the "shared" sorts and
functions from a common "ancestor" specification. Conceptually, the colimit of a set of specifications
is the "shared union" of those specifications. Therefore, the colimit operation creates a
new specification, the colimit specification, and a morphism from each specification to the colimit
specification.
Definition 3 Multiple Inheritance - A class D multiply inherits from a set of classes $\{C_1 \ldots C_n\}$
if there exists a specification morphism from the colimit of $\{C_1 \ldots C_n\}$ to D such that the
class sort of D is a sub-sort of each of the class sorts of $\{C_1 \ldots C_n\}$.
This definition states that all sorts and operations from each superclass map to sorts and
operations in the subclass such that the defining axioms are logical consequences of the axioms
class SAcct is
import Acct, Rate
class
operations
attributes
rate
methods
create-sacct
events
new-sacct
compute-interest : SAcct, Date ! SAcct
axioms 8 (d: Date, r: Rate, a, a1: SAcct)
operation definitions
method definition
8 (s: SAcct, a: Amnt)
8 (s: SAcct, a: Amnt)
event definitions
end-class
Figure
Savings Class
OK overdrawn
deposit(a,x)/credit(a,x)
rate-change(a,d,r)
rate-change(a,d,r)
compute-interest(a,d)
/int(a,d)
Figure
7: Savings Account Statechart
of the subclass. This implies that all operations defined in a superclass are applicable in the
subclass as well. This definition ensures that the subclass D inherits, in the sense of Definition
2, from each superclass in $\{C_1 \ldots C_n\}$, as shown in Theorem 2 below [11].
Theorem 2 Given a specification morphism from the colimit of fC 1 . C n g to D such that the
class sort of D is a sub-sort of each of the class sorts of fC 1 . C n g, the substitution property
holds between D and each of its superclasses fC 1 . C n g.
It is important to note that Definition 3 only ensures valid inheritance when the axioms defining
each operation in the superclass specifications $\{C_1 \ldots C_n\}$
are complete. Failure to completely
define operations can result in an inconsistent colimit specification [11].
We can use multiple inheritance to combine the features of a savings account with those of
a checking account, CACCT, as defined in Fig. 8. To compute the resulting class, the colimit
of the classes ACCT, SACCT, CACCT, and morphisms from ACCT to SACCT and CACCT is
computed as shown in Fig. 9, where an arrow labeled with an "i" represents an import morphism
and a "c" represents a morphism formed by the colimit operation. A simple extension of the
colimit specification with the class sort definition
Comb-Acct ! SAcct; CAcct
yields the desired class where Comb-Acct is a subclass of both SAcct and CAcct, as denoted by
the ! operator in the class sort definition. Fig. 10 shows the "long" version of the combined
specification signature with all the attributes, methods, and events inherited by the Comb-Acct
class (axioms are omitted for brevity).
class CAcct is
import Acct class sort CAcct ! Acct
operations
attributes
check-cost
methods
create-cacct
set-check-cost : CAcct, Amnt ! CAcct
events
new-cacct
change-check-cost : CAcct, Amnt ! CAcct
axioms 8 (a: CAcct, x: Amnt)
axioms omitted
end-class
Figure
8: Checking Class
[Figure 9: Colimit of Accounts - Acct maps into SAcct and CAcct via import morphisms (i), and Acct, SAcct, and CAcct map into Comb-Acct via colimit morphisms (c).]
class Comb-Acct is
import SAcct, CAcct
class
sorts Acct-State
operations
attributes
rate
check-cost
state-attributes
methods
create-acct Comb-Acct
create-sacct Comb-Acct
create-cacct Comb-Acct
create-comb-acct Comb-Acct
Comb-Acct
Comb-Acct
Comb-Acct
Comb-Acct
set-check-cost Comb-Acct
Comb-Acct
states
events
new-acct Comb-Acct
new-sacct Comb-Acct
new-cacct Comb-Acct
new-comb-acct Comb-Acct
Comb-Acct
Comb-Acct
Comb-Acct
compute-interest Comb-Acct
change-check-cost Comb-Acct
Comb-Acct
axioms 8 (a: CAcct, x: Amnt)
axioms omitted
Figure
10: Combined Account Signature
6 Aggregation
Aggregation is a relationship between two classes where one class, the aggregate, represents an
entire assembly and the other class, the component, is "part-of" the assembly. Not only do aggregate
classes allow the modeling of systems from components, but they also provide a convenient
context in which to define constraints and associations between components. Aggregate class
behavior is defined by that of its components and the constraints between them. Thus aggregates
impose an architecture on the domain model and specifications derived from it.
Components of an aggregate class are modeled similarly to attributes of a class through the
concept of Object-Valued attributes. An object-valued attribute is a class attribute whose sort
type is a set of objects - the class-sort of another class. Formally, they are specification functions
that take an object and return an external object or set of objects. The effects of methods on
object-valued attributes are similar to those for normal attributes. However, instead of directly
specifying a new value for an object-valued attribute, an event is sent to the object stored in the
object-valued attribute. We can formally define an aggregate using the colimit operation and
object-valued attributes.
Definition 4 Aggregate - A class C is an aggregate of a set of component classes, $\{D_1 \ldots D_n\}$,
if there exists a specification morphism from the colimit of $\{D_1 \ldots D_n\}$ to C such that C has at
least one corresponding object-valued attribute for each class sort in $\{D_1 \ldots D_n\}$.
An aggregate class combines a number of classes via the colimit operation to specify a system
or subsystem. The colimit operation also unifies sorts and functions defined in separate classes
and associations to ensure that the associations actually relate two (or more) specific classes. To
capture a domain model within a single structure, we can create a domain-level aggregate. To
create this aggregate, the colimit of all classes and associations within the domain is taken.
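
A toy Python rendering (not O-Slang) of an aggregate with object-valued attributes is sketched below: the Bank aggregate refers to its component Customer and Acct objects and the links between them, and forwards events to the components rather than manipulating their attributes directly. The class names follow the running banking example; the structure is an illustrative assumption.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Customer:
    name: str

@dataclass
class Acct:
    bal: float = 0.0
    def deposit(self, x: float) -> None:
        self.bal += x

@dataclass
class CALink:                      # a link: object-valued attributes only
    customer: Customer
    account: Acct

@dataclass
class Bank:                        # the aggregate
    customers: List[Customer] = field(default_factory=list)
    accounts: List[Acct] = field(default_factory=list)
    cust_acct: List[CALink] = field(default_factory=list)

    def open_account(self, customer: Customer) -> Acct:
        acct = Acct()
        self.accounts.append(acct)
        self.cust_acct.append(CALink(customer, acct))
        return acct

    def deposit(self, acct: Acct, x: float) -> None:
        acct.deposit(x)            # the event is forwarded to the component

bank = Bank()
alice = Customer("Alice")
bank.customers.append(alice)
a = bank.open_account(alice)
bank.deposit(a, 50.0)
```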
6.1 Aggregate Structure
An aggregate consists of a number of classes and provides a convenient means to define additional
constructs and relationships. These constructs include class sets, individual components, and
associations.
6.1.1 Class Set
A class type definition specifies a template for creating new instances. In order to manage a set
of objects in a class, a class set is created for each class defined.
Definition 5 Class Set - A class set is a class whose class sort is a set of objects from a
previously defined object class, C. A class set includes a "class event" definition for each event
in C such that the reception of a class event by a class set object sends the corresponding event
in C to each object of type C contained in the class set object. If the class C is a subclass of
$\{D_1 \ldots D_n\}$, then the class set of C is a subclass of the class sets of $D_1 \ldots D_n$.
The class set creates a class type whose class sort is a set of objects and defines some basic
functions on that set. For example, in Fig. 11 ACCT-CLASS imports the ACCT class specifica-
class Acct-Class is
contained-class ACCT
class sort Acct-Class
events
axioms
end-class
Figure
11: O-Slang Class Set Specification
tion and adds additional "class" events. These class events mirror the "object" events defined in
the class type and distribute the event invocation to each object in the class set. The resulting
specification is effectively a set of Acct objects. Using the category theory colimit operation, a
class type specification can be combined with a basic SET specification to automatically derive
the class set specification.
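
The class-event behavior described above can be pictured with a short plain-Python sketch (not O-Slang): a class set object holds its members and broadcasts each class event to every contained object. Names are illustrative.

```python
class Acct:
    def __init__(self, bal=0.0):
        self.bal = bal
    def deposit(self, x):
        self.bal += x

class AcctClass:                       # class set: a set of Acct objects
    def __init__(self):
        self.members = []
    def add(self, acct):
        self.members.append(acct)
    def class_deposit(self, x):        # class event mirroring Acct.deposit
        for acct in self.members:
            acct.deposit(x)

accts = AcctClass()
accts.add(Acct())
accts.add(Acct(10.0))
accts.class_deposit(5.0)               # every member receives the event
```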
6.1.2 Specification of Components
Components may have either a fixed, variable, or recursive structure. All three structures use
object-valued attributes to reference other objects and define the aggregate. The difference
between them lies in the types of objects that are referenced and the functions and axioms
defined over object-valued attributes. In a fixed configuration, once an aggregate references a
particular object, that reference may not be changed. The ability of an aggregate object to
change the object references of its object-valued attributes is determined by whether a method
exists, other than the initialization method, to modify the object-valued attribute. If no methods
modify any object-valued attributes then the aggregate is fixed. If methods do modify the object-
valued attributes, then the aggregate is variable. A recursive structure is also easily represented
using object-valued attributes. In this case, an object-valued attribute is defined in the class
type that references its own class sort.
6.1.3 Associations
Associations model the relationships between an aggregate's components. We define a link as
a single connection between object instances and an association as a group of such links. A
link defines what object classes may be connected along with any attributes or functions defined
over the link. Link attributes and link functions are those that do not belong to any one of the
objects involved, but exist only when there is a link between objects. Formally, associations are
represented generically as a specification that defines a set of individual links. A link defines
a specification that uses object-valued attributes to reference individual objects from two or
more classes. Links may also define link attributes or functions in a manner identical to object
classes. Basically, a link is a class whose class-set is an association while an association is a set
of links. Associations with more than two classes are handled in a similar manner by simply
adding additional object-valued attributes.
Definition 6 Link A link is an object class type with two or more object-valued attributes.
Definition 7 Association An association is the class set of a link specification.
Multiplicity is defined as the number of links of an association in which any given object may
participate. For a binary association, an image operation is defined for each class in the associ-
ation. The image operation returns a set of objects with which a particular object is associated
and is used to define multiplicity constraints as shown in Fig. 12. For binary associations, we
allow five categories of association multiplicities: exactly one, many, optional, one or more, or
numerically specified. True ternary or higher level associations are relatively rare; however, they
(Figure content: one axiom per multiplicity category, exactly one, many, optional, one or more, and numerically specified, each expressed as a constraint on the image operation.)
Figure 12: Association Multiplicity Axioms
can be modeled using an association class. In a ternary association, the image operation returns
a set of object tuples associated with a given object. Since the output is a set of tuples, the same
multiplicity axioms shown in Fig. 12 apply.
6.1.4 Banking Example
An example of a link specification between a class of customers CUST (not illustrated) and the
ACCT class to associate customers with their accounts is shown in Fig. 13. The CA-Link link
specification can relate objects from the two classes without embedding internal references into
the classes themselves. Although the names of the object-valued attributes and sorts correspond
to the CUSTOMER and ACCT classes, the link specification does not formally tie the classes
together. This relationship is actually formalized in the aggregate specification. The association
between the ACCT class and the CUSTOMER class is shown in Fig. 14.
The CUST-ACCT class defines a set of CA-Link objects while its axioms define the multiplicity
relationships between accounts and customers, in this case exactly one customer per account while
each customer may have one or more accounts.
The CUSTOMER, ACCT, and CUST-ACCT classes are then combined to form an aggregate
BANK. The sorts from CUST and CUST-ACCT and the sorts from ACCT and CUST-ACCT
are unified via specification morphisms that define their equivalence as shown in Fig. 15. The
actual specification of the aggregation colimit BANK-AGGREGATE is not shown, but is further
refined into the aggregate specification for BANK seen in Fig. 16. The SET specification is used
to unify sorts while the INTEGER specification ensures only a single copy of integers is included.
Three copies of the SET specification are included since each class requires a unique set.
Once the BANK-AGGREGATE specification is computed, the CUST-ACCT association actually
associates the CUSTOMER class to the ACCT class. New functions and axioms can
link CA-Link is
class sort CA-Link
sorts Customer, Account
operations
attributes
customer
account
methods
events
axioms
operation definition
create method definition
∀ (c: Customer, a: Account)
new event definition
∀ (c: Customer, a: Account)
ca-link-attr-equal(new-ca-link(c,a), create-ca-link(c,a))
end-link
Figure 13: Customer Account Link
association Cust-Acct is
link-class CA-Link
class sort Cust-Acct
sorts Accounts, Customers
methods
image : Cust-Acct, Customer → Accounts
image : Cust-Acct, Account → Customers
events
new-cust-acct Cust-Acct
axioms
% multiplicity axioms
new event definition
. definition of image operations .
end-association
Figure 14: Cust-Acct Association
(Diagram: the Acct-Class, Cust-Class, Cust-Acct, and Integer specifications are composed via a colimit into Bank-Aggregate, which is then refined into Bank.)
Figure 15: Aggregation Composition
class Bank is
import
class sort Bank
attributes
Cust-Acct
methods
aggregate methods defined here
update-accts
update-cust-acct Cust-Acct
events
start-account
axioms
definition of aggregate methods in terms of components here
invariants
% definition of operations
∀ (b: Bank, a: Address, an: Acct-No, c: Customer)
% definition of methods
∀ (b, b1: Bank, an: Acct-No, c: Customer)
add-account(b, c,
date built in
% definition of events
attr-equal(new-bank(), create-bank());
∀ (b:Bank, an:Acct-No, c:Customer) attr-equal(start-account(b,c,an),add-account(b,c,an));
∀ (b: Bank, an: Acct-No, c: Customer, am: Amount)
∀ (b: Bank, an: Acct-No, c: Customer, am: Amount)
end-class
Figure 16: Aggregate Specification
be added to an extension of colimit specification, the BANK class type specification, shown in
Fig. 16, to describe aggregate-level interfaces and aggregate behavior based on component events
and methods.
6.2 Aggregate Behavior
Once an aggregate is created via a colimit operation, further specification is required to make
the aggregate behave in an integrated manner. First, new aggregate level functions are defined
to enable the aggregate to respond to external events. Then, constraints between aggregate
components are specified to ensure that the aggregate does not behave in an unsuitable or
unexpected manner. Finally, local event communication paths are defined. The definition of
new functions and constraints is discussed in this section while communication between objects
is discussed in Section 7.
6.2.1 Specification of Functionality
In an aggregate, components work together to provide the desired functionality. Functional
decomposition, often depicted using data flow diagrams (DFDs), is used to break aggregate-level
methods into lower-level processes. Processes defined in the functional model are mapped to
events and attributes defined in the aggregate components through aggregate-level axioms.
An example is shown by the data flow diagram in Fig. 17 for the aggregate method add-account,
used to implement aggregate event start-account. The make-deposit and make-withdrawal events
map directly to component events and do not require further functional decomposition.
The add-account process adds an account for an established customer and is defined in terms
of operations defined directly in the bank specification (Figure 16) or included into the bank
specification via the bank aggregate specification from the customer-account link specification
(Figure 13)
and the account specification (Figure 4). The following axiom defines add-account
in terms of these subprocesses and data flows as depicted in the data flow diagram.
(Data flow diagram: the add-account method decomposes into the new-acct, new-ca-link, update-accts, and update-cust-acct processes, connected by customer, acct, and cust-acct flows into the Cust-Acct-Assoc store.)
Figure 17: Bank Aggregate Functional Decomposition
% assume date is built in
The add-account method has two parameters, the bank object, b, plus an existing customer object
as shown in the data flow diagram, and returns the modified bank object, b1. The add-account
method is defined by its subprocesses. First, a new account acct is created by invoking the
new-acct process. This is passed to the update-accts process which stores the new account in the
account class, and to the new-ca-link process, along with the customer, which returns a cust-acct
link. Finally, the new cust-acct link is passed to the update-cust-acct process which stores it in
the cust-acct association. The new-acct and new-ca-link processes are the events defined in the
Acct class and the CA-Link association respectively and are already available via the aggregate.
The update-accts and update-cust-acct processes could already exist as part of the account class
and cust-acct association, but as shown here are defined in the aggregate specification.
6.2.2 Specification of Constraints Between Components
In an aggregate, component behavior must often be constrained if the aggregate is to act in an
integrated fashion. Generally, these constraints are expressed by axioms defined over component
attributes. Because the aggregate is the colimit of its components, the aggregate may access
components directly and define axioms relating various component attributes.
(Object diagram: an Automobile aggregate containing an Engine (RPMs), a Transmission (Conversion-Factor), and Wheels (RPMs), related by Drives and Connected associations.)
Figure 18: Automobile Aggregate Functional Decomposition
A simplified automobile object diagram is shown in Fig. 18. The object diagram contains one
engine with an RPMs attribute, one transmission with a Conversion-Factor attribute, and four
wheels, each with an RPMs attribute. Two relationships exist between these objects, Drives, that
relates the transmission to exactly two wheels, and Connected, that relates two wheels (probably
by an axle). Obviously, there are a number of constraints implicit in the object diagram that
must be made explicit in the aggregate. First, the RPMs of the engine, Conversion-Factor of the
transmission, and RPMs of the wheels are all related. Also, the wheels driven by the transmission
must be "connected," and all "connected" wheels should have the same RPMs. The axiom
defines the relationship between the RPMs of the wheels driven by the transmission, the transmission
conversion-factor and the engine RPMs. In this case, wheel-obj is an object valued attribute
of a drives link that points to the two wheels connected to the transmission. The axiom
ensures that the two wheels connected by a connected link have the same RPMs values (here
wheel1 and wheel2 are the object valued attributes of the link). The final constraint, that the
two wheels driven by the transmission be connected, is specified implicitly in the specification of
the create-automobile method. After the transmission and wheel objects (w1, w2, w3, and w4)
are created in lines 1 through 5, drives and connected links are created and defined to ensure the
appropriate constraints are met in lines 6 through 9. Finally in line 10, the engine is created and
inserted into the automobile aggregate.
Because wheels w1 and w2 are associated with the transmission via the drives association in
the line 6, they are also associated together via the connected association in line 8. Thus, the
constraint is satisfied whenever an automobile aggregate object is created.
7 Object Communication
At this point our theory-based object model is sufficient for describing classes, their relationships,
and their composition into aggregate classes; however, how objects communicate has not yet been
addressed. For example, suppose the banking system described earlier has an ARCHIVE object
which logs each transaction as it occurs. Obviously, the ARCHIVE object must be told when a
transaction takes place. In our model, each object is aware of only a certain set of events that
it generates or receives. From an object's perspective, these events are generated and broadcast
to the entire system and received from the system. In this scheme, each event is defined in a
separate event theory as shown in Fig. 19.
An event theory consists of a class sort, parameter sorts, and an event signature that are
mapped via morphisms to sorts and events in the generating and receiving classes. If an event is
being sent to a single object then the event theory class sort is mapped to the class sort of that
event Archive-Withdrawal is
class sort Archive
sorts Acct, Amnt
events
Archive
end-class
Figure 19: Event Theory
object class. However, if the event theory class sort is mapped to the class sort of a class set then
communication may occur with a set of objects of that class. The other sorts in an event theory
class are the sorts of event parameters. The final part of an event theory, the event signature,
is mapped to a compatible event signature in the receiving class. The colimit of the classes, the
event theory, and the morphisms unify the event and sorts such that invocation of the event in
the generating class corresponds to an invocation of the actual event in the receiving class.
To incorporate an event into the original ACCT class, the ARCHIVE-WITHDRAWAL event
theory specification is imported into the ACCT class and an object-valued attribute, archive-obj,
is added to reference the archival object. The axioms defining the effect of the withdrawal event
are modified to reflect the communication with the ARCHIVE object as shown below.
Basically, the axioms state that when a withdrawal event is received, the value of the archive-obj
is modified by the archive-withdrawal event defined in the event theory specification. Thus the
ACCT object knows it communicates with some other object or objects; however, it does not
know who they are. With whom an object communicates (or, for that matter, if the object
communicates at all) is determined at the aggregate-level where the actual connections between
communicating components are made.
The modified BANK aggregate diagram that includes the ARCHIVE-WITHDRAWAL event
theory and an ARCHIVE-CLASS specification is shown in Fig. 20. The colimit operation includes
morphisms from ARCHIVE-WITHDRAWAL to ACCT-CLASS and ARCHIVE-CLASS
that unify the sorts and event signature in ACCT-CLASS with the sorts and event signature of
ARCHIVE-CLASS. This unification creates the communication path between account objects
and archive objects.
(Colimit diagram: the bank aggregate components Acct-Class, Cust-Class, and Cust-Acct together with the Archive-Withdrawal event theory and Archive-Class, with morphisms into Bank.)
Figure 20: Bank Aggregate with Archive
Communicating with objects from multiple classes requires the addition of another level of
specification which "broadcasts" the communication event to all interested object classes. The
class sort of a broadcast theory is called a broadcast sort and represents the object with which the
sending object communicates. The broadcast theory then defines an object-valued attribute for
each receiving class. Fig. 21 shows an example of the ARCHIVE-WITHDRAWAL-MULT event
theory modified to communicate with two classes. In this case, the ARCHIVE-WITHDRAWAL
theory is used to unify the ARCHIVE-WITHDRAWAL-MULT with the ACCOUNT class as well
as the other two classes. A simplified version of the colimit diagram specification is shown in
Fig. 22.
Multiple receiver classes add a layer of specification; however, multiple sending classes are
handled very simply. The only additional construct required is a morphism from each sending
class to the event theory mapping the appropriate object-valued attribute in the sending class
to the class sort of the event theory and the event signature in the sending class to the event
signature in the event theory.
event Archive-Withdrawal-Mult is
class sort Archive
sorts Amnt, Acct, X, Y
attribute
events
Archive
axioms
∀ (a: Archive, ac: Acct, am: Amnt)
end-class
Figure 21: Broadcast Theory
(Colimit diagram: the bank aggregate components together with the Archive-Withdrawal-Mult broadcast theory, connected through Archive-Withdrawal event theories to Acct-Class, Archive-Class, and Printer-Class.)
Figure 22: Aggregate Using a Broadcast Theory
7.1 Communication Between Aggregate and Components
Communication between components is handled at the aggregate level as described above. How-
ever, when the communication is between the aggregate and one of its components, the unification
of object-valued attributes and class sorts via event theories does not work since the class sort
of the aggregate is not created until after the colimit is computed. The solution requires the use
of a sort axiom that equivalences two sorts as shown below:
Using the bank example discussed above, assume the archive-withdrawal event is also received by
the Bank aggregate. The archive-withdrawal event theory is included in the Account class type
and, by the colimit operation, the Bank aggregate. To enable the Bank aggregate to receive the
archive-withdrawal event, a sort-axiom is used in the Bank specification to equivalence the Bank
sort of the aggregate with the Archive sort from the event theory as shown below.
Bank = Archive
Use of the sort axiom unifies the Bank sort and the Archive sort and thus the signatures of the
archive-withdrawal events defined in the event theory and the Bank aggregate become equivalent.
Communication from the aggregate to the components, or subcomponents, is much simpler.
Since the aggregate includes all the sorts, functions, and axioms of all of its components and
subcomponents via the colimit operations, the aggregate can directly reference those components
by the object-valued attributes declared either in itself or its components. Because an aggregate
is aware of its configuration, determining the correct object-valued attribute to use is not a
problem.
8 Discussion of Results and Future Efforts
8.1 Object Model
Our research establishes a formal mathematical representation for the object-oriented paradigm
within a category theory setting. In our theory-based object model, classes are defined as theory
presentations or specifications and the basic object-oriented concepts of inheritance, aggregation,
association, and inter-object communication are formally defined using category theory opera-
tions. While some work formalizing aspects of object-orientation exists [23],[8],[18],[24], [25], ours
is the first to formalize all the important aspects of object-orientation in a cohesive, computationally
tractable framework. In fact, our formalization of inheritance, aggregation, and association,
provides techniques for ensuring the consistency of object-oriented specifications based on the
composition process itself.
The completeness of our integrated model allows the capture of any object-oriented model as a
formal specification. Furthermore, the algebraic language O-Slang allows for straightforward
translation into existing algebraic languages such as Slang or Larch for further transformation
into executable code. Thus this model provides a bridge from existing informal CASE tools
to existing formal specification languages, tying the ease of use of the former to the technical
advantages of the latter.
8.2 Application of Object Model
To show the applicability of the theory-based object model, we developed a proof of concept
parallel-refinement specification acquisition system. This system used a commercially available,
OMT-based, object-oriented CASE tool to capture the informal specification. This included
graphical representation of the object, dynamic, and functional models along with textual input
in the form of method definitions and class-level constraints (neither of these have a graphical
format defined in OMT and are generally easier to define directly using first-order axioms).
The output from the user-interface was then parsed and translated into O-Slang based on
the theory-based object model. The translation from graphically-based input to O-Slang was
completely automated.
Two complete object-oriented domain models were developed using this system: a school
records database and a fuel pumping station. These domains were chosen to demonstrate the
wide diversity of domains, stressing both functional and dynamic aspects, supported by this
model. In total, over 37 classes, including 76 methods and operations, 89 attributes, 5 aggregates,
events, and 7 associations were specified. These domain models were sufficiently large and
diverse to demonstrate the application of the theory-based model to support realistic problem
domains.
8.3 Future Plans
The definition of theory-based models that can be mapped 1:1 to an informal representation
provides the necessary framework for a parallel refinement system for specification development
as shown in Fig. 1. Our theory-based object model allows for the development of a domain
model as a library of class theories. This O-Slang representation can next be transformed
into a Slang specification in a straightforward manner to allow the full use of the Specware
development system. The Specware system has already demonstrated the ability to generate
executable code from algebraic specifications. Thus the technology now exists to transform
informal object-oriented models to correct executable code.
While the class theories can be translated into code, the desired approach is to treat them
as a full domain model. From this, a specific specification can be developed for input to design
processing. Thus the next step is the development of the specification generation/refinement
subsystem in Fig. 1, an "elicitor-harvester" that will elicit requirements from a user by reasoning
over the domain model and harvesting components of the domain model to build the desired
specification. The ability to map between an informal model and the theory-based object model
will allow the user to interface with the system using a familiar informal representation, while
the formal model can support the reasoning needed to guide the user as well as assuring that the
harvested specification remains consistent with the constraints of the domain model.
Acknowledgments
This work has been supported by grants from Rome Laboratory, the National Security Agency,
and the Air Force Office of Scientific Research.
--R
"Report on a Knowledge-Based Software Assistant,"
"Software Engineering in the Twenty-First Century,"
"KIDS - A Semi-automatic Program Development System,"
"Transformational Approach to Transportation Scheduling,"
"Diagrams for Software Synthesis,"
"Strategies for Incorporating Formal Speci- fications,"
"A Formal Semantics for Object Model Diagrams,"
"Statecharts: A Visual Formalism for Complex Systems,"
"Teaching formal extensions of informal-based object-oriented analysis methodologies,"
Formal Transformations from Graphically-Based Object-Oriented Representations to Theory-Based Specification
Kestrel Institute
"Some Fundamental Algebraic Tools for the Semantics of Computation Part I: Comma Categories, Colimits, Signatures and Theories,"
"Specifying a Concept-recognition System in Z++,"
"Object-Z: An Object-Oriented Extension to Z,"
"A Comparative Description of Object-Oriented Specification Languages,"
"Specification in OOZE with Examples,"
"Unifying Functional, Object-Oriented and Relational Programming with Logical Semantics,"
"Introducing OBJ3,"
"Algebraic Specification: Syntax, Semantics, Structure,"
"Category Theory Definitions and Examples,"
"Data Abstraction and Hierarchy,"
"Practical Consequences of Formal Defintions of Inheritance,"
"An Algebraic Theory of Object-Oriented Systems,"
"Modelling Multiple Inheritance with Colimits,"
--TR
--CTR
Ana María Funes, Chris George, Formalizing UML class diagrams, UML and the unified process, Idea Group Publishing, Hershey, PA, | software engineering;domain models;transformation systems;formal methods |
344876 | Interactive control for physically-based animation. | We propose the use of interactive, user-in-the-loop techniques for controlling physically-based animated characters. With a suitably designed interface, the continuous and discrete input actions afforded by a standard mouse and keyboard allow for the creation of a broad range of motions. We apply our techniques to interactively control planar dynamic simulations of a bounding cat, a gymnastic desk lamp, and a human character capable of walking, running, climbing, and various gymnastic behaviors. The interactive control techniques allows a performer's intuition and knowledge about motion planning to be readily exploited. Video games are the current target application of this work. | Introduction
Interactive simulation has a long history in computer graphics, most
notably in flight simulators and driving simulators. More recently,
it has become possible to simulate the motion of articulated human
models at rates approaching real-time. This creates new opportunities
for experimenting with simulated character motions and be-
haviors, much as flight simulators have facilitated an unencumbered
exploration of flying behaviors.
Unfortunately, while the controls of an airplane or an automobile
are well known, the same cannot be said of controlling human
or animal motions where the interface between our intentions and
muscle actions is unobservable, complex, and ill-defined. Thus, in
order to create a tool which allows us to interactively experiment
with the dynamics of human and animal motions, we are faced with
the task of designing an appropriate interface for animators. Such
an interface needs to be sufficiently expressive to allow the creation
of a large variety of motions while still being tractable to learn.
Performance animation and puppetry techniques demonstrate
how well performers can manage the simultaneous control of a large number of degrees of freedom.
http://www.dgp.utoronto.ca/~jflaszlo/interactive-control.html
However, they are fundamentally
kinematic techniques; if considerations of physics are to be
added, this is typically done as a post-process. As a result, they do
not lend themselves well to giving a performer a sense of embodiment
for animated figures whose dynamics may differ significantly
from that of the performer. In contrast, the physics of our simulated
characters constrains the evolution of their motions in significant
ways.
We propose techniques for building appropriate interfaces for
interactively-controlled physically-based animated characters. A
variety of characters, motions, and interfaces are used to demonstrate
the utility of this type of technique. Figure 1 shows an example
interface for a simple articulated figure which serves as a starting
point for our work and is illustrative of how a simple interface
can provide effective motion control.
Figure 1: Interactive control for Luxo, the hopping lamp.
This planar model of an animated desk lamp has a total of 5
degrees of freedom (DOF) and 2 actuated joints, capable of exerting
joint torques. The motion is governed by the Newtonian laws
of physics, the internal joint torques, the external ground forces,
and gravity. The joint torques are computed using a proportional-derivative
(PD) controller, namely τ = k_s (θ_d - θ) - k_d dθ/dt, where θ_d is the desired
joint angle and k_s, k_d are the stiffness and damping gains. The
motions of the two joints are controlled by linearly mapping the
mouse position, (mx, my), to the two desired joint angles, θ_d.
Using this interface, coordinated motions of the two joints correspond
to tracing particular time-dependent curves with the mouse.
A rapid mouse motion produces a correspondingly rapid joint mo-
tion. With this interface one can quickly learn how to perform a
variety of interactively controlled jumps, shuffles, flips, and kips,
as well as locomotion across variable terrain. With sufficient prac-
tice, the mouse actions become gestures rather than carefully-traced
trajectories. The interface thus exploits both an animator's motor
learning skills and their ability to reason about motion planning.
Figure 2 shows an example of a gymnastic tumbling motion created
using the interface. This particular motion was created using
several motion checkpoints. As will be detailed later, these facilitate
correcting mistakes in executing particularly unstable or sensitive
motions, allowing the simulation to be rolled back to previous points
in time. Figure 3 shows a user-controlled motion over variable terrain
and then a slide over a ski jump, in this case performed without
Figure 2: Example of an interactively controlled back head-springs
and back-flip for Luxo.
Figure 3: Example of an interactively controlled animation, consisting
of hops across variable terrain and a carefully timed push
off the ski jump.
the use of any checkpoints. The sliding on the ski hill is modelled
by reducing the ground friction coefficient associated with the sim-
ulation, while the jump is a combined result of momentum coming
off the lip of the jump and a user-controlled jump action.
This initial example necessarily provokes questions about scala-
bility, given that for more complex characters such as a horse or a
cat, one cannot hope to independently control as many input DOF
as there are controllable DOF in the model. One possible solution
is to carefully design appropriate one-to-many mappings from
input DOF to output DOF. These mappings can take advantage of
frequently occurring synergetic joint motions as well as known symmetry
and phase relationships.
We shall also explore the use of discrete keystrokes to complement
and/or replace continuous input DOF such as that provided
by the mouse. These enrich the input space in two significant ways.
First, keys can each be assigned their own action semantics, thereby
allowing immediate access to a large selection of actions. This action
repertoire can easily be further expanded if actions are selected
based upon both the current choice of keystroke and the motion
context of the keystroke. Second, each keystroke also defines when
to perform an action as well as the selection of what action. The
timing of keystrokes plays an important role in many of our prototype
interfaces.
In its simplest form, our approach can be thought of as sitting
squarely between existing virtual puppetry systems and physically-based
animation. It brings physics to virtual puppetry, while bringing
interactive interfaces to physically-based animation. The system
allows for rapid, free-form exploration of the dynamic capabilities
of a given physical character design.
The remainder of this paper is structured as follows. Section 2 reviews
previous related work. Section 3 describes the motion primitives
used in our prototype system. Section 4 illustrates a variety of
results. Finally, section 5 provides conclusions and future work.
Previous Work
Building kinematic or dynamic motion models capable of reproducing
the complex and graceful movements of humans and animals
has long been recognized as a challenging problem. The
book Making Them Move[3] provides a good interdisciplinary
primer on some of the issues involved. Using physical simulation
techniques to animate human figures was proposed as early
as 1985[2]. Since then, many efforts have focussed on methods of
computing appropriate control functions for the simulated actuators
which will result in a desired motion being produced. Among the
more popular methods have been iterative optimization techniques
[8, 13, 19, 23, 29, 30], methods based on following kinematically-
specified reference trajectories [15, 16], suitably-designed state machines
[9], machine learning techniques[14], and hybrids[5, 12].
A number of efforts have examined the interactive control of
dynamically-simulated articulated figures[10, 11] or procedurally-
driven articulated figures[6]. The mode of user interaction used in
these systems typically involves three steps: (1) setting or changing
specific parameters (2) running the simulation, and (3) observing
the result. This type of observe and edit tools is well suited to producing
highly specific motions. However, the interaction is less
immediate than we desire, and it does not lend a performer a sense
of embodiment in a character.
Motion capture and virtual puppetry both allow for user-in-the-
loop kinematic control over motions[17, 21, 24], and have proven
effective for specific applications demanding real-time animation.
The use of 2d user gestures to specify object motion[4] is an
interesting early example of interactive computer mediated anima-
tion. Physical animatronic puppets are another interesting prece-
dent, but they cannot typically move in an unconstrained and dynamic
fashion in their environment. The system described in [7] is
a novel application of using a haptic feedback device for animation
control using a mapping which interactively interpolates between
a set of existing animations. Our work aims to expand the scope
of interactive real-time interfaces by using physically-based simu-
lations, as well as exploring interfaces which allow various degrees
of motion abstraction. Such interfaces could then perhaps also be
applied to the control of animatronic systems.
The work of Troy[25, 26, 27] proposes the use of manual manipulation
of several input devices to perform low-level control of the
movement of bipedal characters. The work documents experiments
with a variety of input devices and input mappings as having been
performed, although detailed methods and results are unfortunately
not provided for the manual control method. Nevertheless, this
work is among the first we know of that points out the potential of
user-in-the-loop control methods for controlling unstable, dynamic
motions such as walking.
Computer and video games offer a wide variety of interfaces
based both on continuous-input devices (mice, joysticks, etc.) and
button-presses and/or keystrokes. However, the current generation
of games do not typically use physically-based character anima-
tion, nor do they allow much in the way of fine-grained motion
control. Exceptions to the rule include fighting games such as Die
by the Sword[20] and Tekken[18]. The former allows mouse and
keyboard control of a physically-based model, limited to the motion
of the sword arm. The latter, while kinematic in nature, affords
relatively low-level control over character motions. Telerobotics
systems[22] are a further suitable example of interactive control
of dynamical systems, although the robots involved are typically
anchored or highly stable, and are in general used in constrained
settings not representative of many animation scenarios.
3 Motion Primitives
The motion primitives used to animate a character can be characterized
along various dimensions, including their purpose and their
implementation. In this section we provide a classification based
largely on the various interface methods employed in our example
scenarios.
The joints of our simulated articulated figures are all controlled
by the use of PD controllers. Motion primitives thus control motions
by varying the desired joint angles used by the PD controllers,
as well as the associated stiffness and damping parameters.
(Figure axes: time (instant, interval, sequence of intervals); structure (joint, limb, coordination); state/environment (robust to initial state, robust to future state, non-robust).)
Figure 4: Three dimensions of control abstraction.
PD controllers provide a simple, low-level control mechanism
which allows the direct specification of desired joint angles. Coping
with more complex characters and motions necessitates some
form of abstraction. Figure 4 shows three dimensions along which
such motion abstraction can take place. The interfaces explored
in this paper primarily explore abstractions in time and structure
by using stored control sequences and coordinated joint motions,
respectively. The remaining axis of abstraction indicates the desirability
of motion primitives which perform correctly irrespective of
variations in the initial state or variations in the environment. This
third axis of abstraction is particularly challenging to address in an
automated fashion and thus our examples rely on the user-in-the-
loop to perform this kind of abstraction.
3.1 Continuous Control Actions
The most obvious way to control a set of desired joint angles is using
an input device having an equivalent number of degrees of free-
dom. The mouse-based interface for the hopping lamp (Figure 1)
is an illustration of this. It is interesting to note for this particular
example that although cursor coordinates are linearly mapped
to desired joint angles, a nonlinearity is introduced by the acceleration
features present in most mouse-drivers. This does not seem
to adversely impact the continuous control. In general, continuous
control actions are any mappings which make use of continuously
varying control parameters, and are thus not limited to direct mappings
of input DOF to output DOF.
The availability of high DOF input devices such as data-gloves
and 6 DOF tracking devices means that the continuous control of
input DOF can potentially scale to control upwards of 20 degrees of
freedom. However, it is perhaps unreasonable to assume that a performer
can learn to simultaneously manipulate such a large number
of DOF independently, given that precedents for interfaces in classical
puppetry and virtual puppetry are typically not this ambitious.
3.2 Discrete Control Actions
Discrete actions, as implemented by keystrokes in our interfaces, allow
for an arbitrary range of action semantics. Action games have
long made extensive use of keystrokes for motion specification, although
not at the level of detail that our interfaces strive to provide.
The following list describes the various action semantics used in
prototype interfaces, either alone or in various combinations. Some
of the actions in this list refer directly to control actions, while others
serve as meta-actions in that they modify parameters related to
the simulation and the interface itself.
set joint position (absolute) Sets desired position of joint or a set
of joints to a prespecified value(s). If all joints are set simultaneously
in order to achieve a desired pose for the figure, this
becomes a form of interactive dynamic keyframing.
adjust joint position (relative) Changes the desired position of a
joint or set of joints, computed relative to current desired joint
positions.
release Causes a hand or foot to grasp or release a nearby
point (e.g., ladder rung) or to release a grasped point.
select IK target Selects the target point for a hand or foot to reach
toward using a fixed-time duration IK trajectory, modelled
with a Hermite curve. The IK solution is recomputed at every
time step.
initiate pose sequence Initiate a prespecified sequence of full or
partial desired poses.
select next control state Allows transitions between the states of
a finite-state machine; useful for modelling many cyclical or
otherwise well-structured motions, leaving the timing of the
transitions to the performer.
rewind, reset state Restarts the simulation from a previous state
checkpoint.
set joint stiffness and damping Sets the stiffness and damping
parameters of the PD joint controllers to desired values.
select control mode Chooses a particular mapping for a continuous
input device, such as which joints the mouse controls.
set simulation rate Speeds up or slows down the current rate of
simulation; helps avoid a motion happening 'too fast' or 'too
slow' to properly interact with it.
set state checkpoint Stores the system state (optionally during re-
play/review of a motion) so that simulation may be reset to
the same state later if desired.
modify physical parameters Effects changes to simulation parameters
such as gravity and friction.
toggle randomized motion Begins or halts the injection of small
randomized movements, which are useful for introducing motion
variation.
Our default model for arbitration among multiple actions which
come into conflict is to allow the most recent action to pre-empt any
ongoing actions. Ongoing actions such as IK-based trajectories or
pose sequences are respectively preempted only by new IK-based
trajectories or pose sequences.
3.3 State Machines
Given the cyclic or strongly structured nature of many motions, state
machine models are useful in helping to simplify the complexities
of interactive control. For example, they allow separate actions
such as 'take left step' and 'take right step' to be merged into a
single action 'take next step', where a state machine provides the
necessary context to disambiguate the action. As with many other
animation systems, state machines serve as the means to provide
a priori knowledge about the sequencing of actions for particular
classes of motion.
4 Implementation and Results
Our prototype system is based on a number of planar articulated
figures. The planar dynamics for these figures can easily be computed
at rates suitable for interaction (many in real-time) on most
current PCs and offer the additional advantage of having all aspects
of their motion visible in a single view, thereby providing unobstructed
visual feedback for the performer. Our tests have been
conducted primarily on a 450 Mhz Mac and a 366 Mhz PII PC.
While hard-coded interfaces were used with the original prototyping
system behind many of our results, our more recent system uses
Tcl as a scripting language for specifying the details of any given
interface. This facilitates rapid iteration on the design of any given
interface, even potentially allowing changes during the course of a
simulation.
4.1 Luxo Revisited
Using the continuous-mode mouse-based interface shown in Figure
1, the desklamp is capable of executing a large variety of hops,
back-flips, a kip manoeuvre, head-stands, and motion across variable
terrain. This particular interface has been tested on a large number
of users, most of whom are capable of performing a number of the
simpler movements within 10-15 minutes, given some instruction
on using the interface. Increasing the stiffness of the joints or scaling
up the mapping used for translating mouse position into desired
joint angles results in the ability to perform more powerful, dynamic
movements, although this also makes the character seem rather
too strong during other motions.
We have additionally experimented with a keystroke-based interface
using 14 keys, each key invoking a short sequence of pre-specified
desired poses of fixed duration. The various key actions
result in a variety of hops and somersaults if executed from the appropriate
initial conditions. The repertoire of action sequences and
associated keystrokes are given in the Appendix. The animator or
performer must choose when to execute keystrokes and by doing so
selects the initial conditions. The initiation of a new action overrides
any ongoing action.
The keystroke-based interface was created after gaining some
experience with the continuous-mode interface. It provides an increased
level of abstraction for creating motions and is easier to
learn, while at the same time trading away some of the flexibility
offered by the continuous-mode interface. Lastly, user-executed
continuous motions can be recorded and then bound to a keystroke.
4.2 Animating a Cat
Experiments with a planar bounding cat and a planar trotting cat are
a useful test of scalability for our interactive interface techniques.
Figure 5 illustrates the planar cat as well as sets of desired angles
assumed by the legs for particular keystrokes. In one control mode,
the front and back legs each have 6 keys assigned to them, each of
which drives the respective leg to one of the 6 positions illustrated
in the figure. The keys chosen for each pose are assigned a spatial
layout on the keyboard which reflects the layouts of the desired poses
shown in the figure. An additional pose is provided which allows
each leg to exert a larger pushing force than is otherwise available
with the standard set of 6 poses. This can be achieved by temporarily
increasing the stiffness of the associated leg joints, or by using
a set of hyperextended joint angles as the desired joint positions.
We use the latter implementation. This seventh overextended pose
is invoked by holding the control key down when hitting the key
associated with the backwards extended leg pose.
The animation sequence shown in Figure 6 was accomplished
using 12 checkpoints. A checkpoint lets the performer restart the
(Figure content: the six rear-leg poses are bound to keys 'q' 'w' 'e' / 'a' 's' 'd' and the six fore-leg poses to keys 'u' 'i' 'o' / 'j' 'k' 'l', with the key layout mirroring the pose layout.)
Figure 5: Parameterization of limb movements for cat.
simulation from a given point in time, allowing the piecewise interactive
construction of sequences that would be too long or too
error-prone to perform successfully in one uninterrupted attemp-
t. Checkpoints can be created at fixed time intervals or at will by
the performer using a keystroke. Some of the sequences between
checkpoints required only 2 or 3 trials, while particularly difficult
aspects of the motion required 10-20 trials, such as jumping the
large gap and immediately climbing a set of steps (second-last row
of Figure 6).
The cat weighs 5 kg and is approximately 50 cm long, as measured
from the tip of the nose to the tip of the tail. Its small size
leads to a short stride time and requires the simulation to be slowed
down considerably from real-time in order to allow sufficient reaction
time to properly control its motions. The cat motions shown
in Figure 6 were controlled using a slowdown factor of up to 40,
which allows for 10-15 seconds to control each bound.
It is important to note that there is a 'sweet spot' in choosing the
speed at which to interact with a character. Important features of
the dynamics become unintuitive and uncontrollable if the interaction
rate is either too slow or too fast. When the simulation rate
is too fast, the user is unable to react quickly enough to correct errors
before the motion becomes unsavable. When the motion is too
slow, the user tends to lose a sense of the necessary rhythm and timing
required to perform the motion successfully and lacks sufficient
immediate feedback on the effects of the applied control actions.
For basic bounding, a slowdown factor around 10, giving a bound
time of 2-3 seconds is sufficient. For more complex motions such
as leaping over obstacles, a factor of up to 40+ is required.
Figure 7 shows a trotting motion for a planar 4-legged cat mod-
el. The trotting was interactively controlled using only the mouse.
The x, y mouse coordinates are used to linearly interpolate between
predefined poses corresponding to the six leg poses shown in Figure
5. The poses are laid out in a virtual 2 × 3 grid and bilinear
interpolation is applied between the nearest 4 poses according to
the mouse position. The simplest control method assumes a fixed
phase relationship among the 4-legs, allowing the mouse to simultaneously
effect coordinated control of all legs. A more complex
method uses the same mapping to control one leg at a time. This
latter method met with less success, although it was not pursued at
length. The cat model is comprised of articulated links, which
makes it somewhat slow to simulate, given that we currently do not
employ O(n) forward dynamics methods.
4.3 Bipedal Locomotion
We have experimented with a number of bipedal systems which
are capable of more human-like movements and behaviors such as
walking and running. For these models, we make extensive use
Figure 6: Cat bounding on variable terrain using piecewise interactive
key-based control. The frames shown are manually selected for
visual clarity and thus do not represent equal samples in time. The
arrows indicate when the various checkpoints were used, denoting
the position of the shoulders at the time of the checkpoint.
Figure 7: Cat trot using continuous mouse control. The animation
reads from top-to-bottom, left-to-right. The first seven frames represent
a complete motion cycle. The frames are equally spaced in
time.
of a hybrid control technique which mixes continuous and discrete
input actions in addition to purely discrete methods similar to those
used with the cat and Luxo models. We have also experimented
with a wide variety of other bipedal motions in addition to walking
and running, including a number of motions such as a long jump
attempt and a fall-recovery sequence that are readily explored using
interactive control techniques.
Figure 8 shows the interface for an interactive walking control
experiment. The mouse is used to control the desired angles for the
hip and knee joints of the swing leg. A keypress is used to control
when the exchange of stance and swing legs occurs and therefore
changes the leg currently under mouse control. The stance leg assumes
a straight position throughout the motion. The bipedal figure
has human-like mass and dimensions, although it does not have a
separate ankle joint. In our current implementation, joint limits are
not enforced, although such constraints can easily be added to the
simulation as desired.
An example of the resulting motion is shown in Figure 9. With
some practice, a walk cycle can be sustained indefinitely. With significant
practice, the walk can be controlled in real-time, although
a simulation speed of 2-3 times slower than real-time provides a
(Figure labels: left leg, right leg.)
Figure 8: Interface for interactive control of bipedal walking.
sweet-spot for consistent interactive control. It is also possible to
choose a particular (good) location for the mouse, thus fixing the
desired joint angles for the swing leg, and achieve a marching gait
by specifying only the time to exchange swing and stance legs
by pressing a key. This marching motion is quite robust and is able
to traverse rugged terrain with reasonable reliability. Yet another
mode of operation can be achieved by automatically triggering the
swing-stance exchange when the forward lean of the torso exceeds
a fixed threshold with respect to the vertical. With this automatic
mechanism in place, it is then possible to transition from a marching
walk to a run and back again by slowly moving only the mouse
through an appropriate trajectory.
Figure 9: Bipedal walking motion
Figure 10 shows the results of a biped performing a long-jump
after such an automatic run. This particular biped dates from earlier
experiments and is smaller in size and mass than the more anthropomorphic
biped used for the walking experiments. This motion
makes use of the same interface as for the bipedal walking motion,
shown in Figure 8. A slowdown factor of up to 80 was necessary
because of the small size of the character, as well as the precision
required to achieve a final landing position having the legs extended
and the correct body pitch. Approximately 20 trials are required to
achieve a recognizable long jump, each beginning from a motion
checkpoint one step before the final leap. However, we anticipate
that the interface can also be improved upon significantly by using
a more reasonable default behavior for the uncontrolled leg.
Figure 10: A long jump attempt.
4.4 Bipedal Gymnastics
Several other experiments were carried out using the bipedal figures
with continuous-mode mouse control and one or more keys to select
the mapping of the continuous input onto the model's desired joint
angles. The basic types of motion investigated include a variety
of climbing modes both with the bipedal model "facing" the view
plane and in profile in the view plane, and swinging modes both
with arms together and separated. Nearly every mapping for these
control modes uses the mouse y coordinate to simultaneously drive
the motion of all limb joints (hips, knees, shoulders and elbows)
in a coordinated fashion and the mouse x coordinate to drive the
bending of the waist joint to alter the direction of the motion.
The control modes differ from each other primarily in the particular
symmetries shared between the joints. Figure 11 illustrates
two forms of symmetry used for climbing "gaits" similar in pattern
to those of a quadruped trotting and bounding. The mapping
of the mouse x coordinate onto the waist joint is also shown. The
control modes can produce interactive climbing when coupled with
a state machine that grasps and releases the appropriate hands and
feet each time a key is pressed (assuming that the hands and feet are
touching a graspable surface). Swinging modes perform in a similar
manner but use the mouse x or y coordinate to swing the arms
either back-and-forth at the shoulder or in unison and can make use
of either graspable surfaces or ropes that the user can extend and retract
from each hand on demand. When used on the ground without
grasping, these same modes of interaction can produce a range of
gymnastic motions including handstands and different types of flips
and somersaults, in addition to a continuously controlled running
motion. Among the various interesting motions that are possible
is a backflip done by running off a wall, a gymnastic kip from a
supine position to a standing position and a series of giant swings as
might be performed on a high bar. While not illustrated here, these
motions are demonstrated in the video segments and CD-ROM animations
associated with this paper.
Figure 11: Control modes useful for climbing and gymnastics.
Left-to-right, top-to-bottom: "bound" pattern climbing; "trot" pattern
climbing; directional control.
4.5 Using IK Primitives
Figures 12 and 13 illustrate interactively-controlled movements on
a set of irregularly-spaced monkeybars and a ladder, respectively.
These are movements which require more precise interactions of
the hands and feet with the environment than most of the other motions
discussed to date. To deal with this, we introduce motion
primitives which use inverse kinematics (IK) to establish desired
joint angles.
In general, IK provides a rich, abstract motion primitive that
can appropriately hide the control complexity inherent in many
semantically-simple goal-directed actions that an interactive character
might want to perform. This reduces the associated learning
curve the user faces in trying to discover how to perform the ac-
tion(s) from first principles while still taking good advantage of the
user's intuition about the motion.
Figure 12: Traversing a set of irregularly-spaced monkeybars.
The interface for monkey-bar traversal consists of keystrokes and
a state machine. IK-based trajectories for the hands and feet are invoked
on keystrokes. The hand-over-hand motion across the mon-
keybars is controlled by keys which specify one of three actions for
the next release-and-regrasp motion. Each action causes the hand
to release its grasp on the bar and move towards the previous bar,
the current bar, or the following bar. These actions can also be invoked
even when the associated hand is not currently grasping a bar,
which allows the figure to recover when a grasp manoeuvre fails due
to bad timing. The interface does not currently safeguard against
the premature execution of a regrasp motion with one hand while
the other has not yet grasped a bar. The character will thus fall in
such situations. A grasp on a new bar is enacted if the hand passes
close to the target bar during the reaching action, where 'close' is
defined to be a fixed tolerance of 4 cm in our example. Controlling
the motion thus involves carefully choosing the time in a swing at
which a new reach-and-grasp action should be initiated, as well as
when to pull up with the current support arm. More information
about the particulars of the interface is given in the Appendix.
The ladder climbing example is made up of a number of keys
which serve to position the body using the hands and feet which
are in contact with the ladder, as well as a key to initiate the next
limb movement as determined by the state machine. The details
of the interface are given in the appendix, as well as the specific
sequence used to create Figure 13. Note, however, that
the keystroke sequence by itself is insufficient to precisely recreate
the given motion, as the timing of each keystroke is also important
in all the motions discussed.
Figure 13: Climbing a ladder.
Finally, Figure 14 illustrates a standing up motion, followed by
a few steps, a forward fall, crouching, standing up, and, lastly, a
backwards fall. A set of keystrokes serves as the interface for this
scenario, as documented in the appendix.
Figure 14: Fall recovery example.
5 Conclusions and Future Work
We have presented prototype interfaces for the interactive control of
physically-based character animation. The techniques illustrate the
feasibility of adding physics to virtual puppetry, or, alternatively,
adding interactive interfaces to physically-based animation. They
allow human intuition about motions to be exploited in the interactive
creation of novel motions.
The results illustrate that dynamic motions of non-trivial articulated
figure models can be reasonably controlled with user-in-the-
loop techniques. Our experiments to date have focussed on first
achieving a large action repertoire for planar figures, with the goal
of using this experience as a suitable stepping stone towards 3D
motion control. While it is not clear that the interaction techniques
will scale to the type of nuanced 3D motion control needed for foreground
character animation in production animation, the interfaces
could be readily applied to a new generation of physically-based
interactive video games. The interfaces provide a compelling user
experience - in fact, we found the interactive control experiments
to be quite addictive.
One of the drawbacks of using interactive control is the effort required
in both designing an appropriate interface and then learning
to effectively use the interface. These two nested levels of experimentation
necessitate a degree of expertise and patience. We are
optimistic that tractable interfaces can be designed to control stylis-
tic variations of complex motions and that animators or game players
can learn these interfaces with appropriate training and practice.
Our work has many directions which require further investiga-
tion. We are still far from being able to reproduce nuanced dynamic
motions for 3D human or animal characters[1, 28]. The large
variety of high-DOF input devices currently available offers a possible
avenue of exploration. Haptic devices may also play a useful
role in constructing effective interfaces[7]. Nuanced performance
may potentially require years of training, as it does for other arts
(key-frame animation, dancing, music) and sports. We can perhaps
expect the instruments and interfaces required for composing
motion to undergo continual evolution and improvement. A large
community of users would offer the potential for a rapidly evolving
set of interfaces for particular motions or characters.
Many dynamic motions would benefit from additional sensory
feedback, such as an animated update of the location of the center
of mass[25]. Going in the opposite direction, one could use an
interactive environment like ours to conduct experiments as to the
minimal subset of sensory variables required to successfully control
a given motion. Questions regarding the transfer of skills between
interfaces and between character designs are also important to address
if broad adoption of interactive control techniques is to be
feasible.
The derivation of high-level abstractions of motion control is of
interest in biomechanics, animation, and robotics. The training data
and insight gained from having a user-in-the-loop can potentially
help in the design of autonomous controllers or high-level motion
abstractions. A variety of hybrid manual/automatic control methods
are also likely to be useful in animation scenarios.
Beyond its application to animation, we believe the system also
has potential uses in exploring deeper issues involved in controlling
motions for biomechanics, robotics, or animation. What
constitutes a suitable motor primitive or 'motor program'? How
can these primitives be sequenced or overlaid in order to synthesize
more complex motions? In what situations is a particular motion
primitive useful? Our experimental system can serve as a tool
towards exploring these questions by allowing interactive control
over the execution and sequencing of user-designed motion-control
primitives.
Acknowledements: We would like to thank all of the following
for their help: the anonymous reviewers for their comments; the
Imager lab at UBC for hosting the second author during much of the
work on this paper; and David Mould for suggestions and assistance
investigating the automatic bipedal marching and running motions.
This work was supported by grants from NSERC and CITO.
A
Appendix
Details of keystrokes interface for Luxo:
small hop
l medium hop
large hop
high backward hop
small backward hop
y back somersault
s sitting to upright (slow)
d standing to sitting / LB to sitting
f LB to standing (small height) / standing to sitting
e LB to standing (medium height) / standing to LB
w standing to sitting / sitting to LB / LB to standing
q big jump from base to LB / fwd somersault from LB
a LB to standing with small jump
single action that performs either A or B
depending on initial state
lying on back
Interface and keystrokes for monkeybar example:
a grasp rung previous to CR
s grasp CR
d grasp rung following CR
f grasp rung two rungs following CR
q release with both hands, relax arms
e pull up using support arm
R reset to initial state
toggle defn of support/grasp arm
closest rung
Interface and keystrokes for ladder climbing example:
q release both hands, fall from ladder
f grasp two rungs higher with next grasp arm
h shift body up
b lower body down
pull body in with arms
push body out with arms
push body out with legs
pull body in with legs
R reset to initial state
Interface and keystrokes for the fall recovery example:
ST, prepare for forwards fall
prepare for backwards fall
t HK, step back with left arm
y HK, step back with right arm
q HK, shift body back
w HK, bend elbows, prepare for push up
W HK, straighten elbows, push up
1 CR, straighten hips, knees, ankles
pose towards being upright
3 CR, assume final upright pose
c ST, step backwards with left leg
v ST, step forwards with left leg
b ST, step backwards with right leg
forwards with right leg
lean back at hips
R reset to initial state
checkpoint current state
L restart at checkpoint state
standing
on hands and knees
crouched
--R
Emotion from mo- tion
The dynamics of articulated rigid bodies for purposes of animation.
Making Them Move.
Interactive computer-mediated animation
Interactive animation of personalized human locomotion.
Using Haptic Vector Fields for Animation Motion Control.
Further experience with controller-based automatic motion synthesis for articulated figures
Animating human athletics.
Interactive control of biomechanical animation.
Techniques for interactive manipulation of articulated bodies using dynamic analysis.
Interactive design of computer-animated legged animal motion
Automated learning of muscle-actuated locomotion through control abstraction
Animating human locomotion with inverse dynamics.
Understanding Motion Capture for Computer Animation and Video Games.
Spacetime constraints revisited.
Die by the sword.
Dynamic digital hosts.
Evolving virtual creatures.
Computer puppetry.
Dynamic Balance and Walking Control of Biped Mechanisms.
Interactive simulation and control of planar biped walking devices.
Fourier principles for emotion-based human figure animation
Virtual wind-up toys for animation
--TR
Interactive design of 3D computer-animated legged animal motion
Goal-directed, dynamic animation of human walking
Techniques for interactive manipulation of articulated bodies using dynamic analysis
Telerobotics, automation, and human supervisory control
Sensor-actuator networks
Spacetime constraints revisited
Evolving virtual creatures
Automated learning of muscle-actuated locomotion through control abstraction
Animating human athletics
Fourier principles for emotion-based human figure animation
Further experience with controller-based automatic motion synthesis for articulated figures
Emotion from motion
NeuroAnimator
Understanding Motion Capture for Computer Animation and Video Games
Computer Puppetry
User-Controlled Physics-Based Animation for Articulated Figures
Using Haptic Vector Fields for Animation Motion Control
--CTR
Ari Shapiro , Petros Faloutsos, Interactive and reactive dynamic control, ACM SIGGRAPH 2005 Sketches, July 31-August
S. C. L. Terra , R. A. Metoyer, Performance timing for keyframe animation, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, August 27-29, 2004, Grenoble, France
Rubens Fernandes Nunes , Creto Augusto Vidal , Joaquim Bento Cavalcante-Neto, A flexible representation of controllers for physically-based animation of virtual humans, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Peng Zhao , Michiel van de Panne, User interfaces for interactive control of physics-based 3D characters, Proceedings of the 2005 symposium on Interactive 3D graphics and games, April 03-06, 2005, Washington, District of Columbia
Wai-Chun Lam , Feng Zou , Taku Komura, Motion editing with data glove, Proceedings of the 2004 ACM SIGCHI International Conference on Advances in computer entertainment technology, p.337-342, June 03-05, 2005, Singapore
Mira Dontcheva , Gary Yngve , Zoran Popovi, Layered acting for character animation, ACM Transactions on Graphics (TOG), v.22 n.3, July
Matthew Thorne , David Burke , Michiel van de Panne, Motion doodles: an interface for sketching character motion, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Matthew Thorne , David Burke , Michiel van de Panne, Motion doodles: an interface for sketching character motion, ACM SIGGRAPH 2006 Courses, July 30-August 03, 2006, Boston, Massachusetts
T. Igarashi , T. Moscovich , J. F. Hughes, Spatial keyframing for performance-driven animation, Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 29-31, 2005, Los Angeles, California
T. Igarashi , T. Moscovich , J. F. Hughes, Spatial keyframing for performance-driven animation, ACM SIGGRAPH 2006 Courses, July 30-August 03, 2006, Boston, Massachusetts
Michael Neff , Eugene Fiume, Modeling tension and relaxation for computer animation, Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 21-22, 2002, San Antonio, Texas
J. McCann , N. S. Pollard , S. Srinivasa, Physics-based motion retiming, Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation, September 02-04, 2006, Vienna, Austria
Slvio Csar Lizana Terra , Ronald Anthony Metoyer, A performance-based technique for timing keyframe animations, Graphical Models, v.69 n.2, p.89-105, March, 2007
C. Karen Liu , Zoran Popovi, Synthesis of complex dynamic character motion from simple animations, ACM Transactions on Graphics (TOG), v.21 n.3, July 2002
C. Karen Liu , Aaron Hertzmann , Zoran Popovi, Learning physics-based motion style with nonlinear inverse optimization, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Pankaj K. Agarwal , Leonidas J. Guibas , Herbert Edelsbrunner , Jeff Erickson , Michael Isard , Sariel Har-Peled , John Hershberger , Christian Jensen , Lydia Kavraki , Patrice Koehl , Ming Lin , Dinesh Manocha , Dimitris Metaxas , Brian Mirtich , David Mount , S. Muthukrishnan , Dinesh Pai , Elisha Sacks , Jack Snoeyink , Subhash Suri , Ouri Wolefson, Algorithmic issues in modeling motion, ACM Computing Surveys (CSUR), v.34 n.4, p.550-572, December 2002 | user interfaces;physically based animation |
344985 | Sequential Regularization Methods for Simulating Mechanical Systems with Many Closed Loops. | The numerical simulation problem of large multibody systems has often been treated in two separate stages: (i) the forward dynamics problem for computing system accelerations from given force functions and constraints and (ii) the numerical integration problem for advancing the state in time. For the forward dynamics problem, algorithms have been given with optimal, linear complexity in the number of bodies, in case the system topology does not contain many closed loops.But the interaction between these two stages can be important. Using explicit time integration schemes, we propose a sequential regularization method (SRM) that has a linear complexity in the number of bodies per time step, even in the presence of many closed loops. The method also handles certain types of constraint singularity. | Introduction
There has been a growing interest in the development of more efficient algorithms for
multibody dynamics simulations. The increase in size and complexity of spacecraft
and robotic systems is one motivation for this development; another is physically-based
modeling in computer graphics. The numerical simulation process has been
Institute of Applied Mathematics and Department of Computer Science, University of British
(ascher@cs.ubc.ca). The work of this author was
partially supported under NSERC Canada Grant OGP0004306.
y Institute of Applied Mathematics, University of British Columbia, Vancouver, B.C., Canada
V6T 1Z2. New address: Program in Scientific Computing and Computational Math, Stanford
University, Durand 262, Stanford, CA 94305-4040, USA (plin@sccm.stanford.edu).
typically treated as two separate stages. The first stage consists of the forward dynamics
problem for computing system accelerations, given the various constraints,
torque and force functions. For tree-structured multibody systems, algorithms have
been proposed with optimal O(n) complexity, where n is the total number of rigid
bodies in the system (see e.g. [10, 13, 18, 6, 21]). They have also been extended
to cope with systems with a small number of closed loops compared with the total
number n of links [10, 18]. For a system with m closed loops (m ! n), a typical
complexity of O(n obtained using a cut-loop technique. But it appears to
be hard to find an O(n) algorithm for chains with a large number (e.g.
closed loops.
The second stage of the simulation algorithm design addresses the numerical integration
problem for advancing the state in time, obtaining generalized body positions
and velocities from the computed accelerations. Explicit or implicit time discretization
schemes can generally be used.
While these two stages are usually treated separately, there are situations in which
the specific treatment of one affects the other, so a global, unified view is beneficial
(e.g. [6, 1]). In this paper we use such a global view of the simulation process and
devise a method which requires O(n) operations per time step even in the presence of
many closed loops. Specifically, we propose using a sequential regularization method
(SRM) [4, 5] for this purpose, combined with an explicit time integration scheme. The
method produces iterates which get arbitrarily close to the solution of the discretized
differential system, and it also handles certain types of constraint singularity.
The mathematical modeling of constrained multibody systems yields differential-algebraic
equations (DAEs) of index 2 or 3 [9, 11]. For tree-structured systems (i.e.
no closed loops), one can formulate the model in terms of a minimal set of relative
solution coordinates, obtaining a system of ordinary differential equations (ODEs),
see e.g. [14, 10]. Some existing commercial software packages (e.g. SD/FAST 1 ) utilize
this approach. Forward dynamics algorithms of complexity O(n) can be interpreted
then as imbedding the ODE in a DAE, at a given time, and eliminating some of the
unknowns locally in the larger but sparser system [6].
In this work, however, we consider the equations of motion in descriptor form (see,
e.g., [9, 14]). These are formulations in non-minimal (redundant) sets of coordinates
which yield an often simpler, albeit larger, DAE, even in the tree-structured case.
This DAE is typically treated by differentiating the constraints to the acceleration
level and using one of the well-known stabilization techniques (see, e.g. [2, 3]). An
O(n) forward dynamics algorithm for this formulation is recalled in x2, following [18].
A system with closed loops may now be considered as being composed of a tree-structured
system plus a set of loop-closing constraints. The latter are treated using
an SRM technique. The method is described in x3, where we prove that the number
of operations needed per time step when using an explicit time discretization scheme
1 SD/FAST is a trademark of Symbolic Dynamics, Inc., 561 Bush Street, Mountain View, CA
USA.
remains O(n), for any m - n. The method also handles certain types of constraint
singularities.
Finally, in x4 we demonstrate our algorithm on two closed-loop chain examples.
Algorithms with optimal computational complexity
for tree-structured problems
2.1 General multibody systems
Consider an idealized multibody system consisting of rigid bodies and point masses
with a kinematic tree-structure. We use redundant, world coordinates p for the positions
of the system, i.e. for describing the position and orientation of each individual
body. The set of feasible positions, which correspond to physically possible geometric
configurations, is given by holonomic constraints,
Differentiations of the constraints with respect to time yield corresponding constraints
for the velocities -
and the accelerations -
Here G(p) is the constraint Jacobian matrix g 0 (p). The Euler-Lagrange equations are
In combination with the constraint equations at the acceleration level (2.3) the Euler-Lagrange
equations form a DAE of index 1 for the differential variables p, v (= -
and the algebraic variables -:
The constraints on the position level (2.1) and on the velocity level
define an invariant manifold for (2.5).
It is well-known that simply simulating (2.5) numerically may cause severe drift
off the constraints manifold, manifesting a mild instability in this formulation. This
Figure
2.1: An example of tree-structured chains
phenomenon can be prevented by using stabilization techniques (e.g. invariant stabi-
lization, Baumgarte's stabilization and projection methods [2, 3, 7]) or regularization
techniques (e.g. penalty and sequential regularization methods [17, 19, 12, 15, 16,
4, 5, 20]) during the numerical integration. In this paper, we assume that such
a stabilization technique has already been applied and is included in (2.5), and we
concentrate on aspects of solving this system.
2.2 Algorithms for tree-structured problems
Consider the case of a multibody system with a kinematic tree structure. As in [18],
consider a graph whose nodes correspond to the joints and whose edges represent the
bodies in the system (see, e.g. Figure 2.1). When there are no closed loops, the graph
consists of trees. In each tree one joint is singled out as the root. Every other joint
then has a unique father in the tree, which is its neighboring body on the path to
the root. We introduce a node "0" as the father of the roots (if there is a fixed joint,
we usually number it as "0"). This gives one tree, and we label its nodes as follows:
joints are numbered from 1 to n such that the label of a joint is always greater than
that of its father. Bodies (links) are numbered such that a body connecting a father
joint and a son joint has the number of the son. For example, in Figure 2.1, joint
is the father of joint 7, and these two joints are connected by body 7.
For this kind of tree-structured systems, the mass matrix M is symmetric positive
definite and block-diagnal with blocks which are symmetric
positive definite. The constraint equations in (2.3) of a body connecting two joints k
father of k are of the form (see [18]):
where
is assumed to have full row rank
for each k. Then, as analyzed by many articles (see, e.g. [10, 13, 18, 21]), a recursive
algorithm can be derived to obtain an O(n) algorithm to invert the left-hand matrix
in (2.5b).
To explain the algorithm, we start at a terminal joint i.e. one which has no
Setting -
, we can obtain
Next, consider the equations for a joint j which has terminal sons k:
k=sons of j
father of j. For each son, (2.9) holds. Substitution into (2.10) yields:
where
k=sons of j
k=sons of j
The system (2.11) has the same form as (2.8), so a recursive algorithm results:
Algorithm
step 1: Climb down the tree (towards the root) recursively, repeatedly forming
f j by (2.12).
step 2: Starting from the root of the tree, solve each local system (2.11) for u j
(with climbing up the tree recursively.
Note that in Step 1, we replace the original problem by n local problems (2.11).
This algorithm has O(n) complexity because all operations are local for one body
and we only climb down and up the tree once. The work for different subtrees can
be done in parallel as well.
The result of the forward dynamics algorithm just described is an expression for
v in terms of
and p. This is an ODE system that must be integrated in
time. Note that the dimension of each of u; v and p in 3D is 6n. This is contrasted
with the treatment using minimal coordinates, where u; v or p may have dimension
as small as n. However, the saving in the latter formulation is chiefly in the time
advancement, and in a faster forward dynamics algorithms for small n, but not in the
O(n) forward dynamics algorithm (see [6]).
3 Algorithms for systems with many closed loops
We consider now multibody systems with loops. The problem can be seen as a tree-structured
system plus some extra loop-closing constraints in the form of
where x k and x j represent the generalized positions of joints k and j, respectively,
and are components of p, and m represents the number of loops. We can write (3.1)
as
where D is an m\Thetan constant matrix which has a full row rank. In the closed-loop case,
we need to invert a matrix -
instead of the matrix
in (2.5b), where -
(D 0). Block Gaussian elimination gives
where the Schur complement -
D T can be obtained using the O(n) algorithm
described in the last section. Using usual techniques for inverting -
, an O(n+
results which is satisfactory when m is small (cf. [18]).
For problems with a large number of loops, it appears difficult to extend this
O(n) algorithm (cf. [10]). In particular, no O(m) algorithm for inverting the Schur
complement is available. Also, we have found no discussion in the literature about
the performance of the method when the constraint Jacobian matrix
is rank
deficient. This situation often happens in closed-loop chains, for example the slider-crank
problem [4, 5] and the first example in x4. The sequential regularization method
proposed in [4, 5] provides a possibility to address these two challenges.
3.1 The algorithm
We now write down the Euler-Lagrange equations for a multibody system with loops
as
(constraints corresponding to a tree structure) ; (3.3c)
where the number of rows of D may be large. We assume that G and D have full
rank. This assumption is generally true for chain problems (see examples in x4). We
already have an O(n) algorithm for the tree-structured problem, i.e. for the system
(3.3a), (3.3b) (without D T -) and (3.3c). So, to develop an O(n) algorithm for the
tree-structured part of the problem we keep the structure of (3.3a), (3.3b) and (3.3c).
For the loop-closing constraints, one possibility is a penalty method:
and
is the penalty parameter. However, it is well known that we have
to use implicit integration schemes for a system like (3.4) since ffl has to be very
small and then (3.4) is a stiff system. Therefore we have to invert a matrix like
for a discretization stepsize h
D T D is
not block diagonal any more, the previous algorithm will be difficult to apply.
In [4, 5] we proposed a new, sequential regularization method (SRM), which is
a modification of the usual penalty method and allows us to use explicit schemes to
solve the regularized problem. Hence, SRM combined with explicit time discretization
makes it possible to obtain an O(n) algorithm for problems with many loops.
Applying the SRM to problem (3.3) to treat the loop-closing constraints, we obtain
a new algorithm: given - 0 (t), for such that
and
If we apply a stabilization method (e.g. Baumgarte's stabilization [7], or the
first stage of the invariant stabilization [2, 3], or projection methods [2]) first for the
constraints (3.3c), then the following system must be considered,
and (3.6c) becomes:
An even simpler algorithm was suggested in [5] for general multibody systems
without singularities. For (3.3), it has the form:
and
This form has O(n) computational complexity and it is simpler than the form (3.6)
since we only need to invert the block diagonal matrix M . However, for problems
with singularities (or ill-conditioning) which often appear in closed-loop chains, the
form (3.10) is not recommended, as we indicated already in [5]. In x4, a numerical
example shows that its performance is worse than that of the form (3.6). Also, it
is indicated in [5] that for the form (3.10) the best choice of ffl is generally ch (for
the reason of stability of the explicit difference schemes), where h is the step size
of the chosen difference scheme and c is a constant dependent on the eigenvalues of
the matrix
G
G
. Hence, the best choice of ffl for this form changes in
time because G is in general dependent on the time t. We will see an example in x4
where ffl has to be chosen to be quite large (hence more SRM iterations are needed)
to make the explicit discretization stable for the whole time interval that we compute
on. Therefore we still recommend the form (3.6) since it performs better as shown
in x4 and since the best choice of ffl for this form depends only on the eigenvalues of
the matrix DM (which is constant if M is a constant matrix, as is the case for
many chain problems, including the examples in x4). That is, the best choice of ffl is
often independent of t for the form (3.6). In comparison with the usual stabilization
methods, an additional advantage of the from (3.6) is that we never invert a singular
matrix even if
is singular, since G has full rank as we assumed.
To summarize, our algorithm consists of applying an explicit time discretization
scheme and the O(n) forward dynamics algorithm for the underlying tree-structured
system to solve for
where
Next we discuss the convergence of the SRM and prove that the iteration number
of the method is independent of the body count n. Hence, per time step our method
is an O(n) algorithm even for chains with many closed loops.
3.2 Convergence of the stabilization-SRM form
Now we want to analyze the convergence of the iterative procedure (3.12). The
method of analysis is based on that in [5]. The differences here are that we only
apply the SRM to part of the constraints (i.e. the loop-closing constraints (3.1)) and
that the problem we consider may have a large number of unknowns (linear in n).
As indicated in [4, 5], the system (3.12) is clearly singularly perturbed for
ffl - 1. Starting from an arbitrary - 0 (t) we may therefore expect an initial layer. For
simplicity of the convergence analysis we assume (as we did in [4, 5])
Assumption 3.1
For initial value problems it is possible to obtain the exact - j (0);
in advance [4, 5]. For a general semi-explicit DAE, under this assumption there are
no initial layers for the solutions of (3.12) up to the lth derivatives.
Now let us make assumptions on the boundedness of the solution. For an n-body
chain, it is reasonable to assume that the solution and its derivatives are bounded
linearly in n. In other cases the solution may be bounded independently of n. We
make corresponding assumptions on p s , v s and - s since our regularization is a nearby
problem to the original system and there will be no initial layers involved in the
solution under the condition (3.13) (letting l - 0).
Assumption 3.2 The solutions p s , v s and - s of (3.12) satisfy
where - may depend on n. From (3.12b) and (3.12c) we can solve for - s in terms of
i.e. we can write
We further assume that the Jacobian matrix of with respect to p s , v s and - s satisfies
If - s also satisfies (3.14) then (3.14) and (3.16) imply that
represents the maximum norm and K is a generic positive constant
independent of n.
In Remark 3.1 below we give convergence bounds for the case where we only assume
We have noted that the mass matrix M is positive definite and block-diagonal
and that the closing-loop constraint matrix D is block-sparse. From the examples in
x4, we can see that the Jacobian matrix G for the constraints (2.1) is block-sparse
too. We thus assume:
Assumption 3.3 The matrices M , D and G and their finite multiplications (i.e. the
number of multiplications is independent of n) are all block-sparse. More precisely, if
we multiply such a matrix and a vector with norm O(n k ), then the product norm is
still O(n k ), where k is any positive number.
In fact the closing-loop constraints matrix D is not only block-sparse but also block-
row-orthogonal. More concretely, there is at most one nonzero element in each column
(see, e.g. (4.6) in x4). Combining this with the properties of M we thus obtain that
DM positive definite and block-diagonal. Hence the eigenvalues of the matrix
DM are the union of the eigenvalues of diagonal blocks of the matrix. Thus
all these eigenvalues are positive and independent of n since the size and elements of
diagonal blocks of DM are independent of n. We write these as a lemma:
Lemma 3.1 If the mass matrix M and the closing-loop constraint matrix D have
the above properties then the eigenvalues of the matrix DM \Gamma1 D T are all positive and
independent of n.
We finally assume the following perturbation bound (cf. the perturbation index
[11] and corresponding discussion in [5]).
Assumption 3.4 Let -
. Then there exists a
constant K independent of n such that:
where z is the solution of (3.8) and - z satisfies the perturbed (3.8):
Here, although inverting
needs O(n) operations (see the algorithm in
x2), these operations will not involve the perturbations '(t) and ffi(t). So we believe
the bound K in (3.18) to be independent of n.
Now we can consider the convergence of the stabilization-SRM form (3.12). We
choose - 0 such that - 0 and -
- 0 are of O(-). Let u . Then we
have drift equations:
or
with the initial conditions u s Applying (3.21) for
using the Assumptions and Lemma 3.1 we obtain u
(3.21a) further yields
Comparing (3.19) with the
stabilization-SRM form, we need to bound
and their derivatives appearing in (3.18). We already have that
From (3.21a) we obtain -
Using the condition - 0
DM
can be obtained from (3.12b) and (3.12d). So, from (3.21b), -
Differentiating (3.21b) we have
where we use - O(-) obtained from (3.12d) and w
obtained from (3.17). Hence -
Differentiating (3.21a) we next have
This implies
We thus use (3.18) and obtain the desired conclusion for
Subtracting (3.12a), (3.12b) from (3.8a), (3.8b), respectively, and using (3.22) and
(3.16) we have
Also, by differentiating (3.12d), we conclude that -
1 is of O(-).
For first have as for the
and hence also
The estimate (3.23) also holds for 2. This yields that the right hand side of
(3.21b) is O(-ffl 2 ), so
Now we want to get a better estimate for -
Differentiating (3.21b) we have
and -
obtained as for the
We want to show that -
O(-ffl). For this purpose we must estimate -
Using
the condition -
-(0), and the fact that
are also exact at
0, we can obtain
Hence -
Differentiating (3.24) again we now obtain precisely as
when estimating -
Here we need to use -
which can be obtained from a differentiation
of (3.12), applications of (3.14), (3.16) and (3.17) as well as the estimates for -
obtained above. Noting
we thus have
Our previous estimates allow the conclusion that
hence we can conclude that -
Repeating this procedure, we can finally obtain:
Theorem 3.1 Let all the assumptions described at the beginning of this section hold.
Then, for the solution of the stabilization-SRM form (3.12) with - 0 and -
by O(-), we have the following error estimates:
is the solution of the multibody
problem (3.3).
Remark 3.1 If the bounds in (3.14) are only assumed for p s , v s and - s , then we may
generally obtain -
and the j-th derivatives would be O(- j+1 ). This
finally yields the following error estimates:
This result is obviously weaker than (3.27), corresponding to weaker boundedness assumptions
3.3 Computational complexity of the stabilization-SRM form
Consider our algorithm for (3.12). At each regularization iteration we can use the
O(n) algorithm described in x2. So, to consider the computational complexity of our
algorithm we need only study the number of iterations s required.
It is simple to show that s is independent of n. Given the worst error estimate
(3.28), we must choose ffl small enough so that
Then each SRM iteration reduces the error, viz. the difference between the solution
of (3.12) and the solution of (3.3), by at least a factor ff, and so a fixed number of
iterations s, independent of n, is needed to reduce this error below any given tolerance.
Note, though, that - may grow with n, depending on the problem being simulated.
Hence the range of choices for ffl (ffl - ff=-) is restricted depending on n. Since the
time discretization scheme is explicit, the step size h must be restricted by absolute
stability requirements to satisfy
for an appropriate constant fl of moderate size. The number of time steps required
may therefore depend on n, too (cf. [21]) 2 . Still, the number of iterations s required
to obtain a given accuracy obviously remains independent of n, hence operation count
per time step remains O(n).
For the worst error estimate (3.28) suppose that
where fi is a given positive constant, simplicity. Now
let us apply an r-th order explicit difference scheme to the stabilization-SRM form.
At the s-th iteration, the worst combined error for our algorithm is O(- s ffl s
(see Remark 3.1). Trying to roughly equate the two sources of error, we set
hence
Using (3.29) we then obtain a rough upper bound for the number of iterations s:
A requirement such as likely to arise also from accuracy, not only stability
considerations.
Remark 3.2 For the error estimate (3.27) the condition (3.29) can be weakened
significantly to read
where s 0 is an arbitrary integer independent of n. In this case, (3.30) is replaced by
4 Numerical experiments
We now present a couple of examples to demonstrate the algorithm that was proposed
and analyzed in previous sections. At first, we build up the system for a special kind
of n-body chains (see Figure 2.1) which include our two examples. We use the method
described in [21].
Consider a chain consisting of n bodies. Each body is modeled as a line segment
of length l j and mass m j , with uniform mass distribution. We choose Cartesian
coordinates of the joints, x j , and the vectors connecting the joints along the links,
in order to describe the position p of the chain:
n. For 3-D chains, p j has six components and for 2-D chains
four components. The labeling of the chain joints has been shown in Figure 2.1 and
explained in x2. Hence the holonomic constraints include length conditions
and connection conditions
father of j. Therefore
Due to the uniform mass distribution, the center of mass of each body coincides with
its geometric center, x and the moment of inertia about the center of mass
is . The total kinetic energy is given by
I
3 I
where
I
Figure
4.1: A square chain and its labeling
and we let the joint "0" is fixed. The potential energy due to gravity is given by
ge T
Here e v is the unit vector along the vertical axis. For the 2-D case e
for the 3-D case e . Note that e T
is the height of the center
of mass of body j above zero level, and -
is the gravity constant. This
gives the force vector
f =B @
Hence, we know the mass matrix M , the force term f and the constraints g and
can write down the system (2.5) for this kind of tree-structured chain problems. For
chains with loops, we only need to impose some additional geometric constraints onto
their corresponding tree-structured formulation (see the discussion at the beginning
of x3).
Next we consider two specific examples.
Example 4.1 Consider a 2D square chain with unit length and mass for each link
(i.e. 1). The labeling of a corresponding tree structure is shown in Figure
4.1. Under this labeling,
Hence we can write down the Jacobian matrix G(p) for the tree-structure constraints
g(p). For example, when
I \GammaI
and
Here I is a 2 \Theta 2 unit matrix. The extra loop-closing constraints d(p) are:
where m is equal to the number of squares. We thus can form its Jacobian matrix D.
For example, when
The Jacobian matrix (G(p) T ; D T ) T of all constraints has rank deficiency when all four
links of a square are on a line. We let the chain fall freely from the position shown
in
Figure
4.1 where the joint "0" is fixed. As we have mentioned before, for each
SRM iteration our algorithm (3.12) solves a tree-structured problem using the O(n)
algorithm of x2. Here, we only demonstrate that the number of SRM iterations is
independent of n. This means that the computational complexity of our algorithm
is O(n) per time step. We choose step size regularization parameter
apply an explicit second-order Runge-Kutta method to the regularized
problem at each iteration. We do the computations for because to clearly see a
relation between the number of iterations and n we want to avoid the singularity which
happens around and after whose error situation is very complicated. We
count the number of SRM iterations until the errors in the constraints do not exhibit
obvious improvement. Table 4.1 lists iteration counts and constraint errors for various
n.
# of
iterations
# of
iterations 3 3 3 3 3 3 3
Table
4.1: Relation between the number of iterations and n for the square chain
Theoretically we expect that two SRM iterations be sufficient since the difference
scheme is of second order and O(h). From the table we see that at the second
and the third successive iterations the maximum drifts are almost the same, especially
when n - 1=h. Additional experiments with
the need of only two iterations for the algorithm (3.12) to achieve the second order
discrete accuracy.
Next we compare the performance of the algorithms (3.12) (AlgI) and (3.10) (Al-
gII). We set computational results show that the first singularity occurs
after We use the simple forward Euler scheme and take
for both algorithms. We can still take for the algorithm (3.12). But for the
algorithm (3.10) we cannot take :5h. The algorithm is unstable immediately after
the first time step when we take :005. The algorithm blows up around
when we take It becomes stable when we take This agrees with
our expectation about the algorithm in x3.1. That is, for the sake of stability of the
difference scheme the smallest ffl we can choose depends on t and it is often larger
than that needed for the algorithm (3.12). Hence, more iterations are needed for the
simpler algorithm (3.10).
algorithm # of iterations ffl t=.4 t=.5 t=.6 t=1.0
AlgI 1 .005 7.763e-3 1.289e-2 1.744e-2 2.959e-2
AlgI 2 .05 7.763e-3 1.289e-2 1.744e-2 3.107e-2
AlgII 2 .05 1.008e-1 1.640e-1 2.270e-1 4.794e-1
AlgII 4 .05 5.869e-2 8.514e-2 1.129e-1 1.155e-1
Table
4.2: Maximum drifts of two algorithms
We list in Table 4.2 the maximum drifts produced by these two algorithms at
various times. From the table we see that the overall performance of AlgI is better
than that of AlgII. Also it seems that for this example the error improvement of AlgII
by iteration is much slower. The motion of this square chain with
to 2:2 is described in Figure 4.2.
Example 4.2 In this example we consider the motion of a square net (veil) started
by a breeze. The corresponding tree structure of the net and its labeling are shown in
Figure
4.3, where are fixed joints. Under this labeling,
we can determine the father of any given joint and then the Jacobian matrix G(p) for
the constraints The extra loop-closing constraints are:
ae x
For this square net the number of bodies l. Again we set the step size
and the regularization parameter 0:5h, and apply the O(n) algorithm at
each time step of the second-order Runge-Kutta method for the regularized problem at
each SRM iteration (3.12). After a breeze we assume that the net moves to the right
with the largest angle -to the verticle axis. We consider the motion of a net whose
initial position is located at where it has the angle -with respect to the vertical line.
We list the number of iterations for various n at in Table 4.3.
From the table we do not see drift error improvement after the second iteration.
only two iterations are again needed for the algorithm (3.12), independent of n.
The motion of this square chain with
described in Figure 4.4.
Figure
4.2: Motion of the square chain. Time increases in increments \Deltat = :2 from
left to right, top to bottom.
2l
4l
2l +3
2l +4
2(l +l)-3
2(l +l)-2
2(l +l)
2l +l
l =(w-1)l
l =(w-2)l
Figure
4.3: Tree structure and its labeling of the square net
# of
iterations
# of
iterations 3 3 3 3 3 3 3
Table
4.3: Relation between the number of iterations and n for the square net
--R
Recent Advances in the Numerical Integration of Multibody Systems
Stabilization of invariants of discretized differential systems
Stabilization of DAEs and invariant manifolds
Sequential regularization methods for higher index DAEs with constraint singularities: Linear index-2 case
Sequential regularization methods for nonlinear higher index DAEs
formulation stiffness in robot simulation
Stabilization of constraints and integrals of motion in dynamical systems
Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations
Robot Dynamics Algorithms
Solving Ordinary Differential Equations II
Regularization of differential
Unified formulaion of dynamics for serial rigid multibody systems
On a penalty function method for the simulation of mechanical systems subject to constraints
Stabilization of computational procedures for constrained dynamical systems
Numerical solution of differential-algebraic equations with ill-conditioned constraints
Fast recursive SQP methods for large-scale optimal control problems
--TR | robot simulation;stabilization;regularization;constraint singularities;differential-algebraic equations;multibody systems;higher index |
344992 | The Procrustes Problem for Orthogonal Stiefel Matrices. | In this paper we consider the Procrustes problem on the manifold of orthogonal Stiefel matrices. Given matrices ${\cal A}\in {\Bbb R}^{m\times k},$ ${\cal B}\in {\Bbb R}^{m\times p},$ $m\ge p \ge k,$ we seek the minimum of $\|{\cal A}-{\cal B}Q\|^2$ for all matrices $Q\in {\Bbb R}^{p\times k},$ $Q^TQ=I_{k\times k}$. We introduce a class of relaxation methods for generating sequences of approximations to a minimizer and offer a geometric interpretation of these methods. Results of numerical experiments illustrating the convergence of the methods are given. | Introduction
We begin by defining the set OSt(p; k) of orthogonal Stiefel matrices:
which is a compact submanifold of dimension
of the manifold
O(p) of all p \Theta p orthogonal matrices which has dimension 1
(trace A T
2 denote the standard Frobenius norm in R m\Thetak .
The Procrustes problem for orthogonal Stiefel matrices is to minimize
for all Q 2 OSt(p; k).
Problem (1.2) can be simplified by performing the singular value decomposition
of the matrix B 2 R m\Thetap . Let
where ~
Due to the fact that the last rows of
are zeros we will simplify (1.3) by introducing new notations. We denote
assuming from now on that all oe
A 2 R p\Thetak to be a matrix composed of the first p rows of ~
A. Consequently the
Procrustes minimization on the set of orthogonal Stiefel matrices is :
For given A 2 R p\Thetak and diagonal \Sigma 2 R p\Thetap minimize
for all Q 2 OSt(p; k).
A. W. BOJANCZYK AND A. LUTOBORSKI
The original formulations of the Procrustes problem can be found in [1], [2]. We
may write (1.4) explicitly as
The Procrustes problem has been solved analytically in the orthogonal case when
see [8]. In this case Q 2 O(p) and we have
Provided that the singular value decomposition of \SigmaA is the minimizer
in (1.6) is then
The functional P[A; \Sigma] in (1.5) is a sum of two functionals in Q : the bilinear
functional trace(Q T \Sigma 2 Q) and the linear functional \Gamma2trace(Q T \SigmaA). It is well
known how to minimize each of the functionals separately.
The minimum value of the bilinear functional is equal to the sum of squares of
the k smallest diagonal entries of \Sigma. This result is due to Ky Fan, [9]. The linear
functional is minimized when trace(Q T \SigmaA) is maximized. The maximum of this
trace is given by the sum of the singular values of the matrix \Sigma T A. This upper
bound on the trace functional has been established by J.Von Neumann in [13], see
also [8].
Separate minimization of the quadratic and the linear part are well understood
since we know both the analytical solutions and robust numerical methods. The
analytical solution of the orthogonal Procrustes problem for Stiefel matrices is not
known to the best of our knowledge and constitutes a major challenge.
It will be useful to interpret the minimization (1.4) geometrically. To do that we
define an eccentric Stiefel manifold OSt[\Sigma](p;
The eccentric Stiefel manifold OSt[\Sigma](p; k) is an image of the orthogonal Stiefel
manifold OSt(p; under the linear mapping Q \Gamma! \SigmaQ. The image of a sphere
kg of radius
k in R p\Thetak under this mapping is an ellipsoid in
R p\Thetak of which OSt[\Sigma](p; k) is a subset. The eccentric Stiefel manifold is a compact
set contained in a larger ball in R p\Thetak centered at 0 and of radius
We note that OSt[\Sigma](p; 1) is a standard ellipsoid in R
and OSt[I](p;
Therefore if
min
then a point \SigmaQ is the projection of A onto the eccentric Stiefel manifold OSt[\Sigma](p; k).
Due to the compactness of the manifold a projection \SigmaQ exists. The big difficulty
which we face in the task of computing the minimizer Q is the fact that the
manifold OSt[\Sigma](p; k) is not a convex set.
PROCRUSTES PROBLEM FOR STIEFEL MATRICES 3
2. Notations
Elementary plane rotation by an angle OE is represented by
sin OE cos OE
Elementary plane reflection about the line with slope tan OE is
' cos OE sin OE
sin
For we introduce the following submatrices of
consists of the m-th row of Q
consists of the m and n-th rows of Q
consists of the rows complementary to Q [m;n]
consists of the entries on the intersections of the
m-th and n-th rows and columns of Q:
A plane rotation by an angle OE in the (k; l)-plane in R p is represented by a matrix
G k;l (OE) such that
G k;l (OE) [k;l]
A plane reflection R k;l (OE) in the (k; l)-plane is defined similarily by means of R(OE).
J k;l (p) is the set of all plane rotations and reflections in the (k; l)-plane. J (p) is
the set of plane rotations and reflections in all planes. Clearly
J k;l (p) ae J (p) ae O(p)
3. Relaxation methods for the Procrustes problem
The Stiefel manifold OSt(p; k) is the admissible set for the minimizer of the
functional P in (1.4). This manifold however is not a vector space which poses
severe restrictions on how the succesive approximations can be obtained from the
previous ones. Additive corrections are not admissible, but the Stiefel manifold
is closed with respect to left multiplication by an orthogonal matrix R 2 O(p).
Thus RQ, where Q 2 OSt(p; k), is an admissible approximation. Consequently, we
restrict our considerations to a class of minimization methods which construct the
approximations -
Q to the minimizer Q by the rule
where Q and -
respectively the current and the next approximations to the
minimizer.
In what follows we will consider only relaxation minimization methods which
seek for the minimizer of the functional P, according to (3.1) with
and M is the dimension of the manifold OSt(p; k). Each R i 2 O(p),
depends on a single parameter whose value results from a scalar
minimization problem. We will refer to the left multiplication by R in (3.1) as to
a sweep. Our relaxation method consists of repeated applications of sweeps which
produce a minimizing sequence for the problem (1.4).
4 A. W. BOJANCZYK AND A. LUTOBORSKI
We will choose matrices R i to be orthogonally similar to a plane rotation or
reflection. Different choices of similarities will lead to different relaxation methods.
We set
and define
may depend on the current approximation to the
. It is the choice of P i that fully determines the relaxation method
(3.1),(3.3),(3.4). The selection of the parameter ff in (3.4) will result from the scalar
minimization
ff
The matrix R i can be viewed as a plane rotation or reflection in a plane spanned
by a pair of columns of the matrix P i . The indices (r; s) of this pair of columns
are selected according to an ordering N of a set of pairs D. The ordering
is an bijection, where D oe
This inclusion guarantees that D contains at least M distinct pairs necessary to
construct an arbitrary Q 2 OSt(p; k) as a product of matrices R i .
It is clear that relaxation methods satisfying (3.5) will always produce a nonincreasing
sequence of the values P[A; \Sigma](Q i ).
If I p\Thetap in (3.4), then R and the sweep (3.1) has the following
particularly simple form
The relaxation method defined by (3.6) will be refered to as a left-sided relaxation
method or LSRM.
If
i is the orthogonal complement of Q i , then
and hence
I k\Thetak
Thus by induction the sweep (3.1) has the form
I k\Thetak
The relaxation method defined by (3.7) will be refered to as a right-sided relaxation
method or RSRM.
A specific type of a right-sided relaxation method was investigated by H. Park
in [11]. The method in [11] is based on the concepts discussed earlier by Ten Berge
and Knol [2] where the Procrustes problem for orthogonal Stiefel matrices, called
there unbalanced problem, is solved by means of iteratively solving a sequence of orthogonal
Procrustes problems, called balanced problems. The relaxation approach
new left-sided relaxation method which is the topic of our paper.
PROCRUSTES PROBLEM FOR STIEFEL MATRICES 5
Another interesting aspect of this approach is a clear geometric interpretation of
the relaxation step. For the study of other minimization methods on submanifolds
of spaces of matrices see [10] and [12] .
4. Planar Procrustes problem
We will now present the left-sided relaxation method. Without loss of generality
let us assume that the planes (r; s) in which transformations operate are chosen in
the row cyclic order, in the way analogous to that used in the cyclic Jacobi method
for the SVD computation, see [5].
In this case N : D
is given by N
Q in (3.6) has the following form
r
Y
where J r;s 2 J r;s (p). Let Q (r;s) be the current approximation in the sweep.
The next approximation to the minimizer is Q . The selection of the
parameter ff results from the scalar minimization
Our main goal now is to show how to find ff in (4.2).
Consider the functional J \Gamma! (4.2), where for simplicity of
notation we omitted all indices. Without loss of generality we assume that N
diag
where G(ff) is a plane rotation (the case of reflection is similar and can be treated
in a completely analogous way). The minimization in (4.2) is precisely the minimization
of
' a 11 a 12 \Delta \Delta \Delta a 1k
a 21 a 22 \Delta \Delta \Delta a 2k
G(ff)
Let UQ \GammaV T
Q be the SVD decomposition of Q [1;2] such that
. Note that the last columns of the matrix B in (4.4)
are always approximated by zero columns, and thus the minimization of f(ff) is
equivalent to minimization restricted to the first two columns, that is
'' c \Gammas
6 A. W. BOJANCZYK AND A. LUTOBORSKI
is the angle of the plane rotation ( or
reflection ) UQ .
We may write (4.5) explicitly as
We now denote
By completing the squares we may represent
F (OE) in the following form:
sin OE
for . Thus the minimization of the functional (4.9) is equivalent
to the following problem, For given C 2 R 2\Theta1 and diagonal Z 2 R 2\Theta2 minimize
for all q 2 OSt(2; 1),
The minimization problem of the type (4.10) will be called a planar Procrustes
problem. Such problem has to be solved on each step of our relaxation method and
is geometrically equivalent to projecting C onto an ellipse. In the next section we
consider two different iterative methods for finding the projection.
5. Projection on an ellipse
The geometrical formulation of the planar Procrustes problem (4.10) is very
simple. Given a point C and an ellipse
ae
oe
in R 2 we want to find a point which is a
projection of C onto E .
This can be achieved in a variety of ways. We describe the classical projection of
a point onto an ellipse due to Appolonius leading to a scalar fourth order algebraic
equation and a method of iterated reflections based on the reflection property of
the ellipse.
PROCRUSTES PROBLEM FOR STIEFEL MATRICES 7
5.1. The hyperbola of Appolonius. Recall the construction of a normal to an
ellipse from a point, see [14], due to Appolonius. With the given ellipse E in (5.1)
and with the point C we associate the hyperbola H in the following way. The
equation of the normal to the ellipse at
z
z
Clearly CS is the normal to E iff the point
Equivalently CS is the normal to E iff the point S satisfies
z
for . (5.3) is a quadratic, in x 1 and x 2 , equation of the
hyperbola H which can be written as
is the center of H with coordinates
F
Figure
1. Hyperbola of Appolonius
The hyperbola H has asymptotes parallel to the axes of the ellipse and passes
through the origin and the point C. H degenerates to a line when the point C is
on one of the axes of the ellipse. This nongeneric, trivial case will not be analyzed
here. H can also be characterized as a locus of the centers of the conics in the
pencil generated by the ellipse and an arbitrary circle centered at C .
To find the coordinates of the projection point S we have to intersect the hyperbola
of Appolonius with the ellipse that is to solve a system of two quadratic
equations (5.1) and (5.4). Using (5.4) to eliminate x 2 from (5.1) we obtain a fourth
8 A. W. BOJANCZYK AND A. LUTOBORSKI
order polynomial equation in x 1
For any specific numerical values of the coefficients this equation can be easily
solved symbolically. A simpler, purely numerical alternative is to solve the system
using Newton's method.
Another alternative is to reduce the system to a scalar equation. Assume that
is in the first quadrant and that
S be the projection of
setting sin OE) in (5.3) and next substituting
leads to the equation t, where
It is easy to see that
and since then the function g(t) is convex and
has one positive root. It can also be seen that for
we have g(t method starting from the initial approximation
will generate a decreasing, convergent sequence of approximations to the root of
5.2. Iterated reflections. Assume, as before, that C is in the first quadrant and
that
Fig.2. Every other case reduces to this one through reflections.
Let
PROCRUSTES PROBLEM FOR STIEFEL MATRICES 9
F
R
Figure
2. Iterated reflections
The reflection property of E says that (for C both inside and outside E) S is
characterized by :
Let , as on Figure 2, be given by
sin
where for C outside
tan OE
cos
(In the case C inside E , the coordinates of the points L and R can be computed
similarly).
Analogously as in the bisection method, we compute
(an intermediate point between L and R) from L and
R by setting OE
(where h \Delta; \Delta i denotes the inner product), then OE R - OE - OE M and we set L
so that OE hold we set L
We thus construct a sequence fLng ae E such that lim n!1
5.3. Some remarks on the general and planar Procrustes problems. The
planar Procrustes problem has several features which the general problem (1.4) of
projection onto the eccentric Stiefel manifold does not posses.
ffl E has the reflection properties and Appolonius normal.
The reflection properties of the ellipse do not extend to the eccentric Stiefel
manifold and in particular not even to ellipsoids in R p . The construction of an
Appolonius normal to the ellipse based on the orthogonality of the ellipse and an
A. W. BOJANCZYK AND A. LUTOBORSKI
associated hyperbola which results in a scalar equation (5.6) is also particular to
the planar problem. As a result in the case p ? k ? 1 our relaxation step, which
amounts to solving a planar Procrustes problem, cannot be directly generalized to
a higher dimensional problem.
ffl A point not belonging to E has either a unique projection onto E or a finite
number of projections.
Hence if
conv(E) then there exits a unique projection S of C onto E characterized
by
for all F 2 conv(E). A point C inside the ellipse E has a non-unique projection
C is on the major axis between points B and \GammaB. Various locations
of C and its projection(s) S are shown on Fig. 3. along with the upper part of
the evolute of the ellipse which is the locus of the centers of curvature of E . The
number of normals 2,3 or 4 that can be drawn from C depends on the position of
C relative to the evolute.
F
Figure
3. Points in the first quadrant and their projections on the ellipse.
The non-uniqueness of S for C on the segment [\GammaB; B] is reflected in non-
differentiability of the function shown on Fig. 4. In general
solutions to the Procrustes problem may form a submanifold of the Stiefel manifold.
Finally we observe that E cuts the plane into two components. However if
PROCRUSTES PROBLEM FOR STIEFEL MATRICES 11
then C is on a sphere of radius
2 in R 3\Theta2 containing the Stiefel manifold OSt[I](3; 2)
but the point kC for any k 2 Rcannot be connected to 0 with a segment intersecting
OSt(3; 2).
Figure
4. Graph of
Both observations concerning the analytical problem are reflected in the computations
and have computational implications.
6. Geometric Interpretation of Left and Right Relaxation Methods
Since the notion of the standard ellipsoid OSt[\Sigma](p; 1) in R p is very intuitive we
will now interpret the minimization problem (1.11) treating matrices in R p\Thetak as k-tuples
of vecors in R p . Let be a given k-tuple of vectors in R p .
be the current approximation to the minimizer. Clearly
the points \Sigmaq i all belong to the ellipsoid OSt[\Sigma](p; 1). Thus the minimization of
P[A; \Sigma](Q) can be interpreted as finding points \Sigmaq
i on the ellipsoid, where q
are
orthonormal vectors, that best match, as measured by P[A; \Sigma](Q), the given vectors
a i in R p .
The relaxation method described in Section 3 can be interpreted as follows. Pick
an orthonormal basis in R p . In the next sweep rotate the current set of vectors q i
as a frame, in planes spanned by all pairs of the vectors from the current basis.
In the left-sided relaxation method the basis is the cannonical basis and is the
same for all sweeps. All relaxation steps are exactly the same, and all amount to
solving a planar Procrustes problem.
In the right-sided relaxation method the basis consists of two subsets and changes
from sweep to sweep. The first subset of the basis consists of the columns of the
current approximation Q and the second subset consists of the columns of the
orthogonal complement Q ? of Q.
Working only with the columns of Q is equivalent to the so-called balanced
Procrustes problem studied by Park in [11], which can be solved by means of an
SVD computation. The relaxation step in [11] for the balanced problem consists of
computing the SVD of a 2 × 2 matrix. In our relaxation
setting, the relaxation step in the right-sided relaxation method requires solving
the scalar minimization problem (3.5),
which leads to a linear equation in the tangent of α and is equivalent to the 2 × 2 SVD
computation in [11]. Each of these 2 × 2 steps is a rotation of the vectors Σq_r and
Σq_s in the plane spanned by q_r and q_s so that the rotated vectors on the ellipsoid best
approximate the two given vectors a_r and a_s.
However, as the columns of Q do not span the whole space R^p, it might happen
that some a_r does not lie in span{q_1, ..., q_k}, and hence it might not be possible
to generate a sequence of approximations that will converge to Q. In order to
overcome this problem the matrix Q is extended by its orthogonal complement
Q^⊥. The subproblems involving vectors from both subsets are referred to
in [11] as unbalanced subproblems. These scalar minimizations have the form
min over α of ||a_r − Σ(q_r cos α + q_s sin α)||.
That is, the unbalanced subproblem is to find a vector on the ellipsoid in the plane
spanned by q_r and q_s closest to the given vector a_r. As the intersection of this plane
and the ellipsoid is an ellipse, the unbalanced subproblem can be expressed as a
planar Procrustes problem (4.10) and any of the algorithms discussed in Section 5
can be used to solve this unbalanced problem.
Other choices of bases may be possible but the choices leading to the left and
the right sided-relaxation methods seem to be the most natural.
7. Numerical experiments
In this section we present numerical experiments illustrating the behavior of
the left and the right relaxation methods discussed in Section 3. We will start by
summarizing the left and the right relaxation methods given below in pseudocodes.
Given A ∈ R^{p×k}, the algorithms below
construct sequences of Stiefel matrices approximating the minimizer of (1.4).
Algorithm LSRM:
1. Initialization: set Maxstep, threshold; n := 0
2. Iterate sweeps:
while (r > threshold and n < Maxstep)
for each coordinate plane (r, s): solve the planar Procrustes problem and update Q
Algorithm RSRM:
1. Initialization: set Maxstep, threshold; n := 0
2. Iterate sweeps:
while (r > threshold and n < Maxstep)
for each pair of columns of Q: solve min over α (the balanced 2 × 2 subproblem)
for each column of Q paired with a column of Q^⊥: solve the planar Procrustes problem
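To make the sweep structure concrete, here is a minimal Python sketch of one LSRM sweep. It assumes the functional P[A, Σ](Q) is measured by the Frobenius misfit ||A − ΣQ||_F (an assumption: the paper only states that P splits into a linear and a bilinear term), and a naive angle scan stands in for the closed-form planar Procrustes solvers of Section 5.

import numpy as np

def objective(A, Sigma, Q):
    # Misfit used here as a stand-in for P[A, Sigma](Q).
    return np.linalg.norm(A - Sigma @ Q, 'fro')

def plane_rotation(p, r, s, phi):
    # Rotation by phi in the (r, s) coordinate plane of R^p.
    G = np.eye(p)
    c, sn = np.cos(phi), np.sin(phi)
    G[r, r] = G[s, s] = c
    G[r, s], G[s, r] = sn, -sn
    return G

def best_rotation_angle(A, Sigma, Q, r, s, n_grid=360):
    # Naive stand-in for a planar Procrustes solver: scan angles, keep the best.
    p = Q.shape[0]
    best_phi, best_val = 0.0, objective(A, Sigma, Q)
    for phi in np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False):
        val = objective(A, Sigma, plane_rotation(p, r, s, phi) @ Q)
        if val < best_val:
            best_phi, best_val = phi, val
    return best_phi

def lsrm_sweep(A, Sigma, Q):
    # One LSRM sweep: relax over all coordinate planes (r, s), r < s,
    # applying the best left plane rotation to Q each time (Q stays Stiefel).
    p = Q.shape[0]
    for r in range(p):
        for s in range(r + 1, p):
            phi = best_rotation_angle(A, Sigma, Q, r, s)
            Q = plane_rotation(p, r, s, phi) @ Q
    return Q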
We measure the cost of the two methods by the number of sweeps performed by
each of the two methods.
A sweep in the LSRM method consists of p(p+1)/2 planar Procrustes problems.
Each planar Procrustes problem requires computation of the SVD of a 2 × k matrix.
This can be achieved by first computing the QR decomposition followed by a 2 × 2
SVD problem. After the SVD is calculated, a projection on an ellipse has to be
determined. The cost of a sweep is approximately O(kp^2) floating point operations.
A sweep in the RSRM method consists of k(k − 1)/2 computations of p × 2 SVD
problems. In addition, there are k(p − k) planar Procrustes problems, each requiring
computation of the SVD of a p × 2 matrix followed by computation of a projection
on an ellipse. Thus the cost of a sweep is again approximately O(kp^2) floating point
operations.
Surely, the precise cost of a sweep will depend on the number of iterations needed
for obtaining satisfactory projections on the resulting ellipses. For each projection,
this will depend on the location of the point being projected as well as the shape
of the ellipse. Computation of the projection will be most costly when the ellipse
is flat.
As can be seen, sweeps in the two methods may have different costs. However,
the number of sweeps performed by each of the methods will give some basis for
comparing the convergence behavior of the two methods.
We begin by illustrating the behavior of the LSRM method for finding Q in
the Procrustes problem with ...
The initial approximation is I_{4×2}. Some intermediate values of Q are listed
in Table 1.
Table 1. Matrices Q in a minimizing sequence generated by LSRM.
We will now present comparative numerical results for the LSRM and RSRM
methods.
Recall that the functional P is a sum of a linear and a bilinear term. We will
consider classes of examples where the functional can be approximated by its linear
or its bilinear term.
In the first class of examples the linear term dominates the bilinear term, in
other words ||A|| >> ||Σ||. We deal here with a perturbed linear functional.
The minimum of the functional P can be approximated by the sum of singular
values of Σ^T A.
The second class of examples consists of cases when the quadratic term dominates
the linear term, that is when ||A|| << ||Σ||. We deal here with a perturbed bilinear
functional. Then the minimum value of the functional P can be approximated by
the sum of the k smallest singular values of Σ.
The third class of examples consists of cases when the functional is quadratic,
that is when A ≈ ΣQ for some Q ∈ OSt(p, k). The minimum of the functional is
then close to zero.
In each class of examples we pick two different matrices Σ: one corresponding
to the ellipsoid being almost a sphere, that is when Σ ≈ I, the other corresponding
to the ellipsoid being very flat in one or more planes, that is when the ratio of the
largest to the smallest semi-axis is large.
The algorithms were written in MATLAB 4.2 and run on an HP9000 workstation
with the machine relative precision.
As the initial approximation we took I_{p×k} for the LSRM, and
I_{p×p} for the RSRM. The planar Procrustes solver used was based on
the hyperbola of Apollonius (the iterated reflections solver was giving numerically
equivalent results). Some representative results are shown in Tables 2-6.
Table 2.
Table 3.
Table 2 illustrates the behavior of the two methods when the ellipsoid is almost
a sphere and when there exists Q such that ΣQ ≈ A. That is, the bilinear and
the linear terms are of comparable size. The experiments suggest that the LSRM
requires fewer sweeps to obtain a satisfactory approximation to the minimizer.
Table 3 illustrates the behavior of the two methods when half
of the ellipsoid's axes have length approximately 1.0 and the other half approximately 0.01.
In addition, there exists Q such that ΣQ ≈ A. In this case the convergence of the
RSRM is particularly slow. We observed that, at least initially, the RSRM fails to
locate the minimizer in OSt(4, 2), being unable to establish the proper signs of the
entries of the matrix Q. The LSRM on the other hand approximates the minimizer
correctly.
Table 4.
Table 5.
Table 4 illustrates the behavior of the two methods when the ellipsoid is almost a
sphere, but now A is chosen so that ||A|| << ||Σ||. That is, the bilinear term dominates
the linear term. In this case the minimum of the functional can be estimated by
the minimum value of the bilinear term. In Table 4, esterror denotes the difference
between the minimum value of the bilinear term and the computed value of the
functional, and ||Q − Q̄|| is reported, where Q and Q̄ are the
last and penultimate approximations to the minimizer. The experiments suggest
that the LSRM requires fewer sweeps to obtain a satisfactory approximation to the
minimizer.
Table 5 illustrates the behavior of the two methods when the ellipsoid is almost a
sphere, but now A is chosen so that ||A|| >> ||Σ||. That is, the linear term dominates the
quadratic term. In this case the minimum of the functional can be estimated by
the minimum value of the linear term. In Table 5, esterror denotes the difference
between the minimum value of the linear term and the computed value of the
functional. The experiments suggest that the RSRM requires fewer sweeps to obtain
a satisfactory approximation to the minimizer.
8. Remarks on Constrained Linear Least Squares Problems
The quadratically constrained linear least squares problem
min ||Ax − b||_2 subject to ||x||_2 = 1
arises in many applications [6], [7], [3]. By changing variables this problem can be
transformed into a special Procrustes problem. The Procrustes problem is
min ||a − Σq||_2 over q ∈ R^p with ||q||_2 = 1.
Solving this Procrustes problem is
equivalent to projecting the point a onto the ellipsoid OSt[Σ](p, 1).
It is clear that the vector n = Σ^{-1}q has the direction of the normal
vector to the ellipsoid at the point Σq. Thus if Σq is the projection of a on the
ellipsoid, then the vector a − Σq is parallel to the vector n. Thus there exists a
scalar β so that a − Σq = β Σ^{-1} q, from which, using ||q||_2 = 1, one
can obtain an equation for β:
Σ_{i=1}^{p} σ_i^2 a_i^2 / (σ_i^2 + β)^2 = 1.
The parameter β can be computed by solving this equation. Then the components
of q are given by q_i = σ_i a_i / (σ_i^2 + β).
The equation (8.4) is the so-called secular equation which characterizes the critical
points of the Lagrangian
L(q, λ) = ||a − Σq||_2^2 + λ(||q||_2^2 − 1);
see [8]. Thus the multiplier β in (8.3) is the Lagrange multiplier λ in (8.5).
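For illustration, the secular equation above can be solved numerically by a simple bracketing method. The Python sketch below treats a diagonal Σ and a point a outside the ellipsoid; the function name and the bisection bracket are choices made here, not taken from the paper.

import numpy as np

def project_onto_ellipsoid(a, sigma, tol=1e-12, maxit=200):
    # Solve sum_i (sigma_i * a_i / (sigma_i**2 + beta))**2 = 1 for beta >= 0
    # (a assumed outside the ellipsoid), then recover q_i = sigma_i a_i / (sigma_i**2 + beta).
    a = np.asarray(a, dtype=float)
    sigma = np.asarray(sigma, dtype=float)

    def f(beta):
        return np.sum((sigma * a / (sigma**2 + beta))**2) - 1.0

    lo, hi = 0.0, np.linalg.norm(sigma * a)   # f(lo) >= 0 for an exterior point, f(hi) < 0
    beta = 0.5 * (lo + hi)
    for _ in range(maxit):                    # plain bisection on the secular equation
        beta = 0.5 * (lo + hi)
        if f(beta) > 0.0:
            lo = beta
        else:
            hi = beta
        if hi - lo < tol * (1.0 + hi):
            break
    q = sigma * a / (sigma**2 + beta)
    return q, beta

A safeguarded Newton iteration on the same equation converges faster; bisection is used here only to keep the sketch robust and short.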
--R
Algorithms for the regularization of ill-conditioned least squares problems
On the stationary values of a second-degree polynomial on the unit sphere
The cyclic Jacobi method for computing the principal values of a complex matrix
Least Squares with a Quadratic Constraint
Quadratically constrained least squares and quadratic problems
Matrix Computations Johns Hopkins
On a theorem of Weyl concerning eigenvalues of linear transformations
On the convergence of the Euler-Jacobi method
A parallel algorithm for the unbalanced orthogonal Procrustes problem
Optimization techniques on Riemannian manifolds
Some matrix inequalities and metrization of the matrix space
Enzyklopadie der Elementar Mathematik B.
--TR | stiefel manifolds;projections on ellipsoids;relaxation methods;procrustes problem |
344994 | Inexact Preconditioned Conjugate Gradient Method with Inner-Outer Iteration. | An important variation of preconditioned conjugate gradient algorithms is inexact preconditioner implemented with inner-outer iterations [G. H. Golub and M. L. Overton, Numerical Analysis, Lecture Notes in Math. 912, Springer, Berlin, New York, 1982], where the preconditioner is solved by an inner iteration to a prescribed precision. In this paper, we formulate an inexact preconditioned conjugate gradient algorithm for a symmetric positive definite system and analyze its convergence property. We establish a linear convergence result using a local relation of residual norms. We also analyze the algorithm using a global equation and show that the algorithm may have the superlinear convergence property when the inner iteration is solved to high accuracy. The analysis is in agreement with observed numerical behavior of the algorithm. In particular, it suggests a heuristic choice of the stopping threshold for the inner iteration. Numerical examples are given to show the effectiveness of this choice and to compare the convergence bound. | Introduction
Iterative methods for solving linear systems are usually combined with a preconditioner that can be
easily solved. For some practical problems, however, a natural and efficient choice of preconditioner
may be one that can not be solved easily by a direct method and thus may require an iterative
method (called inner iteration) itself to solve the preconditioned equations. There also exist cases
where the matrix operator contains inverses of some other matrices, an explicit form of which is
not available. Then the matrix-vector products can only be obtained approximately through an
inner iteration. The linear systems arising from the saddle point problems [3] is one such example.
For these types of problems, the original iterative method will be called the outer iteration and
the iterative method used for solving the preconditioner or forming the matrix-vector products is
called the inner iteration.
A critical question in the use of the inner-outer iterations is to what precision the preconditioner
should be solved, i.e., what stopping threshold should be used in the inner iteration. Clearly, a
very high precision will render the outer iteration close to the exact case and a very low one on the
other hand could make the outer iteration irrelevant. An optimal one will be to allow the stopping
Address Computing and Computational Mathematics Program, Department of Computer Science,
Stanford University, Stanford, CA 94305, USA E-mail : golub@sccm.stanford.edu. Research supported in part by
National Science Foundation Grant DMS-9403899.
Department of Applied Mathematics, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2.
E-mail: ye@gauss.amath.umanitoba.ca. Research supported by Natural Sciences and Engineering Research Council
of Canada.
threshold as large as possible in order to reduce the cost of inner iteration, while maintaining the
convergence characteristic of the outer iteration. In other words, we wish to combine the inner
and outer iterations so that the total number of operations is minimized. An answer to the above
question requires understanding of how the accuracy in the inner iteration affects the convergence of
the outer iteration. This has been studied by Golub and Overton [5, 6] for the Chebyshev iteration
and the Richardson iteration, by Munthe-Kaas [8] for preconditioned steepest descent algorithms,
by Elman and Golub [3] for the Uzawa algorithm for the saddle point problems, and by Giladi,
Golub and Keller [4] for the Chebyshev iteration with a varying threshold. Golub and Overton also
observed the interesting phenomenon for the preconditioned conjugate gradient algorithm (as the
outer iteration) that the convergence of CG could be maintained for very large stopping threshold
in the inner iteration; yet the convergence rate may be extremely sensitive to the change of the
threshold at certain point.
Given that the known convergence properties of CG depends strongly on a global minimization
property, the phenomenon found in [5, 6] seems very surprising and makes the inexact preconditioned
conjugate gradient an attractive option for implementing preconditioners. However, there
has been no theoretical analysis to explain these interesting phenomena, and its extreme sensitivity
to the threshold makes it hard to implement it in practice. The present paper is an effort in this
direction. We shall formulate and analyze an inexact preconditioned conjugate gradient method for
a symmetric positive definite system. By establishing a local relation between consecutive residual
norms, we prove a linear convergence property with a bound on the rate and illustrate that the
convergence rate is relatively insensitive to the change of threshold up to certain point. In par-
ticular, the result is used to arrive at a heuristic choice of the stopping threshold. We also show,
using a global relation as in [9], that the algorithm may have the superlinear convergence property
when the global orthogonality is nearly preserved, which usually occurs with smaller thresholds
and shorter iterations.
The paper is organized as follows. In section 2, we present the inexact preconditioned conjugate
gradient algorithm, and some of its properties together with two numerical examples illustrating
its numerical behaviour. We then give in section 3 a local analysis, showing the linear convergence
property, and in section 4, a global analysis, showing the superlinear convergence property. Finally,
we give some numerical examples in section 5 to illustrate our results.
We shall use the standard notation in numerical analysis. The M-norm ||·||_M and the M-
inner product are defined by ||v||_M = (v^T M v)^{1/2} and (u, v)_M = u^T M v,
respectively. cond(A) denotes the spectral condition number of a matrix A. A^+ denotes the
Moore-Penrose generalized inverse of A.
2 PCG with inner-outer Iterations
We consider the preconditioned conjugate gradient (PCG) algorithm for solving Ax = b with a
preconditioner M, where both A and M are symmetric positive definite. Then at each step of the PCG
iteration, a search direction p_n is found by first solving the preconditioned system Mz_n = r_n. If a
direct method is not available for solving M, an iterative method, possibly (though not necessarily)
CG itself, can be used to solve it. In this case, we find z_n by the inner iteration such that
Mz_n ≈ r_n,
with the stopping criterion
(RES)  ||r_n − Mz_n||_{M^{-1}} ≤ η ||r_n||_{M^{-1}},
where η is the stopping threshold in the inner iteration. Here we have used the M^{-1}-norm
for the theoretical convenience. In practical computations, one can replace (RES) by the following
criterion in terms of the 2-norm: ||r_n − Mz_n||_2 ≤ η ||r_n||_2.
Because z_n is only used to define the search direction p_n in PCG, only its direction has any
significance. Therefore, we also propose the following direction based stopping criterion
(ANG)  ∠_{M^{-1}}(Mz_n, r_n) ≤ θ,
i.e., the acute angle between Mz_n and r_n in the M^{-1}-inner product (or the acute angle between
z_n and M^{-1}r_n in the M-inner product) is at most θ.
Remark 1: It can be easily proved (using a triangular relation in the M^{-1}-inner product) that if
(RES) is satisfied, then
sin ∠_{M^{-1}}(Mz_n, r_n) ≤ η,
i.e., (ANG) is satisfied with
sin θ = η.
However, the converse implication is not necessarily true. Therefore the criterion (ANG) is less
restrictive than (RES). Namely, (ANG) may be satisfied before (RES) is in the inner itera-
tion. However, our numerical tests suggest that there is usually little difference between the two.
Our introduction of (ANG) and the use of the M^{-1}-inner product are primarily for the theoretical
convenience.
Remark 2: If the inner iteration is carried out by CG, the residual e_n = r_n − Mz_n is orthogonal to Mz_n in
the M^{-1}-inner product. Then again by a triangular relation, we have
||r_n − Mz_n||_{M^{-1}} = sin ∠_{M^{-1}}(Mz_n, r_n) · ||r_n||_{M^{-1}}.
So in this case, (RES) and (ANG) are equivalent. It can also be proved that the CG iteration
minimizes the residual as well as the angle in (ANG) over the Krylov subspace concerned.
Now, we formulate the inexact preconditioned conjugate gradient algorithm (IPCG) as follows.
Inexact Preconditioned Conjugate Gradient Algorithm (IPCG):
For n = 0, 1, 2, ... until convergence:
   find z_n with Mz_n ≈ r_n satisfying (RES) or (ANG);
   β_n := z_n^T (r_n − r_{n−1}) / z_{n−1}^T r_{n−1}   (β_0 := 0);
   p_n := z_n + β_n p_{n−1};
   α_n := z_n^T r_n / p_n^T A p_n   (equivalently α_n = p_n^T r_n / p_n^T A p_n);
   x_{n+1} := x_n + α_n p_n;
   r_{n+1} := r_n − α_n A p_n;
end for
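For concreteness, the following Python sketch implements these recurrences for dense matrices, with the preconditioner solve Mz_n ≈ r_n carried out by a few steps of plain CG stopped by the 2-norm form of (RES); the function names, the inner solver, and the use of explicit matrices are choices made for the sketch only.

import numpy as np

def inner_cg(M, r, eta, maxit=50):
    # Approximately solve M z = r with plain CG, stopping when ||r - M z||_2 <= eta ||r||_2.
    z = np.zeros_like(r)
    s = r.copy(); p = s.copy()
    rho = s @ s; tol = eta * np.linalg.norm(r)
    for _ in range(maxit):
        if np.sqrt(rho) <= tol:
            break
        Mp = M @ p
        alpha = rho / (p @ Mp)
        z += alpha * p
        s -= alpha * Mp
        rho_new = s @ s
        p = s + (rho_new / rho) * p
        rho = rho_new
    return z

def ipcg(A, M, b, eta, tol=1e-8, maxit=500):
    # Inexact PCG: beta uses the difference formula z_n^T(r_n - r_{n-1}) / z_{n-1}^T r_{n-1}.
    x = np.zeros_like(b)
    r = b - A @ x
    z = inner_cg(M, r, eta)
    p = z.copy()
    zr_old, r_old = z @ r, r.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (z @ r) / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = inner_cg(M, r, eta)
        beta = (z @ (r - r_old)) / zr_old
        p = z + beta * p
        zr_old, r_old = z @ r, r.copy()
    return x

Replacing inner_cg by any other inner solver (SOR, preconditioned CG, or an artificial perturbation of r as in the example of Section 2.1) reproduces the inner-outer setting studied in the paper.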
Clearly, if η = 0 (or θ = 0) the above algorithm is the usual PCG. With η (or θ) > 0, we allow the
preconditioner equation Mz_n = r_n to be solved only approximately.
Remark 3. As is well-known, there are several different formulations for α_n and β_n in the algo-
rithm, which are all equivalent in the exact PCG. For the inexact case, this may no longer be true.
Our particular formulation here implies that some local orthogonality properties are maintained
even with inexact z_n, as given in the next lemma. Our numerical tests also show that it indeed
leads to a more stable algorithm. For example, if β_n is computed by the old form z_n^T r_n / z_{n−1}^T r_{n−1},
the algorithm may not converge for larger η.
Lemma 1 The sequences generated by Algorithm IPCG satisfy the following local orthogonality properties.
Proof First
z T
Then
Thus z T
z T
Now supposing p T
and
Thus the lemma follows from p T
by an induction argument.
By eliminating p n in the recurrence, IPCG can be written with two consecutive steps as a second
order recurrence
a unified form that includes Chebyshev and second order Richardson iterations (cf [2]). From the
local orthogonality, we obtain the following local minimization property.
Proposition 1
and
Proof By the orthogonality, kr
A \Gamma1 . It
is easy to check that z T n r n 6= 0 for either (RES) or (ANG). So ff n 6= 0 and thus we have the strict
inequality.
For the second inequality, we just need to show r n+1 ? A \Gamma1 Az n and r This follows from
r T
Recall that the steepest descent method constructs r sd
Ar n from r n such that
kr sd
. A bound on kr sd
can be obtained from the Kantrovich
inequality. An inexact preconditioned version of the steepest descent method [8] is to construct
r sd
n Az n such that
kr sd
A
z
Then the second inequality of the above proposition shows that kr n+1 k A
(by choosing
2.1 A Numerical Example
To motivate our discussion in the next two sections, we now present two numerical examples to
illustrate the typical convergence behaviour of IPCG. We simulate IPCG by artificially perturbing
applying PCG with I and z is a pseudo random
vector with entries uniformly distributed in (\Gamma0:5; 0:5), and j is a perturbation parameter. The following
are the convergence curves for two diagonal matrices
(10000 \Theta 10000) and
1000] (1000 \Theta 1000) with a random constant term.
We first note that the A^{-1}-norm of the residuals decreases monotonically (cf. Prop. 2.1) and that
convergence occurs for quite large η. We further observe from the second example that for smaller η
the superlinear convergence property of exact CG is recovered. In fact, the curves follow the
unperturbed case very closely. When η is larger, the convergence tends to be linear, with a rate
depending on η. Interestingly, the convergence rate is insensitive to the change of magnitude of η
until a certain point (0.1 in the first and 0.4 in the second), around which it becomes extremely
sensitive to a relatively small change of η.
Our explanation and analysis of the above observations will be given in two categories, although
there is no clear boundary between the two. For smaller η, some global properties (e.g. the
orthogonality among the r_n) are expected to be preserved, which leads to a global near minimization
property and thus to superlinear convergence. For larger η, the global minimization property will
be lost. However, we will show that some local relation from r_n to r_{n+1} is not destroyed, which turns
out to preserve a linear convergence property. We consider each of these two categories separately
in the next two sections.
3 Linear Convergence of IPCG
In this section we present a local relation between consecutive residual norms, which leads to a linear
convergence bound for IPCG.
Figure 1: Convergence curves for various perturbations η (solid (bottom): ...). In both plots the horizontal axis is the number of iterations and the vertical axis is the A^{-1}-norm of the residuals.
Our basic idea is to relate the reduction factor of IPCG
to the reduction factor of the steepest descent method (along the inexact preconditioned gradient
direction z n , see (3) of section 2)
r=rn \GammatAz n
From Proposition 2.1, we have oe n -
Theorem 1 Let oe n and fl n be defined as in (4) and (5) for IPCG. Then for
(a) oe
with
z
(b)
r T
r T
\Gammag n
\Gammag
where g k is defined by
and
Proof From r T
r T
Substituting ff
Apn in, we obtain
r T
Thus
r T
using z
Noting
that
from (2), we have
ff \Gamma2
z T
z T
oe
n r n , and we have used Ap
substituting
the above and (5) into (8), we obtain part (a).
For part (b), let
. From (a),
oe
oe
first step of IPCG amounts to one step of the steepest descend method),
by the definition. Now (b) follows from r T
and oe
In the exact case, j by the orthogonality and the above equations are simplified. In the
inexact case, using the local orthogonality (Lemma 2.1), j n can be bounded. We shall consider
the stopping criterion (ANG) only in this section. Since (RES) implies (ANG) with sin
results here apply to the (RES) case as well by simply replacing sin ' by j.
First we need the following lemma that is geometrically clear. (An acute angle ' 2 [0; -=2]
between two vectors u; v in an inner product ! \Delta; \Delta ? is defined by cos
resp.) be the acute angle between u and u
then the acute angle between u 1 and v 1 is at least -=2
Proof We can assume that u; u are all unit vectors. Then we write
vector orthogonal to u ( v resp.
Let
where we note that u is orthogonal to v and v orthogonal to u.
is the norm associated with
- a cos ' 1 sin
- a cos ' 1 sin
where we note that the second last expression is an increasing function of a 2 [0; 1] for
and therefore is bounded by its value at a = 1.
Lemma 3 If z n satisfies (ANG) with ' -=4, then
z T
z
Proof First (Mz
By applying the above Lemma with the inner
product defined by M \Gamma1 to the pairs Mz which satisfy (ANG), we obtain
i.e. jz T
On the other hand, jz T
Furthermore, it is easy to check that, for any vector v
A
where - min and - max denotes respectively the minimal and the maximal eigenvalues. Thus,
which completes the proof of the lemma.
We now present a bound for fl n that has been derived by Munthe-Kaas [8]. We shall use one of
the variations of the bounds in [8, Theorem 5] and we repeat some arguments of [8] below. First
the following lemma is a generalization of [1, Corollary 4]
Lemma 4 [8, Lemma 2] Suppose p and q 2 R n are such that
kpk kqk
(p
(q
where W is a symmetric positive definite matrix and -
kpk kqk
So applying the above Lemma to p; q and W , we obtain
r T
(p
(q
1\Gammasin ' . Then we have
r T
We now present a linear convergence bound for IPCG.
Theorem 2 If
converges and for even n,
where
s
Proof We consider two consecutive oe n of IPCG. By
apply. So from Theorem 1 we have
oe
where fl is as defined in Lemma 5. Then
\Gammag n+1
and
. So
So, oe n oe n+1 - (oeK) 4 . Therefore, for even n, the bound follows from kr n+1 k A
shows that IPCG converges. Finally, by expanding oeK in terms
of
using
In terms of the stopping criterion (RES), the same result holds with sin θ replaced by η (see
Remark 1). Therefore, if η is sufficiently small, IPCG converges with a rate depending
on σ (the standard PCG convergence rate). In particular, the bound indicates that the IPCG
convergence rate is relatively insensitive to the magnitude of η for smaller η but increases sharply
at a certain point (see the rate curves in Fig. 2). However, the bound on the convergence rate here
tends to be pessimistic and it does not recover the classical bound for the case η = 0. Nevertheless,
it does seem to reflect the trend of how the rate changes as η changes.
To compare the bound with the actual numerical results, we consider an example similar to the
one in section 2. Namely, we consider a diagonal matrix whose eigenvalues are linearly distributed
on [1; -], and apply PCG with the same kind of random perturbation. We carry out IPCG and
compute the actual convergence rate by (kr ranging from 10 \Gamma6 to 1. In
Figure
2 the graphs of the bound and the actual computed rate are plotted (for the case
and 100).
By comparing the actual convergence rate and its bound, we observe that the bound, as a worst
case bound, follows the trend of the actual convergence rate curve quite closely. The bound reaches
1 at
In particular, j 0 seems to be a good estimate of the point at which the actual rate starts to increase
significantly (i.e., the slope is greater than 1). Note that when the slope is less than 1, any increase
in the rate may be compensated by a comparable or larger increase in η. We therefore advocate a
value around η_0 as a heuristic choice of the stopping threshold for inner iterations. Our numerical
examples in Section 5 confirm that this is indeed a reasonable strategy for balancing the numbers
of inner and outer iterations.
Figure 2: Actual convergence rate and its bound versus η. In both plots the horizontal axis is the stopping threshold η = sin(θ) and the vertical axis is the convergence rate; the two curves shown are the actual rate and the bound.
4 Superlinear Convergence of IPCG
The bound in the previous section demonstrates linear convergence of IPCG. We observed in section
2 that for smaller j, IPCG may actually enjoy the superlinear convergence property of exact CG.
Here we explain this phenomenon by the method of [9], i.e. by considering a global equation that
is approximately satisfied by IPCG. We remark that a global property is necessary in examining
superlinear convergence.
Let
From r respectively, we obtain
the following matrix equations for IPCG
r
where
and U n =B
Combining the equations in (11), we obtain the following equation
AM
r
Note that -
U n is a tridiagonal matrix such that e T
Therefore, the inexact case satisfies an equation similar to the exact
case with the error term AM
We rewrite (13) in a scaled form as in [9, Eq. (8)]. Let D
Then from (13)
r n+1
where
By the
stopping criteria,
Now applying the
same argument in the proof of [9, Theorem 3.5] to (14), we obtain the following theorem. (The
details are omitted here).
Theorem 3 Assume r are linearly independent and let V T
(i.e. the matrix consisting of the first n rows of -
where
r T
To interpret the above result, we note that kr T
R n k=kr n+1 k A \Gamma1 is a measure of M
orthogonality among the residual vectors. If then the residuals of PCG are orthogonal
with respect to M \Gamma1 and hence K but is not too large, the loss of
orthogonality among the residuals may be gradual and it will take modest length of run before
n are of magnitude j. Therefore, in this regime, kr n+1 k A
. Note that ffl
decreases superlinearly because of annihilation of the extreme spectrum (see [10]).
See [9] for some artificially perturbed numerical examples.
In summary, if the global M \Gamma1 -orthogonality among the residual vectors are nearly maintained
to certain step, the residual of IPCG is very close to that of exact PCG up to that point and thus
may display the superlinear convergence property.
5 Numerical Examples
In this section, we present numerical examples of inner-outer iterations, testing various choices of
the stopping threshold η as compared with the heuristic value η_0 of (10), where κ = cond(M^{-1}A). For this
purpose, we shall consider thresholds η = d η_0 for d ranging from 0.01 to 5, as well as
the extreme one-inner-step case described below.
We consider the elliptic problem −∇·(a(x, y)∇u) = f on the unit square
with the homogeneous Dirichlet boundary condition. Using the uniform five-point finite
difference discretization with the step size h = 1/(N+1), we obtain an n × n linear system (with n = N^2).
A is the discretization of −∇·(a(x, y)∇) and is an (N × N) block tridiagonal matrix.
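As an illustration of this discretization, the following Python sketch assembles the five-point matrix for −∇·(a∇u) with homogeneous Dirichlet conditions; sampling the coefficient at edge midpoints is one common choice and an assumption here, as is the dense storage (used only to keep the sketch short).

import numpy as np

def five_point_matrix(N, a):
    # Assemble the N^2 x N^2 five-point finite-difference matrix on the unit
    # square with h = 1/(N+1); a(x, y) is sampled at the four edge midpoints.
    h = 1.0 / (N + 1)
    n = N * N
    A = np.zeros((n, n))
    idx = lambda i, j: i + j * N          # natural (row-by-row) ordering
    for j in range(N):
        for i in range(N):
            x, y = (i + 1) * h, (j + 1) * h
            aE, aW = a(x + h / 2, y), a(x - h / 2, y)
            aN, aS = a(x, y + h / 2), a(x, y - h / 2)
            k = idx(i, j)
            A[k, k] = (aE + aW + aN + aS) / h**2
            if i + 1 < N: A[k, idx(i + 1, j)] = -aE / h**2
            if i - 1 >= 0: A[k, idx(i - 1, j)] = -aW / h**2
            if j + 1 < N: A[k, idx(i, j + 1)] = -aN / h**2
            if j - 1 >= 0: A[k, idx(i, j - 1)] = -aS / h**2
    return A

# Example: a(x, y) = 1 gives the discrete Laplacian L used below as a preconditioner.
# L = five_point_matrix(32, lambda x, y: 1.0)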
We consider solving this equation using the block Jacobi preconditioner M (i.e., M is the block
diagonal part of A) or using the discrete Laplacian L (i.e., L is the discretization of −Δ) as the
preconditioner. For the purpose of testing inner-outer iterations, an iterative method (i.e., SOR,
CG or preconditioned CG) is used for the preconditioners. We denote these combinations by CG-SOR, CG-CG
and CG-PCG, respectively.
We compare the number of outer (n_outer) and total inner (n_inner) iterations required to reduce
the A^{-1}-norm of the residual by 10^{-8}; ... will be used in our tests.
In the first test, a(x, y) = ... and the discrete Laplacian
preconditioner L is used. SOR (with the optimal parameter [11]), CG and PCG with the modified
incomplete Cholesky factorization are used in the inner iteration to solve Lz = r. The results are
listed in Tables 1, 2 and 3.
Table 1: Iteration Counts for CG-SOR with discrete Laplacian preconditioner L
In the first rows of the tables, we also list the results for the extreme case in which one step of the inner
iteration is carried out for each outer iteration. In this way, it is closely related to those cases with η
very close to 1. Interestingly, however, this extreme case is usually equivalent to applying CG
directly to the original matrix A or with a preconditioner. For example, if the inner iteration is
CG itself, one step of inner CG produces an inner solution z_n in the same direction as r_n, and thus the
outer iteration is exactly CG applied to A (see the Appendix for a detailed discussion of other inner
solvers). This is why convergence still occurs in these extreme cases; our analysis, which is
based on the accuracy of Mz_n ≈ r_n only, does not include this case. However, if z_n is chosen only to satisfy
the stopping criterion, but not in the particular way described here, convergence is not expected. The relatively
small iteration counts are due to the fact that the original system is not too ill-conditioned.
Table 2: Iteration Counts for CG-CG with discrete Laplacian preconditioner L
Table 3: Iteration Counts for CG-PCG with discrete Laplacian preconditioner L
In comparing the performance of different η with respect to the outer iteration counts, it appears
that η_0 lies right around the point at which the outer iteration count starts to increase significantly.
This confirms our convergence analysis for the outer iteration. For the total inner iteration counts,
the performance for larger η seems to be irregular among different inner solvers, and this can
be attributed to the different convergence characteristics of the extreme case for different
inner solvers (see the Appendix). In particular, we observe that for larger η, CG-CG (or CG-PCG)
performs better than CG-SOR. This phenomenon was also observed in [3]. Overall, η_0 seems to be
a reasonable choice in balancing the numbers of inner and outer iterations.
In the above example, κ and thus η_0 remain nearly constant for different N. In the second
test, we use the block Jacobi preconditioner M and a variable coefficient a(x, y). Then ...
and 100, respectively. Both SOR and CG are used in the inner iteration. The results are listed in
Tables 4 and 5 in an Appendix. Similar behaviour was observed.
6 Conclusion
We have formulated and analyzed an inexact preconditioned conjugate gradient method. The
method is proved to be convergent for fairly large thresholds in the inner iterations. A linear
convergence bound, though pessimistic, is obtained, which leads to a heuristic choice of the stopping
threshold in the inner iteration. Numerical tests demonstrate the efficiency of the choice.
It still remains an unsolved problem to choose an optimal η that minimizes the total amount of
work (see [4]), although η_0 here provides a first approximation. Solving such a problem demands a
sharper bound in the outer iteration and analysis of the near-extreme threshold cases. It is not clear
whether a better bound could be obtained from the approach of the present paper. It seems there
are more properties of IPCG awaiting discovery. For example, better bounds for the steepest
descent reduction factor γ_n may exist for IPCG, which in turn would lead to improvements to the
results here.
--R
Some inequalities involving the euclidean condition of a matrix
A Generalized Conjugate Gradient Method for the Numerical Solution of Elliptic Partial Differential Equations
Inexact and preconditioned Uzawa algorithms for saddle point problems
Inner and outer iterations for the Chebyshev algorithm Stanford SCCM Technical Report 95-12
Convergence of a two-stage Richardson iterative procedure for solving systems of linear equations
The convergence of inexact Chebyshev and Richardson iterative methods for solving linear systems.
Matrix Computations
The convergence rate of inexact preconditioned steepest descent algorithm for solving linear systems
Analysis of the Finite Precision Bi-Conjugate Gradient algorithm for Nonsymmetric Linear Systems
The rate of convergence of conjugate gradients
Iterative Solution of Large Linear Systems Academic Press
--TR
--CTR
Carsten Burstedde , Angela Kunoth, Fast iterative solution of elliptic control problems in wavelet discretization, Journal of Computational and Applied Mathematics, v.196 n.1, p.299-319, 1 November 2006
Angela Kunoth, Fast Iterative Solution of Saddle Point Problems in Optimal Control Based on Wavelets, Computational Optimization and Applications, v.22 n.2, p.225-259, July 2002
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | conjugate gradient method;inexact preconditioner;inner-outer iterations |
344999 | Distributed Schur Complement Techniques for General Sparse Linear Systems. | This paper presents a few preconditioning techniques for solving general sparse linear systems on distributed memory environments. These techniques utilize the Schur complement system for deriving the preconditioning matrix in a number of ways. Two of these preconditioners consist of an approximate solution process for the global system, which exploits approximate LU factorizations for diagonal blocks of the Schur complement. Another preconditioner uses a sparse approximate-inverse technique to obtain certain local approximations of the Schur complement. Comparisons are reported for systems of varying difficulty. | Introduction
The successful solution of many "Grand-Challenge" problems in scientific computing depends
largely on the availability of adequate large sparse linear system solvers. In this context, iterative
solution techniques are becoming a mandatory replacement to direct solvers due to their
more moderate computational and storage demands. A typical "Grand-Challenge" application
requires the use of powerful parallel computing platforms as well as parallel solution algorithms
to run on these platforms. In distributed-memory environments, iterative methods are relatively
easy to implement compared with direct solvers, and so they are often preferred in spite of their
unpredictable performance for certain types of problems.
However, users of iterative methods do face a number of issues that do not arise in direct
solution methods. In particular, it is not easy to predict how fast a linear system can be solved
to a certain accuracy and whether it can be solved at all by certain types of iterative solvers.
This depends on the algebraic properties of the matrix, such as the condition number and the
clustering of the spectrum.
With a good preconditioner, the total number of steps required for convergence can be reduced
dramatically, at the cost of a slight increase in the number of operations per step, resulting
This work was supported in part by ARPA under grant number NIST 60NANB2D1272, in part by NSF under
grant CCR-9618827, and in part by the Minnesota Supercomputer Institute.
y Department of Computer Science and Engineering, University of Minnesota, 200 Union Street S.E., Min-
neapolis, MN 55455, e-mail:saad@cs.umn.edu.
z Department of Computer Science, 320 Heller Hall, 10 University Drive, Duluth, Minnesota 55812-2496.
masha@d.umn.edu.
in much more efficient algorithms in general. In distributed environments, an additional benefit
of preconditioning is that it reduces the parallel overhead, and therefore it decreases the total
parallel execution time. The parallel overhead is the time spent by a parallel algorithm in performing
communication tasks or in idling due to synchronization requirements. The algorithm
will be efficient if the construction and the application of the preconditioning operation both
have a small parallel overhead. A parallel preconditioner may be developed in two distinct
ways: extracting parallelism from efficient sequential techniques or designing a preconditioner
from the start specifically for parallel platforms. Each of these two approaches has its advantages
and disadvantages. In the first approach, the preconditioners yield the same good convergence
properties as those of a sequential method but often have a low degree of parallelism, leading
to inefficient parallel implementations. In contrast, the second approach usually yields preconditioners
that enjoy a higher degree of parallelism, but that may have inferior convergence
properties.
This paper addresses mainly the issue of developing preconditioners for distributed sparse
linear systems by regarding these systems as distributed objects. This viewpoint is common in
the framework of parallel iterative solution techniques [15, 14, 18, 20, 10, 1, 2, 8] and borrows
ideas from domain decomposition methods that are prevalent in the PDE literature. The key
issue is to develop preconditioners for the global linear system by exploiting its distributed
data structure. Recently, a number of methods have been developed which exploit the Schur
complement system related to interface variables, see for example, [12, 2, 8]. In particular,
several distributed preconditioners included in the ParPre package [8] employ variants of Schur
complement techniques. One difference between our work and [2] is that our approach does not
construct a matrix to approximate the global Schur complement. Instead, the preconditioners
constructed are entirely local. However, they also have a global nature in that they do attempt
to solve the global Schur complement system approximately by an iterative technique.
The paper is organized as follows. Section 2 gives a background regarding distributed representations
of sparse linear systems. Section 3 starts with a general description of the class
of domain decomposition methods known as Schur complement techniques. This section also
presents several distributed preconditioners that are defined via various approximations to the
Schur complement. The numerical experiment section (Section 4) contains a comparison of these
preconditioners for solving various distributed linear systems. Finally, a few concluding remarks
are made in Section 5.
Distributed sparse linear systems
Consider a linear system of the form
where A is a large sparse nonsymmetric real matrix of size n. Often, to solve such a system on
a distributed memory computer, a graph partitioner is first invoked to partition the adjacency
graph of A. Based on the resulting partitioning, the data is distributed to processors such that
pairs of equations-unknowns are assigned to the same processor. Thus, each processor holds a
set of equations (rows of the linear system) and vector components associated with these rows.
A good distributed data structure is crucial for the development of effective sparse iterative
solvers. It is important, for example, to have a convenient representation of the local equations
as well as the dependencies between the local and external vector components. A preprocessing
phase is thus required to determine these dependencies and any other information needed during
the iteration phase. The approach described here follows that used in the PSPARSLIB package,
see [20, 22, 14] for additional details.
Figure
1 shows a "physical domain" viewpoint of a sparse linear system. This representation
borrows from the domain decomposition literature - so the term "subdomain" is often used
instead of the more proper term "subgraph". Each point (node) belonging to a subdomain is
actually a pair representing an equation and an associated unknown. It is common to distinguish
between three types of unknowns: (1) Interior unknowns that are coupled only with local
equations; (2) Local interface unknowns that are coupled with both non-local (external) and
local equations; and (3) External interface unknowns that belong to other subdomains and are
coupled with local equations. The matrix in Figure 2 can be viewed as a reordered version of
the equations associated with a local numbering of the equation-unknown pairs. Note that local
equations do not necessarily correspond to contiguous equations in the original system.
Figure 1: A local view of a distributed sparse matrix (showing internal points, local interface points, and external interface points).
In
Figure
2, the rows of the matrix assigned to a certain processor have been split into
two parts: the local matrix A i , which acts on the local vector components, and the rectangular
interface matrix X_i, which acts on the external vector components. Accordingly, the local
equations can be written as follows:
A_i x_i + X_i y_{i,ext} = b_i,                (2)
where x_i represents the vector of local unknowns, y_{i,ext} are the external interface variables, and
b i is the local part of the right-hand side vector. Similarly, a (global) matrix-vector product Ax
can be performed in three steps. First, multiply the local vector components x i by A i , then
receive the external interface vector components y i;ext from other processors, and finally multiply
the received data by X i and add the result to that already obtained with A i .
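The three-step product just described can be sketched as follows; the callable exchange_interface is a hypothetical stand-in for the interprocessor communication step (e.g., MPI sends and receives driven by the lists built in the preprocessing phase).

import numpy as np

def local_matvec(A_loc, X_loc, x_loc, exchange_interface):
    # One processor's share of the global product y = A x.
    y = A_loc @ x_loc                  # step 1: purely local part
    x_ext = exchange_interface(x_loc)  # step 2: receive external interface values
    y += X_loc @ x_ext                 # step 3: contribution of external unknowns
    return y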
Figure
2: A partitioned sparse matrix.
An important feature of the data structure used is the separation of the interface points
from the interior points. In each processor, local points are ordered such that the interface
points are listed last after the interior points. Such ordering of the local data presents several
advantages, including more efficient interprocessor communication, and reduced local indirect
addressing during matrix-vector multiplication.
With this local ordering, each local vector of unknowns x_i is split into two parts: the sub-vector
u_i of internal vector components followed by the subvector y_i of local interface vector
components. The right-hand side b_i is conformally split into the subvectors f_i and g_i.
When block partitioned according to this splitting, the local matrix A_i residing in processor i
has the form:

    A_i = ( B_i  F_i )
          ( E_i  C_i ),                (3)

so the local equations (2) can be written as follows:

    ( B_i  F_i ) ( u_i )     (            0             )   ( f_i )
    ( E_i  C_i ) ( y_i )  +  ( Σ_{j∈N_i} E_{ij} y_j )  =  ( g_i ).                (4)

Here, N_i is the set of indices of the subdomains that are neighbors to the subdomain i. The term
E_{ij} y_j is a part of the product X_i y_{i,ext} which reflects the contribution to the local equation from
the neighboring subdomain j. The sum of these contributions is the result of multiplying X_i by
the external interface unknowns:

    X_i y_{i,ext} = Σ_{j∈N_i} E_{ij} y_j.

It is clear that the result of this multiplication affects only the local interface unknowns, which
is indicated by the zero in the top part of the second term of the left-hand side of (4).
The preprocessing phase should construct the data-structure for representing the matrices A i ,
and X i . It should also form any additional data structures required to prepare for the intensive
communication that takes place during the iteration phase. In particular, each processor needs
to know (1) the processors with which it must communicate, (2) the list of interface points,
and (3) a break-up of this list into sublists that must be communicated among neighboring
processors. For further details see [20, 22, 14].
3 Derivation of Schur complement techniques
Schur complement techniques refer to methods which iterate on the interface unknowns only,
implicitly using internal unknowns as intermediate variables. A few strategies for deriving Schur
complement techniques will now be described. First, the Schur complement system is derived.
3.1 Schur complement system
Consider equation (2) and its block form (4). Schur complement systems are derived by eliminating
the variable u_i from the system (4). Extracting from the first equation u_i = B_i^{-1}(f_i − F_i y_i)
yields, upon substitution in the second equation,

    S_i y_i + Σ_{j∈N_i} E_{ij} y_j = g_i − E_i B_i^{-1} f_i,                (5)

where S_i is the "local" Schur complement

    S_i = C_i − E_i B_i^{-1} F_i.                (6)

The equations (5) for all subdomains i constitute a system of equations involving
only the interface unknown vectors y_i. This reduced system has a natural block structure
related to the interface points in each subdomain:

    ( S_1    E_12   ...   E_1p ) ( y_1 )   ( g_1 − E_1 B_1^{-1} f_1 )
    ( E_21   S_2    ...   E_2p ) ( y_2 ) = ( g_2 − E_2 B_2^{-1} f_2 )                (7)
    ( ...            ...       ) ( ... )   (          ...           )
    ( E_p1   E_p2   ...   S_p  ) ( y_p )   ( g_p − E_p B_p^{-1} f_p )

The diagonal blocks in this system, the matrices S_i, are dense in general. The off-diagonal blocks
E_{ij}, which are identical with those involved in the global system (4), are sparse.
The system (7) can be written as

    S y = g',

where y = (y_1, ..., y_p) is the vector of all the interface variables and g' is
the right-hand side vector with components g_i − E_i B_i^{-1} f_i. Throughout the paper, we will abuse the notation slightly for the
transpose operation, by defining y = (y_1, ..., y_p)^T
rather than the actual transpose of the matrix with column vectors y_1, ..., y_p. The matrix
S is the "global" Schur complement matrix, which will be exploited in Section 3.3.
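For illustration, the local Schur complement S_i = C_i − E_i B_i^{-1} F_i can be formed explicitly or applied to a vector as in the following sketch; the dense solve with B_i stands in for the sparse factorizations used in practice, and the function names are ours.

import numpy as np

def local_schur(B, E, F, C):
    # Forms S_i = C_i - E_i B_i^{-1} F_i (dense illustration only; S_i is dense
    # in general and is usually approximated or applied implicitly).
    return C - E @ np.linalg.solve(B, F)

def apply_local_schur(B, E, F, C, y):
    # Applies S_i to a vector: three (sparse) products and one solve with B_i.
    return C @ y - E @ np.linalg.solve(B, F @ y)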
3.2 Schur complement iterations
One of the simplest ideas that comes to mind for solving the Schur complement system (7) is
to use a block relaxation method associated with the blocking of the system. Once the Schur
complement system is solved the interface variables are available and the other variables are
obtained by solving local systems. As is known, with a consistent choice of the initial guess, a
block-Jacobi (or SOR) iteration with the reduced system is equivalent to a block-Jacobi iteration
(resp. SOR) on the global system (see, e.g., [11], [19]). The k-th step of a block-Jacobi iteration
on the global system takes the following local form:
\GammaS
Here, an asterisk denotes a nonzero block whose actual expression is unimportant. A worthwhile
observation is that the iterates with interface unknowns y satisfy an independent relation
or equivalently
which is nothing but a Jacobi iteration on the Schur complement system (7).
From a global viewpoint, a primary iteration for the global unknowns is
As was explained above, the vectors of interface unknowns y associated with the primary iteration
satisfy an iteration (called Schur complement iteration)
The matrix G is not known explicitly, but it is easy to advance the above iteration by one step
from an arbitrary (starting) vector v, meaning that it is easy to compute Gv+h for any v. This
viewpoint was taken in [13, 12].
The sequence y (k) can be accelerated with a Krylov subspace algorithm, such as GMRES
[21]. One way to look at this acceleration procedure is to consider the solution of the system
(I − G) y = h.
The right-hand side h can be obtained from one step of the iteration (12) computed for the
initial vector 0, i.e., h = G · 0 + h.
Given the initial guess y^{(0)}, the initial residual s can be obtained from s = h − (I − G) y^{(0)}.
Matrix-vector products with I − G can be obtained from one step of the primary iteration. To
compute w = (I − G) y, proceed as follows:
1. Perform one step of the primary iteration applied to y, obtaining y_1 = G y + h.
2. Set w := y_1 − h.
3. Compute w := y − w.
The presented global viewpoint shows that a Schur complement technique can be derived for
any primary fixed-point iteration on the global unknowns. Among the possible choices of the
primary iteration there are Jacobi and SOR iterations as well as iterations derived (somewhat
artificially) from ILU preconditioning techniques.
The main disadvantage of solving the Schur complement system is that the solve with the
block B_i (needed to operate with the matrix S_i) should be accurate. We can compute
the dense matrix S i explicitly or solve system (5) by using a computation of the matrix-vector
product S i y, which can be carried out with three sparse matrix-vector multiplies and one accurate
linear system solve. As is known (see [23]), because of the large computational expense
of these accurate solves, the resulting decrease in iteration counts is not sufficient to make the
Schur complement iteration competitive. Numerical experiments will confirm this.
3.3 Induced Preconditioners
A key idea in domain decomposition methods is to develop preconditioners for the global system
(1) by exploiting methods that approximately solve the reduced system (7). These tech-
niques, termed "induced preconditioners" (see, e.g., [19]), can be best explained by considering
a reordered version of the global system (1) in which all the internal vector components
are labeled first followed by all the interface vector components y. Such re-ordering
leads to a block systemB
which also can be rewritten as ' B F
'' u
y
Note that the B block acts on the interior unknowns. Eliminating these unknowns from the
system leads to the Schur complement system (7).
Induced preconditioners for the global system are obtained by exploiting a block LU factorization
for A. Consider the factorization

    A = ( B  0 ) ( I  B^{-1}F )
        ( E  S ) ( 0     I    ),                (16)

where S is the global Schur complement

    S = C − E B^{-1} F.

This Schur complement matrix is identical to the coefficient matrix of system (7) (see, e.g., [19]).
The global system (15) can be preconditioned by an approximate LU factorization constructed
such that

    L = ( B   0  )
        ( E  M_S )

and

    U = ( I  B^{-1}F )
        ( 0     I    ),                (17)

with M_S being some approximation to S.
Two techniques of this type are discussed in the rest of this section. The first one exploits
the relation between an LU factorization and the Schur complement matrix, and the second
uses approximate-inverse techniques to obtain approximations to the local Schur complements.
The preconditioners presented next are based on the global system of equations for the Schur
unknowns (system (5) or the equivalent form (7)) and on the global block LU factorization (16),
or rather its approximated version (17).
3.4 Approximate Schur LU preconditioner
The idea outlined in the previous subsection is that, if an approximation ~
S to the Schur complement
S is available, then an approximate solve with the whole matrix A, for all the global
unknowns can be obtained which will require (approximate or exact) solves with ~
S and B. It is
also possible to think locally in order to act globally. Consider (4) and (5). As is readily seen
from (4), once approximations to all the components of the interface unknowns y i are available,
corresponding approximations to the internal components u i can be immediately obtained from
solving
with the matrix B i in each processor. In practice, it is often simpler to solve a slightly larger
system obtained from (2) or
because of the availability of the specific local data structure.
Now return to the problem of finding approximate solutions to the Schur unknowns. For
convenience, (5) is rewritten as a preconditioned system with the diagonal blocks:
Note that this is simply a block-Jacobi preconditioned Schur complement system. System (19)
may be solved by a GMRES-like accelerator, requiring a solve with S i at each step. There are
at least three options for carrying out this solve with
1. Compute each S i exactly in the form of an LU factorization. As will be seen shortly, this
representation can be obtained directly from an LU factorization of A i .
2. Use an approximate LU factorization for S i , which is obtained from an approximate LU
factorization for A i .
3. Obtain an approximation to S i using approximate-inverse techniques (see the next sub-
section) and then factor it using an ILU technique.
The methods in options (1) and (2) are based on the following observation (see [19]). Let A_i
have the form (3) and be factored as A_i = L_i U_i, with

    L_i = ( L_{B_i}              0       )    and    U_i = ( U_{B_i}   L_{B_i}^{-1} F_i )
          ( E_i U_{B_i}^{-1}   L_{S_i}   )                  (    0          U_{S_i}     ).

Then, a rather useful result is that L_{S_i} U_{S_i}
is equal to the Schur complement S_i associated with
the partitioning (3). This result can be easily established by "transferring" the matrices U_{B_i}
and U_{S_i}
from the U-matrix to the L-matrix in the factorization:

    A_i = ( B_i            0           ) ( I   B_i^{-1} F_i )
          ( E_i    L_{S_i} U_{S_i}     ) ( 0        I       ),

from which the result S_i = L_{S_i} U_{S_i}
follows by comparison with (16).
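The fact that the trailing blocks of the L and U factors of A_i are exactly the factors of S_i can be checked with a small sketch such as the one below; the no-pivoting LU routine is only an illustration of the idea (an incomplete factorization with the same ordering behaves analogously), and the function names are ours.

import numpy as np

def lu_nopivot(A):
    # Plain Doolittle LU without pivoting (illustration only; assumes the
    # leading principal minors are nonsingular, as in an ILU setting).
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return L, U

def schur_factors_from_lu(A_loc, n_int):
    # With interior unknowns ordered first (size n_int), the trailing blocks of
    # the factors of A_i are the factors of the local Schur complement:
    # S_i = L_S U_S.
    L, U = lu_nopivot(A_loc)
    return L[n_int:, n_int:], U[n_int:, n_int:]

# Sanity check against the block definition (B, F in the leading block rows):
# L_S, U_S = schur_factors_from_lu(A_loc, n_int)
# assert np.allclose(L_S @ U_S,
#                    A_loc[n_int:, n_int:] - A_loc[n_int:, :n_int] @
#                    np.linalg.solve(A_loc[:n_int, :n_int], A_loc[:n_int, n_int:]))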
When an approximate factorization to A i is available, an approximate LU factorization to S i
can be obtained canonically by extracting the related parts from the L i and U i matrices. In other
words, an ILU factorization for the Schur complement is the trace of the global ILU factorization
on the unknowns associated with the Schur complement. For a local Schur complement, the
ILU factorization obtained in this manner leads to an approximation ~
S i of the local Schur
complement S i . Instead of the exact Schur complement system (5), or equivalently (19), the
following approximate (local) Schur complement system derived from (19) can be considered on
each processor i:
The global system related to equations (20) can be solved by a Krylov subspace method, e.g.,
GMRES. The matrix-vector operation associated with this solve involves a certain matrix M S
(cf. equation (17)). The global preconditioner (17) can then be defined from M S .
Given a local ILU factorization
with which the factorization
is associated, the following algorithm applies in each processor the global approximate Schur LU
preconditioner to a block vector (f to obtain the solution . The algorithm uses m
iterations of GMRES without restarting to solve the local part of the Schur complement system
(20). Then, the interior vector components are calculated using equation (18) (Lines 21-25). In
the description of Algorithm 3.1, P represents the projector that maps the whole block vector
into the subvector vector g i associated with the interface variables.
Algorithm 3.1 Approximate Schur-LU solution step with GMRES
1. Given: local right-hand side
2. Define an (m
Hm and set -
3. Arnoldi process:
4. y i := 0
5. r := (L i U
7. For do
8. Exchange interface vector components y i
9. t := (L S i
11. For
12. h l;j := (w; v l )
13. w
14. EndDo
15. h j+1;j := kwk 2 and v j+1 := w=h j+1;j
16. EndDo
17.
18. Form the approximate solution for interface variables:
19. Compute
21. Find other local unknowns:
22. Exchange interface vector components y i
24. rhs := rhs \Gamma
25.
A few explanations are in order. Lines 4-6 compute the initial residual for the GMRES iteration
with initial guess of zero and normalize this residual to obtain the initial vector of the Arnoldi
basis. According to the expression for the inverse of A i in (8), we have
A
which is identical to the expression in Line 5 with A i replaced by its approximation L i U i .
Comparing the bottom part of the right-hand side of the above expression with that on the
right-hand side of (20), it is seen that the vector P r obtained in Line 6 of the algorithm is
indeed an approximation to the local right-hand side of the Schur complement system. Lines 8-
correspond to the matrix-vector product with the preconditioned Schur complement matrix,
i.e., with the computation of the left-hand side of (20).
3.5 Schur complements via approximate inverses
Equation (17) describes in general terms an approximate block LU factorization for the global
system (15). A particular factorization stems from approximating the Schur complement matrix
using one of several approximate-inverse techniques described next.
Given an arbitrary matrix A, approximate-inverse preconditioners consist of finding an approximation
Q to its inverse, by solving approximately the optimization problem [3]:
min
Q2S
in which S is a certain set of n \Theta n sparse matrices. This minimization problem can be decoupled
into n minimization problems of the form
are the jth columns of the identity matrix and a matrix Q 2 S, respectively.
Note that each of the n columns can be computed independently. Different strategies for selecting
a nonzero structure of the approximate-inverse are proposed in [4] and [9]. In [9] the initial
sparsity pattern is taken to be diagonal with further fill-in allowed depending on the improvement
in the minimization. The work [4] suggests controlling the sparsity of the approximate inverse
by dropping certain nonzero entries in the solution or search directions of a suitable iterative
method (e.g., GMRES). This iterative method solves the system Am
In this paper, the approximate-inverse technique proposed in [4] and
[5] is used.
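As an illustration of this column-wise construction, the sketch below (ours, with assumed names; it is not the implementation of [4] or [5]) computes one column of a sparse approximate inverse with a few minimal-residual steps on A q_j = e_j, keeping only a fixed number of the largest entries:

import numpy as np

def spai_column(A, j, lfil=5, n_iter=10):
    # Approximately minimize ||e_j - A q||_2 by minimal-residual (MR) steps with numerical dropping.
    n = A.shape[0]
    e_j = np.zeros(n); e_j[j] = 1.0
    q = np.zeros(n)
    for _ in range(n_iter):
        r = e_j - A @ q                        # current residual
        Ar = A @ r
        denom = float(Ar @ Ar)
        if denom == 0.0:
            break
        q = q + (float(r @ Ar) / denom) * r    # MR step along the residual direction
        if np.count_nonzero(q) > lfil:         # keep only the lfil largest entries (sparsity control)
            small = np.argsort(np.abs(q))[:-lfil]
            q[small] = 0.0
    return q

Each of the n columns can be computed independently, and hence in parallel.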
Consider the local matrix A_i blocked as

A_i = \begin{pmatrix} B_i & F_i \\ E_i & C_i \end{pmatrix}

and its block LU factorization similar to the one given by (16). The sparse approximate-inverse
technique can be applied to approximate B_i^{-1} with a certain matrix Y_i (as it is done in [5]).
The resulting matrix Y_i is sparse and therefore

M_{S_i} = C_i - E_i Y_i F_i      (21)

is a sparse approximation to S_i. A further approximation can be constructed using an ILU
factorization of the matrix M_{S_i}.
As in the previous subsection, an approximation M S to the global Schur complement S can
be obtained by approximately solving the reduced system (7), i.e., by solving its approximated
version

\tilde{S} y = \tilde{g},      (22)

the right-hand side of which can be also computed approximately. System (22) requires an
approximation \tilde{S}_i to the local Schur complement S_i. The matrix M_{S_i} defined
from the approximate-inverse technique outlined above can be used for \tilde{S}_i.
Now that an approximation
to the Schur complement matrix is available, an induced global preconditioner M to
the matrix A can be defined from considering the global system (14), also written as (15). The
Schur variables correspond to the bottom part of the linear system. The global preconditioner
M is given by the block factorization (17) in which M S is the approximation to S obtained by
iteratively solving system (22).
Thus, the block forward-backward solves with the factors (17) will amount to the following
three-step procedure.
1. Solve B u = f.
2. Solve (iteratively) the system (22) to obtain the interface unknowns y.
3. Compute F y and update the interior unknowns: u := u - B^{-1}(F y).
This three-step procedure translates into the following algorithm executed by Processor i.
Algorithm 3.2 Approximate-inverse Schur complement solution step with GMRES
1. Given: local right-hand side (f_i, g_i)
2. Solve B_i u_i = f_i approximately
3. Calculate the local right-hand side \tilde{g}_i := g_i - E_i u_i
4. Use GMRES to solve the distributed system (22), whose local blocks are the matrices M_{S_i}, to obtain y_i
5. Compute an approximation to t_i := B_i^{-1}(F_i y_i)
6. Compute u_i := u_i - t_i.
Note that the steps in Lines 2, 5 and 6 do not require any communication among processors,
since matrix-vector operations in these steps are performed with the local vector components
only. In contrast, the solution of the global Schur system invoked in Line 4, involves global
matrix-vector multiplications with the "interface exchange matrix" consisting of all the interface
matrices X_i. The approximate solution of B_i u_i = f_i in Line 2 can be carried out by several steps
of GMRES or by the forward-backward solves with incomplete L and U factors of B_i (assuming
that such a factorization is available). Then an approximation \tilde{g}_i (Line 3) to the local
right-hand side of system (22) is calculated. In Line 5, there are several choices for approximating
the action of B_i^{-1}. It is possible to solve the linear system using GMRES as in Line 2. An
alternative is to exploit the matrix Y_i that approximates B_i^{-1} in the construction of M_{S_i} (equation (21)).
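Putting the pieces of this subsection together, a small sketch (assumed names; an exact LU factorization stands in for the ILU, and the approximate inverse is assumed to return a sparse matrix) of how the local Schur complement approximation of equation (21) could be assembled from Y_i and prepared for the solves in Line 4 of Algorithm 3.2 is:

import scipy.sparse.linalg as spla

def build_local_schur_approximation(B_i, E_i, F_i, C_i, approx_inverse):
    Y_i = approx_inverse(B_i)               # sparse Y_i ~ B_i^{-1}, e.g. column-wise as sketched above
    M_Si = (C_i - E_i @ Y_i @ F_i).tocsc()  # equation (21): sparse approximation to S_i
    factors = spla.splu(M_Si)               # stands in for an ILU factorization of M_Si
    return M_Si, factors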
4 Numerical experiments
In the experiments, we compared the performance of the described preconditioners and the
distributed Additive Schwarz preconditioning (see, e.g., [19]) on 2-D elliptic PDE problems and
on several problems with the matrices from the Harwell-Boeing and Davis collections [7]. A
flexible variant of restarted GMRES (FGMRES) [16] has been used to solve the original system
since this accelerator permits a change in the preconditioning operation at each step. This is
useful when, for example, an iterative process is used to precondition the input system. Thus,
it is possible to use ILUT-preconditioned GMRES with lfil fill-in elements. Recall that ILUT
[17] is a form of incomplete LU factorization with a dual threshold strategy for dropping fill-in
elements.
For convenience, the following abbreviations will denote preconditioners and solution techniques
used in the numerical experiments:
SAPINV   Distributed approximate block LU factorization: M_{S_i} and B_i^{-1} are approximated
         using the matrix Y_i, constructed using the approximate-inverse
         technique described in [4];
SAPINVS  Distributed approximate block LU factorization: M_{S_i} is approximated using
         the approximate-inverse technique in [4], but B_i^{-1} is applied using one
         matrix-vector multiplication followed by a solve with B_i;
SLU      Distributed global system preconditioning defined via an approximate solve
         with M_S, in which S_i \approx L_{S_i} U_{S_i};
BJ Approximate Additive Schwarz, where ILUT-preconditioned GMRES(k) is
used to precondition each submatrix assigned to a processor;
SI "pure" Schur complement iteration as described in Section 3.2.
4.1 Elliptic problems
Consider the elliptic partial differential equation

\frac{\partial}{\partial x}\left(a \frac{\partial u}{\partial x}\right) + \frac{\partial}{\partial y}\left(b \frac{\partial u}{\partial y}\right) + \frac{\partial (c u)}{\partial x} + \frac{\partial (d u)}{\partial y} + e u = f

on rectangular regions with Dirichlet boundary conditions.
If there are n_x points in the x direction and n_y points in the y direction, then the mesh
is mapped to a virtual p_x \times p_y grid of processors, such that a subrectangle of n_x/p_x points in
the x direction and n_y/p_y points in the y direction belongs to a processor. In fact, each of the
subproblems associated with these subrectangles are generated in parallel. Figure 3 shows a
domain decomposition of a mesh and its mapping onto a virtual processor grid.
Figure 3: Domain decomposition and assignment of a 12 \times 9 mesh to a 3 \times 3 virtual processor grid.

A comparison of timing results and iteration numbers for a global 360 \times 360 mesh mapped to
(virtual) square processor grids of increasing size is given in Figure 4. (In Figure 4, we omit the
solution time for the BJ preconditioning, which is 95.43 seconds.) The residual norm reduction
by 10^{-6} was achieved by flexible GMRES(10). In preconditioning, ILUT with lfil fill-in elements and the
dropping tolerance 10^{-4} was used as a choice of an incomplete LU factorization. Five iterations
of GMRES with the relative tolerance 10^{-2} were used in the application of BJ and SLU. For
SAPINV, forward-backward solves with the incomplete LU factors of B_i were performed in Line 2 of Algorithm 3.2.
Since the problem (mesh) size is fixed, with increase in number of processors the subproblems
become smaller and the overall time decreases. Both preconditioners based on Schur complement
techniques are less expensive than the Additive Schwarz preconditioning. This is especially
noticeable for small numbers of processors.
Keeping subproblem sizes fixed while increasing the number of processors increases the over-all
size of the problem making it harder to solve and thus increasing the solution time. In
ideal situations of perfectly scalable algorithms, the execution time should remain constant.
Timing results for fixed local subproblem sizes of 15 \times 15, 30 \times 30, 50 \times 50, and 70 \times 70 are
presented in Figure 5. (Premature termination of the curves for SI indicates nonconvergence
in 300 iterations). The growth in the solution time as the number of processors increases is
rather pronounced for the "pure" Schur complement iteration and Additive Schwarz, whereas it
is rather moderate for the Schur complement-based preconditioners.
4.2 General problems
Table 1 describes three test problems from the Harwell-Boeing and Davis collections. The
column Pattern specifies whether a given problem has a structurally symmetric matrix. In all
three test problems, the matrix rows followed by the columns were scaled by 2-norm. Also, in
the partitioning of a problem one level of overlapping with data exchanging was used following
[13].
Tables 2, 3, and 4 show iteration numbers required by FGMRES(20) with SAPINV, SAPINVS, SLU,
and BJ till convergence on different numbers of processors. An asterisk indicates nonconvergence.

Figure 4: Times and iteration counts for solving a 360 \times 360 discretized Laplacean problem with
three different preconditioners (BJ, SAPINV, SLU) using flexible GMRES(10).

Table 1: Description of test problems
Name       n       nz      Pattern   Discipline
af23560    23560   -       -         ... calculation
raefsky1   3242    22316   Unsymm    Incompressible flow in pressure driven pipe
sherman3   5005    20033   Symm      Oil reservoir modeling
In the preconditioning phase, approximate solves in each processor were carried out by GMRES
to reduce the residual norm by 10^{-3}, but no more than five steps were allowed. As a choice of
ILU factorization, ILUT with lfil fill-in elements (shown in the column lfil) was used in the
experiments here. lfil corresponds also to the number of elements in a matrix column created
by the approximate-inverse technique. In general, it is hard to compare the methods since the
number of fill-in elements in each of the resulting preconditioners is different. In other words, for
SAPINV and SAPINVS, lfil specifies the number of nonzeros in the blocks of preconditioning;
for SLU, lfil is the total number of nonzeros in the preconditioning, therefore, the number of
nonzeros in a given approximation S is not known exactly.
For a given problem, iteration counts for the SAPINV and SAPINVS suggest a clear trend
of achieving convergence in fewer iterations with increasing number of processors, which means
that a high degree of parallelism of these preconditioners does not impede convergence, and
may even enhance it significantly (cf. rows 1-4 of Table 2). The main explanation for this
is the fact that the approximations to the local and global Schur complement matrices from
Figure 5: Solution times for a Laplacean problem with various local subproblem sizes (15 \times 15, 30 \times 30, 50 \times 50, and 70 \times 70 mesh per processor) using
different preconditioners (BJ, SAPINV, SLU) and the Schur complement
iteration (SI).
Table 2: Number of FGMRES(20) iterations for the RAEFSKY1 problem.
Table 3: Number of FGMRES(20) iterations for the AF23560 problem.
Table 4: Number of FGMRES(20) iterations for the SHERMAN3 problem.
which the global preconditioner M is derived actually improve as the processor numbers become
larger since these matrices become smaller. Furthermore, SAPINV, SAPINVS, and SLU do not
suffer from the information loss as happens with BJ (since BJ disregards the local matrix entries
corresponding to the external interface vector components). Note that the effectiveness of BJ
degrades with increasing number of processors (cf. Subsection 4.1). Comparison of SAPINV
and SAPINVS (for RAEFSKY1 and SHERMAN3) confirms the conclusions of [5] that using Y i
to approximate B_i^{-1} (Line 5 of Algorithm 3.2) is more efficient than applying B_i^{-1} directly,
since iterative solves with B i may be very inaccurate.
In the experiments, the sparse approximations Y_i appear to be quite accurate (usually reducing
the Frobenius norm to 10^{-2}), which could be attributed to the small dimensions of the matrices
used in approximations. This reduction in the Frobenius norm was attained in 10 iterations of
the MR method. Smaller numbers of iterations were also tested. Their effect on the overall
solution process amounted to on average one extra iteration of FGMRES(20) for the problems
considered here.
Conclusions
In this paper, several preconditioning techniques for distributed linear systems are derived from
approximate solutions with the related Schur complement system. The preconditioners are built
upon the already available distributed data structure for the original matrix, and an approximation
to the global Schur complement is never formed explicitly. Thus, no communication
overhead is incurred to construct a preconditioner, making the preprocessing phase simple and
highly parallel. The preconditioning operations utilize the communication structure precomputed
for the original matrix.
The preconditioning to the global matrix A is defined in terms of a block LU factorization
which involves a solve with the global Schur complement system at each preconditioning
step. This system is in turn solved approximately with a few steps of GMRES exploiting
approximations to the local Schur complement for preconditioning. Two different techniques,
incomplete LU factorization and approximate-inverse, are used to approximate these local Schur
complements. Distributed preconditioners constructed and applied in this manner allow much
flexibility in specifying approximations to the local Schur complements and local system solves
and in defining the global induced Block-LU preconditioner to the original matrix.
With an increasing number of processors, a Krylov subspace method, such as FGMRES [16],
preconditioned by the proposed techniques exhibits a very moderate growth in the execution
time for scaled problem sizes. Experiments show that the proposed distributed preconditioners
based on Schur complement techniques are superior to the commonly used Additive Schwarz
preconditioning. In addition, this advantage comes at no additional cost in code-complexity or
memory usage, since the same data structures as those for additive Schwarz preconditioners can
be used.
Acknowledgments
Computing resources for this work were provided by the Minnesota
Supercomputer Institute and the Virginia Tech Computing Center.
References
PETSc 2.0 users manual.
A parallel algebraic non-overlapping domain decomposition method for flow problems
Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems.
Approximate inverse preconditioners for general sparse matrices.
Approximate inverse techniques for block-partitioned matrices
Approximate inverse preconditioning for sparse linear systems.
Sparse matrix test problems.
ParPre a parallel preconditioners package
A new approach to parallel preconditioning with sparse approximate inverses.
Aztec user's guide.
A comparison of domain decomposition techniques for elliptic partial differential equations and their parallel implementation.
Parallel solution of general sparse linear systems.
Iterative solution of general sparse linear systems on clusters of workstations.
Distributed ILU(0) and SOR preconditioners for unstructured sparse linear systems.
Krylov subspace methods in distributed computing environments.
A flexible inner-outer preconditioned GMRES algorithm
ILUT: a dual threshold incomplete LU factorization.
Krylov subspace methods in distributed computing environments.
Iterative Methods for Sparse Linear Systems.
PSPARSLIB: A portable library of distributed memory sparse iterative solvers.
GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems.
Design of an iterative solution module for a parallel sparse matrix library (PSPARSLIB).
Domain decomposition: Parallel multilevel methods for elliptic partial differential equations.
Chi Shen , Jun Zhang , Kai Wang, Distributed block independent set algorithms and parallel multilevel ILU preconditioners, Journal of Parallel and Distributed Computing, v.65 n.3, p.331-346, March 2005 | domain decomposition;parallel preconditioning;distributed sparse linear systems;schur complement techniques |
345249 | A Machine Learning Approach to POS Tagging. | We have applied the inductive learning of statistical decision trees and relaxation labeling to the Natural Language Processing (NLP) task of morphosyntactic disambiguation (Part Of Speech Tagging). The learning process is supervised and obtains a language model oriented to resolve POS ambiguities, consisting of a set of statistical decision trees expressing distribution of tags and words in some relevant contexts. The acquired decision trees have been directly used in a tagger that is both relatively simple and fast, and which has been tested and evaluated on the Wall Street Journal (WSJ) corpus with competitive accuracy. However, better results can be obtained by translating the trees into rules to feed a flexible relaxation labeling based tagger. In this direction we describe a tagger which is able to use information of any kind (n-grams, automatically acquired constraints, linguistically motivated manually written constraints, etc.), and in particular to incorporate the machine-learned decision trees. Simultaneously, we address the problem of tagging when only limited training material is available, which is crucial in any process of constructing, from scratch, an annotated corpus. We show that high levels of accuracy can be achieved with our system in this situation, and report some results obtained when using it to develop a 5.5 million words Spanish corpus from scratch. | Introduction
Part of Speech (pos) Tagging is a very basic and well known Natural Language
Processing (nlp) problem which consists of assigning to each word of a text the
proper morphosyntactic tag in its context of appearance. It is very useful for a
number of nlp applications: as a preprocessing step to syntactic parsing, in information
extraction and retrieval (e.g. document classification in internet searchers),
text to speech systems, corpus linguistics, etc.
The base of pos tagging is that many words being ambiguous regarding their
pos, in most cases they can be completely disambiguated by taking into account
an adequate context. For instance, in the sample sentence presented in table 1,
the word shot is disambiguated as a past participle because it is preceded by the
auxiliary was. Although in this case the word is disambiguated simply by looking
at the preceding tag, it must be taken into account that the preceding word could
be ambiguous, or that the necessary context could be much more complicated
than merely the preceding word. Furthermore, there are even cases in which the
ambiguity is non-resolvable using only morphosyntactic features of the context,
and require semantic and/or pragmatic knowledge.
Table 1. A sentence and its pos ambiguity -appearing tags, from the Penn Treebank corpus, are described in appendix A-.

The/DT first/JJ time/NN he/PRP was/VBD shot/VBN in/IN the/DT hand/NN as/IN he/PRP chased/VBD the/DT robbers/NNS outside/RB ./.

Ambiguous words and some of their possible tags: first (JJ, ...), time (NN, ...), shot (NN, VBN, ...), in (IN, RP, ...), hand (NN, ...), as (IN, ...), chased (JJ, VBN, ...), outside (IN, NN, ...).
1.1. Existing Approaches to POS Tagging
Starting with the pioneer tagger Taggit (Greene & Rubin 1971), used for an initial
tagging of the Brown Corpus (bc), a lot of effort has been devoted to improving the
quality of the tagging process in terms of accuracy and efficiency. Existing taggers
can be classified into three main groups according to the kind of knowledge they
use: linguistic, statistic and machine-learning family. Of course some taggers are
difficult to classify into these classes and hybrid approaches must be considered.
Within the linguistic approach most systems codify the knowledge involved as a
set of rules (or constraints) written by linguists. The linguistic models range from
a few hundreds to several thousand rules, and they usually require years of labor.
The work of the Tosca group (Oostdijk 1991) and more recently the development
of Constraint Grammars in the Helsinki University (Karlsson et al. 1995) can be
considered the most important in this direction.
The most extended approach nowadays is the statistical family (obviously due
to the limited amount of human effort involved). Basically it consists of building
a statistical model of the language and using this model to disambiguate a word
sequence. The language model is coded as a set of co-occurrence frequencies for
different kinds of linguistic phenomena.
This statistical acquisition is usually found in the form of n-gram collection, that
is, the probability of a certain sequence of length n is estimated from its occurrences
in the training corpus.
In the case of pos tagging, usual models consist of tag bi-grams and tri-grams
(possible sequences of two or three consecutive tags, respectively). Once the n-gram
probabilities have been estimated, new examples can be tagged by selecting the tag
sequence with highest probability. This is roughly the technique followed by the
widespread Hidden Markov Model taggers. Although the form of the model and
the way of determining the sequence to be modeled can also be tackled in several
ways, most systems reduce the model to unigrams, bi-grams or tri-grams. The
seminal work in this direction is the Claws system (Garside et al. 1987), which
used bi-gram information and was the probabilistic version of Taggit. It was
later improved in (DeRose 1988) by using dynamic programming. The tagger by
(Church 1988) used a trigram model. Other taggers try to reduce the amount of
training data needed to estimate the model, and use the Baum-Welch re-estimation
algorithm (Baum 1972) to iteratively refine an initial model obtained from a small
hand-tagged corpus. This is the case of the Xerox tagger (Cutting et al. 1992)
and its successors. Those interested in the subject can find an excellent overview
in (Merialdo 1994).
Other works that can be placed in the statistical family are those of (Schmid 1994a)
which performs energy-function optimization using neural nets. Comparisons between
linguistic and statistic taggers can be found in (Chanod & Tapanainen 1995).
Other tasks are also approached through statistical methods. The speech recognition
field is very productive in this issue -actually, n-gram modelling was used
in speech recognition before being used in pos tagging-. Recent works in this
field try to not to limit the model to a fixed order n-gram by combining different
order n-grams, morphological information, long-distance n-grams, or triggering
pairs (Rosenfeld 1994, Ristad & Thomas 1996, Saul & Pereira 1997). These are
approaches that we may see incorporated to pos tagging tasks in the short term.
Although the statistic approach involves some kind of learning, supervised or un-
supervised, of the parameters of the model from a training corpus, we place in the
machine-learning family only those systems that include more sophisticated information
than a n-gram model. Brill's tagger (Brill 1992, Brill 1995) automatically
learns a set of transformation rules which best repair the errors committed by a
most-frequent-tag tagger, (Samuelsson et al. 1996) acquire Constraint Grammar
rules from tagged corpora, (Daelemans et al. 1996) apply instance-based learning,
and finally, the work that we present here -based on (Màrquez & Rodríguez 1997)- uses decision trees induced from tagged corpora,
and combines the learned knowledge in a hybrid approach consisting of applying
relaxation techniques over a set of constraints involving statistical, linguistic and
machine-learned information (Padró 1996, Padró 1998).
The accuracy reported by most statistic taggers surpasses 96-97% while linguistic
Constraint Grammars surpass 99% allowing a residual ambiguity of 1.026 tags per
word. These accuracy values are usually computed on a test corpus which has not
been used in the training phase. Some corpora commonly used as test benches are
the Brown Corpus, the Wall Street Journal (wsj) corpus and the British National
Corpus (bnc).
1.2. Motivation and Goals
Taking the above accuracy figures into account one may think that pos tagging
is a solved and closed problem this accuracy being perfectly acceptable for most
systems. So why waste time in designing yet another tagger? What does an
increase of 0.3% in accuracy really mean?
There are several reasons for thinking that there is still work to do in the field of
automatic pos tagging.
When processing huge running texts, and considering an average length per sentence
of 25-30 words, if we admit an error rate of 3-4% then it follows that, on
average, each sentence contains one error. Since pos tagging is a very basic task
in most nlp understanding systems, starting with an error in each sentence could
be a severe drawback, especially considering that the propagation of these errors
could grow more than linearly. Other nlp tasks that are very sensitive to pos
disambiguation errors can be found in the domain of Word Sense Disambiguation
(Wilks & Stevenson 1997) and Information Retrieval (Krovetz 1997).
Another issue refers to the need of adapting and tuning taggers that have acquired
(or learned) their parameters from a specific corpus onto another one -which may
contain texts from other domains- trying to minimize the cost of transportation.
The accuracy of taggers is usually measured against a test corpora of the same
characteristics as the corpus used for training. Nevertheless, no serious attempts
have been made to evaluate the accuracy of taggers corpora with different charac-
teristics, or even domain-specific.
Finally, some specific problems must be addressed when applying taggers to languages
other than English. In addition to the problems derived from the richer
morphology of some particular languages, there is a more general problem consisting
of the lack of large manually annotated corpora for training.
Although a bootstrapping approach can be carried out -using a low-accurate
tagger for producing annotated text that could be used then for retraining the
tagger and learning a more accurate model- the usefulness of this approach highly
relies on the quality of the retraining material. So, if we want to guarantee low noisy
retraining corpora, we have to provide methods able to achieve high accuracy, both
on known and unknown words, using a small high-quality training corpus.
In this direction, we are involved in a project for tagging Spanish and Catalan
corpora (over 5M words) with limited linguistic resources, that is, departing from
a manually tagged core of a size around 70,000 words. For the sake of comparability
we have included experiments performed over a reference corpus of English.
However, we also report the results obtained applying the presented techniques to
annotate the LexEsp Spanish corpus, proving that a very good accuracy can be
achieved at a fairly low human cost.
The paper is organized as follows: In section 2 we describe the application domain,
the language model learning algorithm and the model evaluation. In sections 3 and
4 we describe the language model application through two taggers: A decision tree
based tagger and a relaxation labelling based tagger, respectively. Comparative
results between them in the special case of using a small training corpus and the
joint use of both taggers to annotate a Spanish corpus are reported in section 5.
Finally, the main conclusions and an overview of the future work can be found in
section 6.
2. Language Model Acquisition
To enable a computer system to process natural language, it is required that language
is modeled in some way, that is, that the phenomena occurring in language
are characterized and captured, in such a way that they can be used to predict or
recognize future uses of language: (Rosenfeld 1994) defines language modeling as
the attempt to characterize, capture and exploit regularities in natural language,
and states that the need for language modeling arises from the great deal of variability
and uncertainty present in natural language.
As described in section 1, language models can be hand-written, statistically
derived, or machine-learned. In this paper we present the use of a machine-learned
model combined with statistically acquired models. A testimonial use of hand-written
models is also included.
2.1. Description of the Training Corpus and the Word Form Lexicon
We have used a portion of 1,170,000 words of the wsj, tagged according to the
Penn Treebank tag set, to train and test the system. Its most relevant features are
the following.
The tag set contains 45 different tags. About 36.5% of the words in the corpus are
ambiguous, with an ambiguity ratio of 2.44 tags/word over the ambiguous words,
1.52 overall.
The corpus contains 243 different ambiguity classes, but they are not all equally
important. In fact, only the 40 most frequent ambiguity classes cover 83.95% of
the occurrences in the corpus, while the 194 most frequent cover almost all of them
(>99.50%).
The training corpus has also been used to create a word form lexicon -of 49,206
entries- with the associated lexical probabilities for each word. These probabilities
are estimated simply by counting the number of times each word appears in the
corpus with each different tag. This simple information provides a heuristic for
a very naive disambiguation algorithm which consists of choosing for each word
its most probable tag according to the lexical probability. Note that such a tagger
does not use any contextual information, but only the frequencies of isolated words.
Figure
1 shows the performance of this most-frequent-tag tagger (mft) on the wsj
domain for different sizes of the training corpus.
The reported figures refer to ambiguous words and they can be taken as a lower
bound for any tagger. More particularly, it is clear that for a training corpus bigger
than 400,000 words, the accuracy obtained is around 81-83%. However it is not
reasonable to think that it could be significantly raised simply by adding more
training corpus in order to estimate the lexical probabilities more effectively.

Figure 1. Performance of the "most frequent tag" heuristic related to the training set size (accuracy over ambiguous words vs. percentage of the training corpus used).
Due to errors in corpus annotation, the resulting lexicon has a certain amount
of noise. In order to partially reduce this noise, the lexicon has been filtered by
manually checking the entries for the most frequent 200 words in the corpus -note
that the 200 most frequent words in the corpus represent over half of it-. For
instance, the original lexicon entry for the very common word the contained six different
tags, each with its frequency count in the training corpus: CD (cardinal), DT (determiner),
JJ (adjective), NN (noun), NNP (proper noun) and VBP (verb-personal
form). It is obvious that the only correct reading for the is determiner.
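For illustration, a minimal sketch (our code, not the authors') of this lexicon construction and of the most-frequent-tag heuristic, assuming the training material is available as a list of (word, tag) pairs and using NN as a fallback for unseen words, is:

from collections import Counter, defaultdict

def build_lexicon(tagged_words):
    counts = defaultdict(Counter)
    for word, tag in tagged_words:
        counts[word][tag] += 1                    # lexical frequencies, as in the word form lexicon
    return counts

def mft_tag(words, lexicon, default_tag="NN"):    # default_tag is our assumption for the example
    # choose for each word its most probable tag according to the lexical probability
    return [lexicon[w].most_common(1)[0][0] if w in lexicon else default_tag for w in words]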
2.2. Learning Algorithm
From a set of possible tags, choosing the proper syntactic tag for a word in a particular
context can be stated as a problem of classification. In this case, classes are identified
with tags. Decision trees, recently used in several nlp tasks, such as speech
recognition (Bahl 1989), POS tagging (Schmid 1994b, Màrquez & Rodríguez 1995,
Daelemans et al. 1996), parsing (McCarthy & Lehnert 1995, Magerman 1996),
sense disambiguation (Mooney 1996) and information extraction (Cardie 1994), are
suitable for performing this task.
2.2.1. Ambiguity Classes and Statistical Decision Trees It is possible to group
all the words appearing in the corpus according to the set of their possible tags (i.e.
adjective-noun, adjective-noun-verb, adverb-preposition, etc.). We will call these
sets ambiguity classes. It is obvious that there is an inclusion relation between
these classes (i.e. all the words that can be adjective, noun and verb, can be, in
particular, adjective and noun), so the whole set of ambiguity classes is viewed as a
taxonomy with a dag structure. Figure 2 represents part of this taxonomy together
with the inclusion relation, extracted from the wsj.
Figure 2. A part of the ambiguity-class taxonomy for the wsj corpus: for instance, the class JJ-NN-RB is included in JJ-NN-RB-VB, IN-JJ-NN-RB, JJ-NN-NP-RB and JJ-NN-RB-UH; JJ-NN-RB-VB is in turn included in the larger class JJ-NN-RB-RP-VB.
In this way we split the general pos tagging problem into one classification problem
for each ambiguity class.
We identify some remarkable features of our domain, comparing with common
classification domains in Machine Learning field. Firstly, there is a very large
number of training examples: up to 60,000 examples for a single tree. Secondly,
there is quite a significant noise in both the training and test data: wsj corpus
contains about 2-3% of mistagged words.
The main consequence of the above characteristics, together with the fact that
simple context conditions cannot explain all ambiguities (Voutilainen 1994), is that
it is not possible to obtain trees to completely classify the training examples. In-
stead, we aspire to obtain more adjusted probability distributions of the words over
their possible tags, conditioned to the particular contexts of appearance. So we will
use Statistical decision trees, instead of common decision trees, for representing this
information.
The algorithm we used to construct the statistical decision trees is a non-incremental
supervised learning-from-examples algorithm of the tdidt (Top Down Induction
of Decision Trees) family. It constructs the trees in a top-down way, guided
by the distributional information of the examples (Quinlan 1993).
2.2.2. Training Set and Attributes For each ambiguity class a set of examples is
built by selecting from the training corpus all the occurrences of the words belonging
to this ambiguity class. The set of attributes that describe each example refer to the
part-of-speech tags of the neighbour words and to the orthography characteristics
of the word to be disambiguated. All of them are discrete attributes.
For the common ambiguity classes the set of attributes consists of a window
covering 3 tags to the left and 2 tags to the right -this size as well as the final set of
attributes has been determined on an empirical basis- and the word-form. Table 2
shows real examples from the training set for the words that can be preposition and
adverb (IN-RB ambiguity class).
Table 2. Training examples of the preposition-adverb ambiguity class

tag-3  tag-2  tag-1  <word, tag>     tag+1  tag+2
RB     VBD    IN     <"after", IN>   DT     NNS
JJ     NN     NNS    <"below", RB>   VBP    DT
A new set of orthographic features is incorporated in order to deal with a particular
ambiguity class, namely unknown words, that will be introduced in following
sections. See table 3 for a description of the whole set of attributes.
Table 3. List of considered attributes

Attribute                 Values                            Number of values
tag-3                     any tag in the Penn Treebank      45
tag-2                     any tag in the Penn Treebank      45
tag-1                     any tag in the Penn Treebank      45
tag+1                     any tag in the Penn Treebank      45
tag+2                     any tag in the Penn Treebank      45
word form                 any word of the ambiguity class   <847
first character           any printable ASCII character     <190
last character            any printable ASCII character     <190
capitalized?              binary                            2
other capital letters?    binary                            2
multi-word?               binary                            2
has numeric characters?   binary                            2
Attributes with many values (i.e. the word-form and pre/suffix attributes used
when dealing with unknown words) are treated by dynamically adjusting the number
of values to the N most frequent, and joining the rest in a new otherwise value.
The maximum number of values is fixed at 45 (the number of different tags) in
order to have more homogeneous attributes.
2.2.3. Attribute Selection Function After testing several attribute selection functions
-including Quinlan's Gain Ratio (Quinlan 1986), Gini Diversity Index by
(Breiman et al. 1984), Relief-f (Kononenko 1994), χ² Test, and Symmetrical Tau
(1991)-, with no significant differences between them, we used an
attribute selection function proposed by (López de Mántaras 1991), belonging to
the information-theory-based family, which showed a slightly higher stability than
the others and which is proved not to be biased towards attributes with many values
and capable of generating smaller trees, with no loss of accuracy, compared with
those of Quinlan's Gain Ratio (López de Mántaras et al. 1996). Roughly speaking,
it defines a distance measurement between partitions and selects for branching the
attribute that generates the closest partition to the correct partition, namely the
one that perfectly classifies the training data.
Let X be a set of examples, C the set of classes and P C (X) the partition of X
according to the values of C. The selected attribute will be the one that generates
the closest partition of X to P C (X). For that we need to define a distance measurement
between partitions. Let PA (X) be the partition of X induced by the values
of attribute A. The average information of such partition is defined as follows:

I(P_A(X)) = - \sum_{a \in P_A(X)} p(X, a) \log_2 p(X, a)

where p(X, a) is the probability for an element of X of belonging to the set a, which
is the subset of X whose examples have a certain value for the attribute A, and
it is estimated by the ratio |X \cap a| / |X|. This average information measurement reflects
the randomness of the distribution of the elements of X between the classes of the
partition induced by A. If we now consider the intersection between two different
partitions induced by attributes A and B we obtain:

I(P_A(X) \cap P_B(X)) = - \sum_{a \in P_A(X)} \sum_{b \in P_B(X)} p(X, a \cap b) \log_2 p(X, a \cap b)

The conditioned information of P_B(X) given P_A(X) is:

I(P_B(X) | P_A(X)) = I(P_A(X) \cap P_B(X)) - I(P_A(X))
                   = - \sum_{a \in P_A(X)} \sum_{b \in P_B(X)} p(X, a \cap b) \log_2 \frac{p(X, a \cap b)}{p(X, a)}

It is easy to show that the measurement

d(P_A(X), P_B(X)) = I(P_B(X) | P_A(X)) + I(P_A(X) | P_B(X))

is a distance. Normalizing, we obtain

d_N(P_A(X), P_B(X)) = d(P_A(X), P_B(X)) / I(P_A(X) \cap P_B(X))

with values in [0,1]. So, finally, the selected attribute will be the one that minimizes
the normalized distance d_N(P_C(X), P_A(X)).
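The distance can be computed directly from co-occurrence counts; a small sketch of ours, assuming the examples reaching a node are given as a list of (attribute value, class) pairs, is:

from collections import Counter
from math import log2

def dN(pairs):
    n = len(pairs)
    p_a = Counter(a for a, _ in pairs)               # partition induced by the attribute, P_A(X)
    p_c = Counter(c for _, c in pairs)               # partition induced by the class, P_C(X)
    p_ac = Counter(pairs)                            # intersection partition P_A(X) ^ P_C(X)
    I = lambda counts: -sum((k / n) * log2(k / n) for k in counts.values())
    I_a, I_c, I_ac = I(p_a), I(p_c), I(p_ac)
    d = (I_ac - I_a) + (I_ac - I_c)                  # I(P_C|P_A) + I(P_A|P_C)
    return d / I_ac if I_ac > 0 else 0.0             # normalized distance in [0,1]

# The attribute chosen for branching is the one with the smallest dN over the candidates.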
2.2.4. Branching Strategy When dealing with discrete attributes, usual tdidt
algorithms consider a branch for each value of the selected attribute. However there
are other possibilities. For instance, some systems perform a previous recasting of
the attributes in order to have binary-valued attributes (Magerman 1996). The
motivation could be efficiency (dealing only with binary trees has certain advan-
tages), and avoiding excessive data fragmentation (when there is a large number of
values). Although this transformation of attributes is always possible, the resulting
attributes lose their intuition and direct interpretation, and explode in number. We
have chosen a mixed approach which consists of splitting for all values, and subsequently
joining the resulting subsets into groups for which we have insufficient
statistical evidence for there being different distributions. This statistical evidence
is tested with a χ² test at a 95% confidence level, with a previous smoothing of
data in order to avoid zero probabilities.
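A sketch of this merging test, assuming the examples under each attribute value are summarized as a vector of class counts, could be as follows; we use scipy's chi-square test of homogeneity and a simple additive smoothing in place of the original implementation, so the details are assumptions:

import numpy as np
from scipy.stats import chi2_contingency

def same_distribution(counts_a, counts_b, alpha=0.05, smoothing=0.5):
    # True means there is insufficient evidence that the two class distributions differ,
    # so the two attribute values can be merged into the same branch.
    table = np.array([counts_a, counts_b], dtype=float) + smoothing
    _, p_value, _, _ = chi2_contingency(table)
    return p_value > alpha

# Example: two values with similar tag-count vectors over (IN, RB) would typically be merged:
# same_distribution([40, 10], [35, 12])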
2.2.5. Pruning the Tree In order to decrease the effect of over-fitting, we have
implemented a post pruning technique. In a first step the tree is completely expanded
and afterwards is pruned following a minimal cost-complexity criterion
(Breiman et al. 1984), using a comparatively small fresh part of the training set.
The alternative, of smoothing the conditional probability distributions of the leaves
using fresh corpus (Magerman 1996), has been left out because we also wanted to
reduce the size of the trees. Experimental tests have shown that in our domain,
the pruning process reduces tree sizes up to 50% and improves their accuracy by
2-5%.
2.2.6. An Example Finally, we present a real example of a decision tree branch
learned for the class IN-RB which has a clear linguistic interpretation.
Figure 3. Example of a decision tree branch (IN-RB class): the root node asks about the word form (distinguishing "as"/"As" from other word forms), the following nodes ask about the first and second tags to the right, and every node stores a probability distribution over IN and RB conditioned on the answers along the path.
We can observe in figure 3 that each node in the path from the root to the
leaf contains a question on a concrete attribute and a probability distribution. In
the root it is the prior probability distribution of the class. In the other nodes it
represents the probability distribution conditioned to the answers to the questions
preceding the node. For example the second node says that the word as is more
commonly a preposition than an adverb, but the leaf says that the word as is
almost certainly an adverb when it occurs immediately before another adverb and
a preposition (this is the case of as much as, as well as, as soon as, etc.).
3. TreeTagger: A Tree-Based Tagger
Using the model described in the previous section, we have implemented a reductionistic
tagger in the sense of Constraint Grammars (Karlsson et al. 1995). In an
initial step a word-form frequency dictionary constructed from the training corpus
provides each input word with all possible tags with their associated lexical
probability. After that, an iterative process reduces the ambiguity (discarding low
probable tags) at each step until a certain stopping criterion is satisfied. The whole
process is represented in figure 4. See also table 4 for the real process of disambiguation
of a part of the sentence presented in table 1.
Figure 4. Architecture of TreeTagger: the raw text is tokenized and each word receives its possible tags and lexical probabilities from the frequency lexicon; the tagging algorithm then iterates classify, update and filter steps, using the tree base as language model, to produce the tagged text.
More particularly, at each step and for each ambiguous word the work to be done
in parallel is:
1. Classify the word using the corresponding decision tree. The ambiguity of the
context (either left or right) during classification may generate multiple answers
for the questions of the nodes. In this case, all the paths are followed and the
result is taken as a weighted average of the results of all possible paths.
2. Use the resulting probability distribution to update the probability distribution
of the word (the updating of the probabilities is done by simply multiplying
previous probabilities per new probabilities coming from the tree).
3. Discard the tags with almost zero probability, that is, those with probabilities
lower than a certain discard boundary parameter.
After the stopping criterion is satisfied, some words could still remain ambigu-
ous. Then there are two possibilities: 1) Choose the most probable tag for each
still-ambiguous word to completely disambiguate the text. 2) Accept the residual
ambiguity (for successive treatment).
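For one ambiguous word and one iteration, steps 1-3 above reduce to multiplying the current distribution by the distribution returned by the tree, renormalizing, and discarding low-probability tags. A compact sketch with assumed data structures (dictionaries mapping tags to probabilities) is:

def update_word(current, tree_dist, discard_boundary=0.01):
    # One classify/update/filter step for a single ambiguous word.
    updated = {t: p * tree_dist.get(t, 0.0) for t, p in current.items()}   # update (step 2)
    z = sum(updated.values()) or 1.0
    updated = {t: p / z for t, p in updated.items()}                       # renormalize
    kept = {t: p for t, p in updated.items() if p >= discard_boundary} or updated  # filter (step 3)
    z2 = sum(kept.values()) or 1.0
    return {t: p / z2 for t, p in kept.items()}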
Note that a unique iteration forcing the complete disambiguation is equivalent
to use the trees directly as classifiers, and results in a very efficient tagger, while
Table 4. Example of disambiguation

word:     . . .   as       he     chased    the   robbers  outside           .
iter. 1:          IN:0.96  PRP:1  VBD:0.97  DT:1  NNS:1    RB:0.99 IN:0.01   .:1
iter. 2:          IN:1     PRP:1  VBD:1     DT:1  NNS:1    RB:1              .:1
stop
performing several steps progressively reduces the efficiency, but takes advantage
of the statistical nature of the trees to get more accurate results.
Another important point is to determine an appropriate stopping criterion -since
the procedure is heuristic, convergence is not guaranteed, although we never observed
non-convergence in our experiments-. First experiments seem to indicate that the performance
increases up to a unique maximum and then softly decreases as the number
of iterations increases. This phenomenon is studied in (Padró 1998) and the noise
in the training and test sets is suggested to be the major cause. For the sake of sim-
plicity, in the experiments reported in following sections, the number of iterations
was experimentally fixed to three. Although it might seem an arbitrary decision,
broad-ranging experiments performed seem to indicate that this value results in a
good average tagging performance in terms of accuracy and efficiency.
3.1. Using TreeTagger
We divided the wsj corpus in two parts: one portion was used as a training/pruning
set, and 50,000 words as a fresh test set. We used a lexicon -described
in section 2.1- derived from training corpus, containing all possible tags for each
word, as well as their lexical probabilities. For the words in the test corpus not
appearing in the training set, we stored all the tags that these words have in the
test corpus, but no lexical probability (i.e. assigning uniform distribution). This
approach corresponds to the assumption of having a morphological analyzer that
provides all possible tags for unknown words. In following experiments we will treat
unknown words in a less informed way.
From the 243 ambiguity classes the acquisition algorithm learned a base of 194
trees (covering 99.5% of the ambiguous words) and requiring about 500 Kb of
storage. The learning algorithm (in a Common Lisp implementation) took about
cpu-hours running on a Sun SparcStation-10 with 64Mb of primary memory.
The first four columns of table 5 contain information about the trees learned for the
ten most representative ambiguity classes. They present figures about the number
of examples used for learning each tree, their number of nodes and the estimation
of their error rate when tested on a sample of new examples. This last figure could
be taken as a rough estimation of the error of the trees when used in TreeTagger,
though it is not exactly true, since here learning examples are fully disambiguated
in their context, while during tagging both contexts -left and right- can be
ambiguous.
Table 5. Tree information and number and percentages of error for the most difficult ambiguity classes
Amb. class #exs #nodes %error tt-errors(%) mft-errors(%)
JJ-VBD-VBN 11,346 761 18.75% 95 (16.70%) 180 (31.64%)
JJ-NN 16,922 680 16.30% 122 (14.01%) 144 (16.54%)
NNS-VBZ 15,233 688 4.37% 44 (6.19%) 81 (11.40%)
JJ-RB 8,650 854 11.20% 48 (10.84%) 73 (16.49%)
Total 179,601 5,871 787 1,806
The tagging algorithm, running on a Sun UltraSparc2, processed the test set at
a speed of >300 words/sec. The results obtained can be seen at different levels
of granularity.
- The performance of some of the learned trees is shown in the last two columns of
table 5. The corresponding ambiguity classes concentrate 62.5% of the errors
committed by a most-frequent-tag tagger (mft column). The tt column shows the
number and percentage of errors committed by our tagger. On the one hand
we can observe a remarkable reduction in the number of errors (56.4%). On the
other hand it is useful to identify some problematic cases. For instance, JJ-NN
seems to be the most difficult ambiguity class, since the associated tree obtains
only a slight error reduction from the mft baseline tagger (15.3%) -this is not
surprising since semantic knowledge is necessary to fully disambiguate between
noun and adjective-. Results for the DT-IN-RB-WDT ambiguity reflect an over-estimation
of the generalization performance of the tree -predicted error rate
(6.07%) is much lower than the real (12.08%)-. This may be indicating a
problem of over pruning.
- Global results are the following: when forcing a complete disambiguation the
resulting accuracy was 97.29%, while accepting residual ambiguity the accuracy
rate increased up to 98.22%, with an ambiguity ratio of 1.08 tags/word over the
ambiguous words and 1.026 tags/word overall. In other words, 2.75% of the
words remained ambiguous (over 96% of them retaining only 2 tags).
In (Màrquez & Rodríguez 1997) it is shown that these results are as good (and
better in some cases) as the results of a number of the non-linguistically motivated
state-of-the-art taggers.
In addition, we present in figure 5 the performance achieved by our tagger with
increasing sizes of the training corpus. Results in accuracy are computed over all
words. The same figure includes mft results, which can be seen as a lower bound.9092949698
%accuracy
TreeTagger
MFT Tagger
Figure
5. Performance of the tagger related to the training set size
Following the intuition, we see that performance grows as the training set size
grows. The maximum is at 97.29%, as previously indicated.
One way to easily evaluate the quality of the class-probability estimates given
by a classifier is to calculate a rejection curve. That is to plot a curve showing
the percentage of correctly classified test cases whose confidence level exceeds a
given value. In the case of statistical decision trees this confidence level can be
straightforwardly computed from the class probabilities given by leaves of the trees.
In our case we calculate the confidence level as the difference in probability between
the two most probable cases (if this difference is large, then the chosen class is clearly
much better than the others; if the difference is small, then the chosen class is nearly
tied with another class). A rejection curve that increases smoothly, indicates that
the confidence level produced by the classifier can be transformed into an accurate
probability measurement.
The rejection curve for our classifier, included in figure 6, increases fairly smoothly,
giving the idea that the acquired statistical decision trees provide good confidence
estimates. This is in close connection with the aforementioned positive results
of the tagger when disambiguation in the low-confidence cases is not required.
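A rejection curve of this kind can be computed directly from the tagger output; the following sketch (our names; predictions are assumed to be pairs of a tag probability distribution and the gold tag) measures accuracy over the cases whose confidence, the difference between the two most probable tags, exceeds each threshold:

import numpy as np

def rejection_curve(predictions, thresholds=np.linspace(0.0, 1.0, 21)):
    # predictions: list of (prob_dist, gold_tag); prob_dist maps tags to probabilities.
    points = []
    for thr in thresholds:
        kept = []
        for dist, gold in predictions:
            top = sorted(dist.values(), reverse=True)
            confidence = top[0] - (top[1] if len(top) > 1 else 0.0)
            if confidence >= thr:
                kept.append(max(dist, key=dist.get) == gold)
        if kept:
            points.append((thr, sum(kept) / len(kept)))   # accuracy among the confident cases
    return points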
3.2. Unknown words
Unknown words are those words not present in the lexicon (i.e. in our case, the
words not present in the training corpus). In the previous experiments we have not
considered the possibility of unknown words. Instead we have assumed a morphological
analyzer providing the set of possible tags with a uniform probability dis-
tribution. However, this is not the most realistic scenario. Firstly, a morphological
analyzer is not always present (due to the morphological simplicity of the treated
Figure 6. Rejection curve (accuracy vs. rejection rate) for the trees acquired with the full training set.
language, the existence of some efficiency requirements, or simply the lack of re-
sources). Secondly, if it is available, it very probably has a certain error rate that
makes it necessary to considered the noise it introduces. So it seems clear that we
have to deal with unknown words in order to obtain more realistic figures about
the real performance of our tagger.
There are several approaches to dealing with unknown words. On the one hand,
one can assume that unknown words may potentially take any tag, excluding those
tags corresponding to closed categories (preposition, determiner, etc.), and try to
disambiguate between them. On the other hand, other approaches include a pre-process
that tries to guess the set of candidate tags for each unknown word to feed
the tagger with this information. See (Padró 1998) for a detailed explanation of
the methods.
In our case, we consider unknown words as words belonging to the ambiguity
class containing all possible tags corresponding to open categories (i.e. noun, proper
noun, verb, adjective, adverb, cardinal, etc.). The number of candidate tags comes to
20, so we state a classification problem with 20 different classes. We have estimated
the proportion of each of these tags appearing naturally in the wsj as unknown
words and we have collected the examples from the training corpus according to
these proportions. The most frequent tag, NNP (proper noun), represents almost
30% of the sample. This fact establishes a lower bound for accuracy of 30% in this
domain (i.e. the performance that a most-frequent-tag tagger would obtain).
We have used very simple information about the orthography and the context
of unknown words in order to improve these results. In particular, from an initial
set of 17 potential attributes, we have empirically decided the most relevant, which
turned out to be the following: 1) In reference to word form: the first letter, the last
three letters, and other four binary-valued attributes accounting for capitalization,
whether the word is a multi-word or not, and for the existence of some numeric
characters in the word. 2) In reference to context: only the preceding and the
following pos tags. This set of attributes is fully described in table 3.
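As an illustration, the feature extraction for an unknown word could be sketched as follows (our code and naming; in particular, the exact multi-word test is an assumption):

def unknown_word_features(word, prev_tag, next_tag):
    return {
        "first_letter": word[0],
        "last_three": word[-3:],
        "capitalized": word[0].isupper(),
        "other_capitals": any(c.isupper() for c in word[1:]),
        "multi_word": "-" in word or "_" in word,      # assumed definition of multi-word
        "has_digit": any(c.isdigit() for c in word),
        "prev_tag": prev_tag,
        "next_tag": next_tag,
    }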
Table 6 shows the generalization performance of the trees learned from training
sets of increasing sizes up to 50,000 words. In order to compare these figures with a
close approach we have implemented Igtree system (Daelemans et al. 1996) and
we have tested its performance exactly under the same conditions as ours.
Igtree system is a memory-based pos tagger which stores in memory the whole
set of training examples and then predicts the part of speech tags for new words in
particular contexts by extrapolation from the most similar cases held in memory
(k-nearest neighbour retrieval algorithm). The main connection point to the work
presented here is that huge example bases are indexed using a tree-based formalism,
and that the retrieval algorithm is performed by using the generated trees as classi-
fiers. Additionally, these trees are constructed on the base of a previous weighting
of attributes (contextual and orthographic attributes used for disambiguating are
very similar to ours) using Quinlan's Information Ratio (Quinlan 1986).
Note that the final pruning step applied by Igtree to increase the compression
factor even more has also been implemented in our version. The results of Igtree
are also included in table 6. Figures 7 and 8 contain the plots corresponding to the
same results.
Table 6. Generalization performance of the trees for unknown words
TreeTagger Igtree
#exs. accuracy(#nodes) accuracy(#nodes)
2,000 77.53% (224) 70.36% (627)
5,000 80.90% (520) 76.33% (1438)
10,000 83.30% (1112) 79.18% (2664)
20,000 85.82% (1644) 82.30% (4783)
30,000 87.32% (2476) 85.11% (6477)
50,000 88.12% (4056) 87.14% (9554)
Observe that our system produces better quality trees than those of Igtree -we
measure this quality in terms of generalization performance (how well these trees fit
new examples) and size (number of nodes)-. On the one hand, we see in figure 7
that our generalization performance is better. On the other hand, figure 8 seems
to indicate that the growing factor in the number of nodes is linear in both cases,
but clearly lower in ours.
Important aspects contributing to the lower size are the merging of attribute
values and the post pruning process applied in our algorithm. Experimental results
showed that the tree size is reduced by up to 50% on average without loss in
accuracy (Màrquez 1998).
The better performance is probably due to the fact that Igtrees are not actually
decision trees (in the sense of trees acquired by a supervised algorithm of top-down
induction, that use a certain attribute selection function to decide at each step
which is the attribute that best contributes to discriminate between the current
set of examples), but only a tree-based compression of a base of examples inside
Figure 7. Accuracy vs. training set size for unknown words (TreeTagger and Igtree).
a kind of weighted nearest-neighbour retrieval algorithm. The representation and
the weighting of attributes allows us to think of Igtrees as the decision trees that
would be obtained by applying the usual top-down induction algorithm with a very
naive attribute selection function consisting of making a previous unique ranking of
attributes using Quinlan's Information Ratio over all examples and later selecting
the attributes according to this ordering. Again, experimental results show that it
is better to reconsider the selection of attributes at each step than to decide on an
a priori fixed order (Màrquez 1998).

Figure 8. Number of nodes of the trees for unknown words (TreeTagger and Igtree) as a function of training set size.
Of course, these conclusions have to be taken in the domain of small training sets
-the same plot in figure 8 suggests that the difference between the two methods
decreases as the training set size increases-. Using bigger corpora for training
might improve performance significantly. For instance, (Daelemans et al. 1996)
report an accuracy rate of 90.6% on unknown words when training with the whole
wsj (2 million words). So our results can be considered better than theirs in the
sense that our system needs less resources for achieving the same performance.
Note that the same result holds when using the whole training set: Daelemans et
al. report a tagging accuracy of 96.4%, training with a 2Mwords training set, while
our results, slightly over 97%, were achieved using only 1.2Mwords 2 .
4. Relax: A Relaxation Labelling Based Tagger
Up to now we have described a decision-tree acquisition algorithm used to automatically
obtain a language model for pos tagging, and a classification algorithm
which uses the obtained model to disambiguate fresh texts.
Once the language model has been acquired, it would be useful that it could be
used by different systems and extended with new knowledge. In this section we
will describe a flexible tagger based on relaxation labelling methods, which enables
the use of models coming from different sources, as well as their combination and
cooperation.
Figure 9. Architecture of the Relax tagger
The tagger we present has the architecture described in figure 9: A unique algorithm
uses a language model consisting of constraints obtained from different
knowledge sources.
Relaxation is a generic name for a family of iterative algorithms which perform
function optimization, based on local information. They are closely related to
neural nets (Torras 1989) and gradient step (Larrosa & Meseguer 1995b).
Although relaxation operations had long been used in engineering fields to solve
systems of equations (Southwell 1940), they did not achieve their breakthrough
success until relaxation labelling -their extension to the symbolic domain- was
applied by (Waltz 1975, Rosenfeld et al. 1976) to constraint propagation field, especially
in low-level vision problems.
Relaxation labelling is a technique that can be used to solve consistent labelling
problems (clps) -see (Larrosa & Meseguer 1995a)-. A consistent labelling problem consists of, given a set of variables, assigning to each variable a value compatible
with the values of the other ones, satisfying -to the maximum possible extent- a
set of compatibility constraints.
In the Artificial Intelligence field, relaxation has been mainly used in computer
vision -since it is where it was first used- to address problems such as corner and
edge recognition or line and image smoothing (Richards et al. 1981, Lloyd 1983).
Nevertheless, many traditional AI problems can be stated as a labelling prob-
lem: the traveling salesman problem, n-queens, or any other combinatorial problem
(Aarts & Korst 1987).
The utility of the algorithm to perform nlp tasks was pointed out in the work
by (Pelillo & Refice 1994, Pelillo & Maffione 1994), where pos tagging was used
as a toy problem to test some methods to improve the computation of constraint
compatibility coefficients for relaxation processes. Nevertheless, the first application to a real nlp problem, on unrestricted text, is the work presented in (Padró 1996, Voutilainen & Padró 1997, Màrquez & Padró 1997, Padró 1998).
From our point of view, the most remarkable feature of the algorithm is that,
since it deals with context constraints, the model it uses can be improved by writing
into the constraint formalism any available knowledge. The constraints used
may come from different sources: statistical acquisition, machine-learned models or
hand coding. An additional advantage is that the tagging algorithm is independent
of the complexity of the model.
4.1. The Algorithm
Although in this section the relaxation algorithm is described from a general point
of view, its application to pos tagging is straightforwardly performed, considering
each word as a variable and each of its possible pos tags as a label.
Let V = {v_1, v_2, ..., v_N} be a set of variables (words). Let T_i = {t^i_1, t^i_2, ..., t^i_{m_i}} be the set of possible labels (pos tags) for variable v_i (where m_i is the number of different labels that are possible for v_i).
Let C be a set of constraints between the labels of the variables. Each constraint
is a compatibility value for a combination of pairs variable-label.
binary constraint (e.g. bi-gram):    0.53 [(v_1, A), (v_3, B)]
ternary constraint (e.g. tri-gram):  C_r [(v_1, A), (v_2, B), (v_3, C)]
The first constraint states that the combination of variable v_1 having label A and variable v_3 having label B has a compatibility value of 0.53. Similarly, the second constraint states the compatibility value for the three variable-label pairs it contains.
Constraints can be of any order, so we can define the compatibility value for
combinations of any number of variables.
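To make this formulation concrete, the short sketch below encodes the problem data in Python; the class and variable names (and the numerical values) are illustrative assumptions, not taken from the authors' implementation.

from dataclasses import dataclass
from typing import Dict, List, Tuple

# A variable-label pair, e.g. (0, "NN") meaning "word 0 has tag NN".
Pair = Tuple[int, str]

@dataclass
class Constraint:
    value: float        # compatibility value C_r of the constraint
    pairs: List[Pair]   # the variable-label pairs the constraint mentions

# Possible labels for each variable (word), e.g. taken from the lexicon.
labels: Dict[int, List[str]] = {
    0: ["DT"],
    1: ["NN", "VB"],
    2: ["VB", "NN"],
}

# One binary and one ternary constraint (values are illustrative only).
constraints: List[Constraint] = [
    Constraint(0.53, [(0, "DT"), (2, "VB")]),
    Constraint(1.20, [(0, "DT"), (1, "NN"), (2, "VB")]),
]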
The aim of the algorithm is to find a weighted labelling such that global consistency
is maximized.
A weighted labelling is a weight assignment for each possible label of each variable: P = (p^1, p^2, ..., p^N), where each p^i is a vector containing a weight for each possible label of v_i, that is: p^i = (p^i_1, p^i_2, ..., p^i_{m_i}).
Since relaxation is an iterative process, the weights vary in time. We will note the weight for label j of variable i at time step n as p^i_j(n), or simply p^i_j when the time step is not relevant.
Maximizing global consistency is defined as maximizing, for each variable v_i (1 <= i <= N), the average support for that variable, which is defined as the weighted sum of the support received by each of its possible labels, that is:

Σ_{j=1..m_i} p^i_j × S_ij

where S_ij is the support received by the pair (v_i, t_j) from the context. The support for a variable-label pair expresses how compatible the assignation of label j to variable i is with the labels of neighbouring variables, according to the constraint set.
Although several support functions may be used, we chose the following one, which defines the support as the sum of the influence of every constraint on a label:

S_ij = Σ_{r ∈ R_ij} Inf(r)

where R_ij and Inf(r) are defined as follows: R_ij is the set of constraints on label j for variable i, i.e. the constraints formed by any combination of variable-label pairs that includes the pair (v_i, t_j); Inf(r) = C_r × p^{k_1}_{l_1}(m) × ... × p^{k_d}_{l_d}(m) is the product of the current weights of the labels appearing in the constraint except (v_i, t_j) (representing how applicable the constraint is in the current context) multiplied by C_r, which is the constraint compatibility value (stating how compatible the pair is with the context).
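A direct transcription of these definitions is sketched below, reusing the illustrative Constraint encoding from the previous sketch; p[i][t] is assumed to hold the current weight of tag t for word i.

from typing import Dict

Weights = Dict[int, Dict[str, float]]   # p[i][t]: current weight of tag t for word i

def influence(constraint, pair, p: Weights) -> float:
    """Inf(r): C_r times the product of the current weights of the
    variable-label pairs in the constraint other than the target pair."""
    inf = constraint.value
    for (var, label) in constraint.pairs:
        if (var, label) != pair:
            inf *= p[var][label]
    return inf

def support(i: int, t: str, constraints, p: Weights) -> float:
    """S_ij: sum of the influences of all constraints on label t of variable i."""
    target = (i, t)
    return sum(influence(r, target, p) for r in constraints if target in r.pairs)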
Although the C_r compatibility values for each constraint may be computed in different ways, performed experiments (Padró 1996, Padró 1998) point out that the best results in our case are obtained when computing compatibilities as the mutual information between the tag and the context (Cover & Thomas 1991). Mutual information measures how informative an event is with respect to another, and is computed as

MI(A, B) = log ( P(A, B) / (P(A) × P(B)) ) = log ( P(A|B) / P(A) )
If A and B are independent events, the conditional probability of A given B will
be equal to the marginal probability of A and the measurement will be zero. If
the conditional probability is larger, it means that the two events tend to appear
together more often than they would by chance, and the measurement yields a
positive number. Inversely, if the conditional occurrence is scarcer than chance,
the measurement is negative. Although Mutual information is a simple and useful
way to assign compatibility values to our constraints, a promising possibility still to
be explored is assigning them by Maximum Entropy Estimation (Rosenfeld 1994).
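As an illustration of this estimate (the counts and the absence of smoothing are assumptions for the example, not the paper's exact procedure), the compatibility value of a constraint can be obtained from corpus counts as follows.

import math

def mutual_information(count_tag_context: int, count_tag: int,
                       count_context: int, total: int) -> float:
    """MI(tag, context) = log [ P(tag, context) / (P(tag) P(context)) ]
                        = log [ P(tag | context) / P(tag) ]."""
    p_joint = count_tag_context / total
    p_tag = count_tag / total
    p_context = count_context / total
    return math.log(p_joint / (p_tag * p_context))

# Positive when tag and context co-occur more often than chance,
# zero when they are independent, negative when they co-occur less often.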
The pseudo-code for the relaxation algorithm can be found in table 7. It consists
of the following steps:
1. Start in a random labelling P 0 . In our case, we select a better-informed starting
point, which are the lexical probabilities for each word tag.
2. For each variable, compute the support that each label receives from the current
weights from other variable labels (i.e. see how compatible is the current
weighting with the current weightings of the other variables, given the set of
constraints).
3. Update the weight of each variable label according to the support obtained
by each of them (that is, increase weight for labels with high support -greater
than zero-, and decrease weight for those with low support -less than zero-).
The chosen updating function in our case was:

p^i_j(n+1) = p^i_j(n) × (1 + S_ij) / Σ_{k=1..m_i} p^i_k(n) × (1 + S_ik)
4. iterate the process until a convergence criterion is met. The usual criterion is
to wait for no significant changes.
Support computation and weight update must be performed synchronously (in parallel), so that changing the weight of one label does not affect the support computation of the others.
We could summarize this algorithm by saying that at each time-step, a variable
changes its label weights depending on how compatible is that label with the labels
of the other variables at that time-step. If the constraints are consistent, this
process converges to a state where each variable has weight 1 for one of its labels
and weight 0 for all the others.
The performed global consistency maximization is a vector optimization. It does
not maximize -as one might think- the sum of the supports of all variables, but it
finds a weighted labelling such that any other choice would not increase the support
for any variable, given -of course- that such a labelling exists. If such a labelling
does not exist, the algorithm will end in a local maximum.
Note that this global consistency idea makes the algorithm robust: The problem
of having mutually incompatible constraints (there is no combination of label
assignations which satisfies all the constraints) is solved because relaxation does
not necessarily find an exclusive combination of labels -i.e. a unique label for each
variable-, but a weight for each possible label such that constraints are satisfied to
the maximum possible degree. This is especially useful in our case, since constraints
will be automatically acquired, and different knowledge sources will be combined,
so constraints might not be fully consistent.
The advantages of the algorithm are:
- Its highly local character (each variable can compute its new label weights given only the state at the previous time-step). This makes the algorithm highly parallelizable (we could have a processor to compute the new label weights for
1.  P := P0 (start with the initial weighted labelling)
2.  repeat
3.    for each variable v_i
4.      for each t_j possible label for v_i
5.        S_ij := Σ_{r ∈ R_ij} Inf(r)
6.      end for
7.      for each t_j possible label for v_i
8.        p_ij(n+1) := p_ij(n) (1 + S_ij) / Σ_k p_ik(n) (1 + S_ik)
9.      end for
10.   end for
11. until no more changes

Table 7. Pseudo code of the relaxation labelling algorithm.
each variable, or even a processor to compute the weight for each label of each
variable).
- Its expressiveness, since we state the problem in terms of constraints between variable labels. In our case, this enables us to use binary (bigram) or ternary (trigram) constraints, as well as more sophisticated constraints (decision tree branches or hand-written constraints).
- Its flexibility, since we do not have to check the absolute consistency of constraints.
- Its robustness, since it can give an answer to problems without an exact solution (incompatible constraints, insufficient data, ...).
- Its ability to find local-optima solutions to np problems in non-exponential time (only if we have an upper bound for the number of iterations, i.e. convergence is fast or the algorithm is stopped after a fixed number of iterations).
The drawbacks of the algorithm are:
- Its cost. N being the number of variables, v the average number of possible labels per variable, c the average number of constraints per label, and I the average number of iterations until convergence, the average cost is N × v × c × I; that is, it depends linearly on N, but for a problem with many labels and constraints, or if convergence is not quickly achieved, the multiplying terms might be much bigger than N. In our application to pos tagging, the bottleneck is the number of constraints -which may be several thousand-. The average
number of tags per ambiguous word is about 2.5, and an average sentence
contains about 10 ambiguous words.
- Since it acts as an approximation of gradient step algorithms, it has their typical
convergence problems: Found optima are local, and convergence is not guaran-
teed, since the chosen step might be too large for the function to optimize.
- In general relaxation labelling applications, constraints would be written manu-
ally, since they are the modeling of the problem. This is good for easy-to-model
domains or reduced constraint-set problems, but in the case of pos tagging,
constraints are too many and too complicated to be easily written by hand.
- The difficulty of stating by hand what the compatibility value is for each con-
straint. If we deal with combinatorial problems with an exact solution (e.g.
traveling salesman), the constraints will be either fully compatible (e.g. stating
that it is possible to go to any city from any other), fully incompatible (e.g.
stating that it is not possible to be twice in the same city), or will have a value
straightforwardly derived from the distance between cities. But if we try to
model more sophisticated or less exact problems (such as pos tagging), we will
have to establish a way of assigning graded compatibility values to constraints.
As mentioned above, we will be using Mutual Information.
- The difficulty of choosing the most suitable support and updating functions for
each particular problem.
4.2. Using Machine-Learned Constraints
In order to feed the Relax tagger with the language model acquired by the
decision-tree learning algorithm, the group of the 44 most representative trees
(covering 83.95% of the examples) was translated into a set of weighted context constraints. Relax was fed not only with these constraints, but also with bi/tri-
gram information.
The Constraint Grammars formalism (Karlsson et al. 1995) was used to code the
tree branches. CG is a widespread formalism used to write context constraints.
Since it is able to represent any kind of context pattern, we will use it to represent
all our constraints, n-gram patterns, hand-written constraints, or decision-tree
branches.
Since the CG formalism is intended for linguistic uses, the statistical contribution
has no place in it: Constraints can state only full compatibility (constraints that
select a particular reading) or full incompatibility (constraints that remove a
particular reading). Thus, we slightly extended the formalism to enable the use
of real-valued compatibilities, in such a way that constraints are not assigned a
REMOVE/SELECT command, but a real number indicating the constraint compatibility
value, which -as described in section 4.1- was computed as the mutual
information between the focus tag and the context.
The translation of bi/tri-grams to context constraints is straightforward: each bigram pattern yields a left prediction constraint and its right prediction counterpart (a constraint on a tag given the preceding tag, and one on a tag given the following tag).
The training corpus contains 1404 different bigrams. Since they are used both for
left and right prediction, they are converted into 2808 binary constraints.
A trigram may be used in three possible ways (i.e. the abc trigram pattern
generates the constraints: c, given it is preceded by ab; a, given it is followed by
bc; and b, given it is preceded by a and followed by c):
2.16 (VB)    1.54 (NN)    1.82 (DT)
The 17387 trigram patterns in the training corpus produce 52161 ternary constraints.
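The expansion of one trigram into its three constraints can be sketched as below; the encoding of the context as (relative position, tag) pairs and the helper `value_of` are assumptions made for the illustration, not the paper's actual data structures.

def trigram_constraints(a: str, b: str, c: str, value_of):
    """Expand an (a, b, c) trigram pattern into its three context constraints:
    c preceded by a b;  a followed by b c;  b preceded by a and followed by c.
    `value_of(focus, context)` is assumed to return the compatibility value
    (e.g. the mutual information estimate) of the focus tag in that context."""
    patterns = [
        (c, [(-2, a), (-1, b)]),   # predict the third tag from the two before it
        (a, [(1, b), (2, c)]),     # predict the first tag from the two after it
        (b, [(-1, a), (1, c)]),    # predict the middle tag from its neighbours
    ]
    return [(value_of(focus, ctx), focus, ctx) for focus, ctx in patterns]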
The usual way of expressing trees as a set of rules was used to construct the context
constraints. For instance, the tree branch represented in figure 3 was translated
into the two following constraints:
-5.81 (IN)          2.366 (RB)
  (0 "as" "As")       (0 "as" "As")
which express the compatibility (either positive or negative) of the tag in the first
line with the given context (i.e. the focus word is "as", the first word to the
right has tag RB and the second has tag IN). The decision trees acquired for the 44
most frequent ambiguity classes result in a set of 8473 constraints.
The main advantage of Relax is its ability to deal with constraints of any kind.
This enables us to combine statistical n-grams (written in the form of constraints)
with the learned decision tree models, and even with linguistically motivated hand-written
constraints, such as the following,
which states a high compatibility value for a VBN (participle) tag when preceded
by an auxiliary verb, provided that there is no other participle, adjective nor any
phrase change in between.
Since the cost of the algorithm depends linearly on the number of constraints,
the use of the trigram constraints (either alone or combined with the others) makes
the disambiguation about six times slower than when using bc, and about 20 times
slower than when using only b.
The obtained results for the different knowledge combination are shown in table
8. The results produced by two baseline taggers -mft: most-frequent-tag tag-
ger, hmm: bi-gram Hidden Markov Model tagger by (Elworthy 1993)- are also
reported. b stands for bi-grams, t for trigrams, and c for the constraints acquired
by the decision tree learning algorithm. Results using a sample of 20 linguistically-motivated
constraints (h) can be found in table 9.
Those results show that the addition of the automatically acquired context constraints
led to an improvement in the accuracy of the tagger, overcoming the bi/tri-
gram models and properly cooperating with them. See (Màrquez & Padró 1997)
for more details on the experiments and comparisons with other current taggers.
Table 8. Results of the baseline taggers and of the Relax tagger using every combination of constraint kinds

            mft     hmm     b       t       bt      c       bc      tc      btc
ambiguous   85.31%  91.75%  91.35%  91.82%  91.92%  91.96%  92.72%  92.82%  92.55%
overall     94.66%  97.00%  96.86%  97.03%  97.06%  97.08%  97.36%  97.39%  97.29%
Table 9. Results of our tagger using every combination of constraint kinds plus hand-written constraints

            h       bh      th      bth     ch      bch     tch     btch
ambiguous   86.41%  91.88%  92.04%  92.32%  91.97%  92.76%  92.98%  92.71%
overall     95.06%  97.05%  97.11%  97.21%  97.08%  97.37%  97.45%  97.35%
Figure 10 shows the 95% confidence intervals for the results in table 8. The main
conclusions that can be drawn from those data are described below.
- Relax is slightly worse than the HMM tagger when using the same information
(bi-grams). This may be due to a higher sensitivity to noise in the training
corpus.
- There are two significantly distinct groups: those using only statistical information, and those using statistical information plus the decision-tree model. The n-gram models and the learned model belong to the first group, and any combination of a statistical model with the acquired constraints belongs to the second group.
- Although the hand-written constraints improve the accuracy of any model,
the size of the linguistic constraint set is too small to make this improvement
statistically significant.
- The combination of the two kinds of model produces significantly better results
than any separate use. This indicates that each model contains information
which was not included in the other, and that relaxation labelling combines
them properly.
Figure 10. 95% confidence intervals for the Relax tagger results
5. Using Small Training Sets
In this section we will discuss the results obtained when using the two taggers
described above to apply the language models learned from small training corpus.
The motivation for this analysis is the need for determining the behavior of our
taggers when used with language models coming from scarce training data, in order
to best exploit them for the development of Spanish and Catalan tagged corpora
starting from scratch.
5.1. Testing Performance on WSJ
In particular we used 50,000 words of the wsj corpus to automatically derive a
set of decision trees and collect bi-gram statistics. Tri-gram statistics were not
considered since the size of the training corpus was not large enough to reasonably
estimate the big number of parameters for the model -note that a 45-tag tag set
produces a trigram model of over 90,000 parameters, which obviously cannot be
estimated from a set of 50,000 occurrences-.
Using this training set the learning algorithm was able to reliably acquire the trees representing the most frequent ambiguity classes -note that the training
data was insufficient for learning sensible trees for about 150 ambiguity classes-.
Following the formalism described in the previous section, we translated these trees
into a set of about 4,000 constraints to feed the relaxation labelling algorithm.
The results in table 10 are computed as the average of ten experiments using
randomly chosen training sets of 50,000 words each. b stands for the bi-gram
Table 10. Comparative results using different models acquired from small training corpora

            mft      TreeTagger   Relax(c)   Relax(b)   Relax(bc)
ambiguous   75.35%   87.29%       86.29%     87.50%     88.56%
overall     91.64%   95.69%       95.35%     95.76%     96.12%
Figure 11. 95% confidence intervals for both tagger results
model and c for the learned decision tree (either in the form of trees or translated
to constraints). The corresponding confidence intervals can be found in figure 11.
The presented figures point out the following conclusions:
- We think this result is quite good. In order to corroborate this statement we can compare our accuracy of 96.12% with the 96.0% reported by (Daelemans et al. 1996) for the Igtree Tagger trained with a corpus of double the size (100 Kw).
- TreeTagger yields a higher performance than the Relax tagger when both
use only the c model. This is caused by the fact that, due to the scarceness
of the data, a significant amount of test cases do not match any complete
tree branch, and thus TreeTagger uses some intermediate node probabilities.
Since only complete branches are translated into constraints -partial branches were not used to avoid excessive growth in the number of constraints-, the Relax tagger does not use intermediate node information and produces lower
results. A more exhaustive translation of tree information into constraints is an
issue that should be studied in the short run.
- The Relax tagger using the b model produces better results than any of the taggers using the c model alone. The cause of this is related to the aforementioned problem of estimating a big number of parameters from a small sample. Since the model consists of six features, the number of parameters to
be learned is still larger than in the case of tri-grams, thus the estimation is
not as complete as it could be.
- The Relax tagger using the bc model produces better results (statistically significant at a 95% confidence level) than any other combination. This suggests that, although the tree model is not complete enough on its own, it contains different information than the bi-gram model. Moreover, this information proves to be very useful when combined with the b model by Relax.
5.2. Tagging the LexEsp Spanish Corpus
The LexEsp Project is a multi-disciplinary effort headed by the Psychology Department
at the University of Oviedo. It aims to create a large database of language
usage in order to enable and encourage research activities in a wide range of fields,
from linguistics to medicine, through psychology and artificial intelligence, among
others. One of the main issues of this database of linguistic resources is the LexEsp
corpus, which contains 5.5 Mw of written material, including general news, sports
news, literature, scientific articles, etc.
The corpus has been morphologically analyzed with the maco+ system, a fast,
broad-coverage analyzer (Carmona et al. 1998). The tagset contains 62 tags. The
percentage of ambiguous words is 39.26% and the average ambiguity ratio is 2.63
tags/word for the ambiguous words (1.64 overall).
From this material, 95 Kw were hand-disambiguated to get an initial training
set of 70 Kw and a test set of 25 Kw. To automatically disambiguate the rest
of the corpus, we applied a bootstrapping method taking advantage of the use
of both taggers. The procedure applied starts by using the small hand-tagged
portion of the corpus as an initial training set. After that, both taggers are used to
disambiguate further fresh material. The tagger agreement cases of this material
are used to enlarge the language model, incorporating it to the training set and
retraining both taggers. This procedure could be iterated to obtain progressively
better language models.
The point here is that the cases in which both taggers coincide present a higher
accuracy, and thus can be used as new retraining set with a lower error rate than
that obtained using a single tagger. For instance, using a single tagger trained with
the hand-disambiguated training set, we can tag 200,000 fresh words and use them
to retrain the tagger. In our case, the best tagger would tag this new set with 97.4%
accuracy. Merging this result with the previous hand-disambiguated set, we would
obtain a 270Kw corpus with an error rate of 1.9%. On the other hand, given that
both taggers agree in 97.5% of the cases in the same 200Kw set, and that 98.4%
of those cases are correctly tagged, we get a new corpus of 195Kw with an error
rate of 1.6%. If we add the manually tagged 70Kw we get a 265Kw corpus with an
1.2% error rate, which is significantly lower than 1.9%.
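The error rates quoted above follow from simple weighted averages of the error rates of the merged sets; the short check below reproduces them (figures rounded as in the text).

def combined_error(parts):
    """parts: list of (size_in_words, error_rate) pairs."""
    total = sum(size for size, _ in parts)
    return sum(size * err for size, err in parts) / total

# Single tagger: 200 Kw tagged at 97.4% accuracy merged with 70 Kw hand-tagged text.
print(combined_error([(200_000, 0.026), (70_000, 0.0)]))   # ~0.019 -> 1.9% error

# Agreement cases: 97.5% of 200 Kw = 195 Kw, of which 98.4% correct, plus 70 Kw.
print(combined_error([(195_000, 0.016), (70_000, 0.0)]))   # ~0.012 -> 1.2% error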
The main results obtained with this approach are summarized below: Starting
with the manually tagged training corpus, the best tagger combination achieved an
accuracy of 93.1% on ambiguous words and 97.4% overall. After one bootstrapping
iteration, using the coincidence cases in a fresh set of 800 Kw, the accuracy was
increased up to 94.2% for ambiguous words and 97.8% overall. It is important to
note that this improvement is statistically significant and that it has been achieved
in a completely automatic re-estimation process. In our domain, further iterations
did not result in new significant improvements.
For a more detailed description we refer the reader to (Màrquez et al. 1998),
where experiments using different sizes for the retraining corpus are reported, as
well as different combination techniques, such as weighted interpolation and/or
previous hand checking of the tagger disagreement cases.
From the aforementioned results we emphasize the following conclusions:
- A 70 Kw manually-disambiguated training set provides enough evidence to allow our taggers to get fairly good results. In absolute terms, results obtained with the LexEsp Spanish corpus are better than those obtained for the wsj English corpus. One of the reasons contributing to this fact may be the less noisy training corpus. However, it should be investigated whether the part of speech ambiguity cases for Spanish are simpler on average.
- The combination of two (or more) taggers seems to be useful to:
- Obtain larger training corpora with a reduced error rate, which enable the learning procedures to build more accurate taggers.
- Build a tagger which proposes a single tag when both taggers coincide
and two tags when they disagree. Depending on user needs, it might be
worthwhile to accept a higher remaining ambiguity in favour of a higher
recall. With the models acquired from the best training corpus, we get a
tagger with a recall of 98.3% and a remaining ambiguity of 1.009 tags/word,
that is, 99.1% of the words are fully disambiguated and the remaining 0.9%
keep only two tags.
6. Conclusions
In this work we have presented and evaluated a machine-learning based algorithm
for obtaining statistical language models oriented to pos tagging.
We have directly applied the acquired models in a simple and fast tree-based
tagger obtaining fairly good results. We also have combined the models with n-gram
statistics in a flexible relaxation-labelling based tagger. Reported figures
show that both models properly collaborate in order to improve the results.
Both model learning and testing have been performed on the wsj corpus of English.
Comparison of the results obtained using large training corpora (see section 4.2) with those obtained with fairly small training sets (see section 5) points
out that the best policy in both cases is the combination of the learned tree-based
model with the best n-gram model.
When using large training corpora, the reported accuracy (97.36%) is, if not
better, at least as good as that of a number of current non-linguistically based taggers -see (Màrquez & Padró 1997) for further details-. When using small training
corpora, a promising 96.12% was obtained for English.
Deeper application of the techniques, together with the collaboration of both
taggers in a voting approach was used to develop from scratch a 5.5Mw annotated
corpus (LexEsp) with an estimated accuracy of 97.8%. This result confirms
the validity of the proposed method and shows that a very high accuracy
is possible for Spanish tagging with a relatively low manual effort. More details
about this issue can be found in (Màrquez et al. 1998).
However, further work is still to be done in several directions. Referring to the
language model learning algorithm, we are interested in testing more informed
attribute selection functions, considering more complex questions in the nodes and
finding a good smoothing procedure for dealing with very small ambiguity classes.
See (Màrquez & Rodríguez 1997) for a first approach.
In reference to the information that this algorithm uses, we would like to explore
the inclusion of more morphological and semantic information, as well as more
complex context features, such as non-limited distance or barrier rules in the style
of (Samuelsson et al. 1996).
We are also especially interested in extending the experiments involving combinations
of more than two taggers in a double direction: first, to obtain less noisy
corpora for the retraining steps in bootstrapping processes; and second, to construct
ensembles of classifiers to increase global tagging accuracy. We plan to apply these
techniques to develop taggers and annotated corpora for the Catalan language in
the near future.
We conclude by saying that we have carried out first attempts (Padró 1998) in
using the same techniques to tackle another classification problem in the nlp area,
namely Word Sense Disambiguation (wsd). We believe, as other authors do, that
we can take advantage of treating both problems jointly.
Acknowledgments
This research has been partially funded by the Spanish Research Department (CI-
CYT's ITEM project TIC96-1243-C03-02), by the EU Commission (EuroWordNet)
and by the Catalan Research Department (CIRIT's quality research group
1995SGR 00566).
Appendix A
We list below a description of the Penn Treebank tag set, used for tagging the wsj
corpus. For a complete description of the corpus see (Marcus et al. 1993).
CC Coordinating conjunction
CD Cardinal number
DT Determiner
EX Existential there
FW Foreign word
IN Preposition
JJ Adjective
JJR Adjective, comparative
JJS Adjective, superlative
LS List item marker
MD Modal
NN Noun, singular
NNP Proper noun, singular
NNS Noun, plural
NNPS Proper noun, plural
POS Possessive ending
PRP Personal pronoun
PRP$ Possessive pronoun
RB Adverb
RBR Adverb, comparative
RBS Adverb, superlative
RP Particle
TO to
UH Interjection
VB Verb, base form
VBD Verb, past tense
VBN Verb, past participle
VBP Verb, non-3rd ps. sing.
present
VBZ Verb, 3rd ps. sing.
present
WDT wh-determiner
WP wh-pronoun
WP$ Possessive wh-pronoun
WRB wh-adverb
. End of sentence
, Comma
" Straight double quote
` Left open single quote
`` Left open double quote
' Right close single quote
'' Right close double quote
Notes
1. The size of tag sets differs greatly from one domain to another. Depending on the contents, complexity and level of annotation, they range from 30-40 to several hundred different tags. Of
course, these differences have important effects in the performance rates reported by different
systems and imply difficulties when comparing them. See (Krenn & Samuelsson 1996) for a
more detailed discussion on this issue.
2. Nevertheless, recent studies on tagger evaluation and comparison (Padró and Màrquez 1998)
show that the noise in test corpora -as is the case of wsj- may significantly distort the
evaluation and comparison of tagger accuracies, and may invalidate even an improvement such
as the one reported here when the test conditions of both taggers are not exactly the same.
References
Boltzmann machines and their applications.
An inequality and associated maximization technique in statistical estimation for probabilistic functions of a Markov process.
Classification and Regression Trees.
Unsupervised Learning of Disambiguation Rules for Part-of-speech Tagging
Domain Specific Knowledge Acquisition for Conceptual Sentence Analysis.
PhD Thesis
and Turmo J.
Tagging French - comparing a statistical and a constraint-based method
A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text.
Elements of Information Theory.
A Practical Part-of-Speech Tagger
Grammatical Category Disambiguation by Statistical Optimization.
MTB: A Memory-Based Part-of-Speech Tagger Generator
The Computational Analysis of English.
Automatic Grammatical Tagging of English.
Constraint Grammar.
Estimating Attributes: Analysis and Extensions of RELIEF.
The Linguist's Guide to Statistics.
Constraint Satisfaction as Global Optimization.
An Optimization-based Heuristic for Maximal Constraint Satisfaction
An optimization approach to relaxation labelling algorithms.
Learning Grammatical Structure Using Statistical Decision-Trees
Building a Large Annotated Corpus of English: The Penn Treebank.
Towards Learning a Constraint Grammar from Annotated Corpora Using Decision Trees.
A Flexible POS Tagger Using an Automatically Acquired Language Model.
Some Experiments on the Automatic Acquisition of a Language Model for POS Tagging Using Decision Trees.
Using Decision Trees for Coreference Resolution.
Tagging English Text with a Probabilistic Model.
Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning.
Corpus Linguistic and the automatic analysis of English.
A Hybrid Environment for Syntax-Semantic Tagging
Llenguatges i Sistemes Informàtics
On the Evaluation and Comparison of Taggers: the Effect of Noise in Testing Corpora.
Learning Compatibility Coefficients for Relaxation Labeling Processes.
Using Simulated Annealing to Train Relaxation Labelling Processes.
Induction of Decision Trees.
A Simple Introduction to Maximum Entropy Models for Natural Language Processing.
On the accuracy of pixel relaxation labelling.
Models.
Maximum Entropy Modeling for Natural Language.
Scene labelling by relaxation operations.
Adaptive Statistical Language Modeling: A Maximum Entropy Approach.
PhD Thesis.
Inducing Constraint Grammars.
Comparing a Linguistic and a Stochastic Tagger.
Aggregate and mixed-order Markov models for statistical language processing
Probabilistic Part-of-Speech Tagging Using Decision Trees
Relaxation Methods in Engineering Science.
Relaxation and Neural Learning: Points of Convergence and Divergence.
Journal of Parallel and Distributed Computing 6
Three Studies of Grammar-Based Surface Parsing on Unrestricted English Text
Developing a Hybrid NP Parser.
Understanding line drawings of scenes with shadows: Psychology of Computer Vision.
Combining Independent Knowledge Sources for Word Sense Disambiguation.
A Statistical-Heuristic Feature Selection Criterion for Decision Tree Induction
Contributing Authors: Lluís Màrquez
Keywords: decision trees induction; part of speech tagging; constraint satisfaction; corpus-based and statistical language modeling; relaxation labeling
Automated generation of agent behaviour from formal models of interaction

We illustrate how a formal model of interaction can be employed to generate documentation on how to use an application, in the form of an Animated Agent. The formal model is XDM, an extension of Coloured Petri Nets that enables representing user-adapted interfaces, simulating their behaviour and making pre-empirical usability evaluations. XDM-Agent is a personality-rich animated character that uses this formal model to illustrate the role of interface objects and to explain how tasks may be performed; its behaviour is programmed by a schema-based planning followed by a surface generation, in which verbal and non-verbal acts are combined appropriately; the agent's 'personality' may be adapted to the user characteristics.

INTRODUCTION
The increasing complexity of user interfaces requires specific
methods and tools to design and describe them; an emerging
solution is to employ formal methods for a precise and
unambiguous specification of interaction. Graphical methods are
preferred since they are more easily perceived by users without a
particular experience [6, 14, 28]. However, formal methods
require a considerable effort in building the model; this is,
probably, the main reason why even those of them that proved to
be valid in HCI research find difficulties in being applied in
interface engineering.
There are, currently, two directions in which research tries to
overcome this problem. The first one aims at developing
tools that simplify the modelling process by integrating formal
methods with artificial intelligence techniques: the model is
augmented with a knowledge base that represents HCI design
guidelines, to generate semiautomatically the interface prototype:
see, for instance, the MECANO [22] and MOBI-D [29] projects.
The second direction of research is focused on proving that
efforts spent in building a formal model can be partially recovered if that model is employed to ease some of the designer's tasks, such as early prototyping, automated generation of interface objects and help messages, and pre-empirical interface evaluation. Examples
of projects that follow this trend are TADEUS [13] and TLIM
[29]. A third perspective sees using knowledge in the formal
model for generating software documentation. Documentation
may refer to several aspects of software and may be addressed to
several types of users. It may be aimed at reconstructing "logic,
structure and goals that were used in writing a program in order
to understand what the program does and how it does it", as in
MediaDoc [12]; in this case, software engineers are the main
users of the documentation produced. Alternatively, it may be
aimed at describing how a given application can be used. In this
case, the addressees of documentation are the end users of the
application, whose need for information varies according to the
tasks they perform and to their frequency of use and experience
with the application.
The idea of producing a User Manual as a byproduct of interface
design and implementation becomes more practicable if a formal
model of interaction is employed as a knowledge base to the two
purposes. The most popular formal models and tools that had
originally been proposed to guide interface design and
implementation have been employed to automatically produce
help messages: for instance, Petri Nets [25] and HUMANOID
Hyper Help [22]; in these systems, help messages are presented
as texts or hypertexts, in a separate window. To complete the
software documentation, animations have been proposed as well
(for example, in UIDE), that combine audio, video and
demonstrations to help the user to learn how to perform a task
[32]. Other projects focused on the idea of generating, from a
knowledge base, the main components of an instruction manual
[34, 24]: for instance, DRAFTER and ISOLDE's aim is to
generate multilingual manuals from a unique knowledge base
[27, 31, 17]. Some of these Projects start from an analysis of the
manuals of some well-known software products, to examine the
types of information they include and the linguistic structure of
each of them [15]. By adopting the metaphor of 'emulating the
ideal of having an expert on hand to answer questions', I-Doc (an
Intelligent Documentation production system) analyses the
interactions occurring during expert consultations, to categorize
the users' requests and to identify the strategies they employ for
finding the answer to their questions [18]. This study confirms that the questions users ask are a function not only
of their tasks, but also of their levels of experience: more system-oriented
questions are asked by novices while experts tend to ask
goal-oriented, more complex questions.
With the recent spread of research on animated characters, the
idea of emulating, in a User Manual, the interaction with an
expert, has a natural concretisation in implementing such a
manual in the form of an Animated Agent. The most notable
examples of Pedagogical Agents are Steve, Adele, Herman the
Bug, Cosmo and PPP-Persona, all aimed at some form of
intelligent assistance, be it presentation, navigation on the Web,
tutoring or alike [19, 2, 21, 30, 4]. Some of these Agents
combine explanation capabilities with the ability to provide a
demonstration of the product, on request.
In previous papers, we proposed a formalism named XDM
(Context-Sensitive Dialogue Modeling), in which Coloured Petri
Nets are extended to specify user-adapted interaction modeling;
we then described a tool for building XDM models and
simulating the interface behavior, and we enhanced this tool with the ability to perform pre-empirical evaluations of the interface
correctness and usability [8]. We subsequently investigated how
these models could be used as a knowledge source for
generating on-line user manuals in the form of hypermedia or of
an 'Animated Pedagogical Agent' that we called 'XDM-Agent': in
this paper, we present the first results of this ongoing Project. In
the following sections, after justifying why we selected an
Animated Agent as a presentation tool, we describe the XDM-
Agent's main components; we then discuss limits and interests of
this approach and conclude with some comparison with related
works.
2. THE ANIMATED AGENT APPROACH
As a first step of our research on software documentation, we
studied how XDM-Models could be employed to generate
various types of hypertextual help, such as: 'which task may I
perform?', 'how may I perform this task?', 'why is that interaction
object inactive?', and others; we employed schema and ATN-
based natural language generation techniques to produce the
answers. Shifting from hypertextual helps to an Agent-based
manual entails several advantages and raises several
methodological problems. The main opportunity offered by
Animated Agents is to see the software documentation as the
result of a 'conversation with some expert in the field'. In
messages delivered by Animated Agents, verbal and non-verbal
expressions are combined appropriately to communicate
information: this enables the documentation designer to select
the media that is most convenient to vehicle every piece of
information. As several media (speech, body gesture, face
expression and text) may be presented at the same time,
information may be distributed among the media and some
aspects of the message may be reinforced by employing different
media to express the same thing, to make sure that the user
remembers and understands it. A typical example is deixis: when
helps are provided in textual form, indicating unambiguously the
interface object to which a particular explanation refers is not
easy; an animated agent that can overlap to the application
window may solve this problem by moving towards the object,
pointing at it, looking at it and referring to it by speech and/or
text. A second, main advantage, is in the possibility of
demonstrating the system behavior (after a 'How-to' question) by
mimicking the actions the user should do to perform the task and
by showing the effects these actions will produce on the
interface: here, again, gestures may reinforce natural language
expressions. A third advantage is in the possibility of making
visible, in the Agent's attitude, the particular phase of dialog: by
expressing 'give turn', 'take turn', 'listening', 'agreeing', 'doubt' and
other meta-conversational goals, the Agent may give the users
the impression that they are never left alone in their interaction
with the documentation system, that this system really listens to
them, that it shows whether their question was or wasn't clear, and so on.
However, shifting from a hypermedia to an agent-based user
manual corresponds to a change of the interaction metaphor that
implies revising the generation method employed. In hypermedia,
the main problems were to decide 'which information to
introduce in every hypermedia node', 'which links to further
explanations to introduce', 'which media combination to employ
to vehicle a specific message', in every context and for every
user. In agent-based presentations, the 'social relationship
metaphor' employed requires reconsidering the same problems in
different terms; it has, then, to be established 'which is the
appropriate agent's behaviour' (again, in every context and for
every user), 'how can the agent engage the users in a believable
conversation' by providing, at every interaction turn, the 'really
needed' level of help to each of them, 'how should interruptions
be handled and user actions and behaviours be interpreted' so as
to create the impression of interacting online with a tool that
shares some of the characteristics of a human helper. These
problems are common to all Animated Agents: in our project, we
have examined how they may be solved in the particular case of
software documentation.
3. XDM-AGENT
XDM-Agent is a personality-rich animated character that uses
several knowledge sources to explain to the user how to use the
application. The behaviour of this agent is programmed by a
schema-based planning (the agent's 'Mind'), followed by a
surface generation (its 'Body'), in which verbal and nonverbal
acts are combined appropriately. The agent's personality, that is
the way its Mind is programmed and its Body appears to the
user, is adapted to the user characteristics. XDM-Agent exploits
three knowledge sources: (i) a formal description of the
application interface, (ii) a description of the strategies that may
be employed in generating the explanation and (iii) a description
of mental models of the two agents participating in the
interaction (the User and XDM-Agent). Let us describe in more
detail these sources, and how they are used for generating the
agent behaviour.
3.1 The Interface Description Formalism
XDM is a formalism that extends Coloured Petri Nets (CPN:
[35]) to describe user-adapted interfaces. A XDM model includes
the following components:
- a description by abstraction levels of how tasks may be
decomposed into complex and/or elementary subtasks (with
Petri Nets), with the relations among them;
- a description of the way elementary tasks may be performed,
with a logical and physical projection of CPN's transitions;
this description consists in a set of tables that specify the task
associated with every transition, the action the user should
make to perform it and the interaction object concerned;
- a description of the display status before and after every task
is performed, with a 'logical and physical projection' of
CPN's places; this consists in a set of tables that specify
information associated with every place and display layout in
every phase of interaction.
To model user-adaptation, conditions are attached to transitions
and to places, to describe when and how a task may be
performed and how information displayed varies in every
category of users. This allows the designer to restrict access to
particular tasks to particular categories of users and to vary the
way in which tasks may be performed and the display appears. A
detailed description of this formalism may be found in [8], where
we show how we used it in different projects to design and
simulate a system interface and to make semiautomatic
evaluations of consistency and complexity.
In the generation of the Animated User Manual, we employ a
simplified version of XDM, in which Petri Nets (PNs) are
replaced with UANs (User Action Notations: [25]). Like PNs,
UANs describe tasks at different levels of abstraction: a UAN
element represents a task; temporal relations among tasks are
specified in terms of a few 'basic constructs':
sequence (A ; B), iteration (A)*, choice (A | B), order independence, concurrency, interleavability.
These constructs may be combined to describe the decomposition
of a task T as a string in the alphabet that includes UAN
elements and relation operators.
For example, the expression T = (A ; (B | C))* ; D
indicates that subtasks B and C are in alternative, that A has to
be performed before them, that the combination of tasks A, B
and C may be iterated several times and that, finally, the task T
must be concluded by the subtask D. Notice that subtasks A, B,
C and D may be either elementary or complex; at the next
abstraction level, every complex task will be described by a new
UAN.
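To make the notation concrete, such UAN expressions can be represented as a small algebraic datatype; the sketch below is only an illustrative encoding (the class names and the choice of ";" as the sequence operator are ours, not the authors' tool).

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Task:
    name: str                              # elementary or complex task identifier

@dataclass
class Seq:
    parts: List["Expr"]                    # sequence: A ; B

@dataclass
class Choice:
    options: List["Expr"]                  # choice: A | B

@dataclass
class Iter:
    body: "Expr"                           # iteration: (A)*

Expr = Union[Task, Seq, Choice, Iter]

# The example discussed above: T = (A ; (B | C))* ; D
T = Seq([Iter(Seq([Task("A"), Choice([Task("B"), Task("C")])])),
         Task("D")])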
A UAN then provides a linearised, string-based description of
the task relationships that are represented graphically in a Petri
Net. Elements of UANs correspond to PNs' transitions; their
logical and physical projections describe how tasks may be
performed; conditions attached to UANs' elements enable
defining access rights.
Figure 1 shows a representation of the knowledge base that
describes a generic application's interface, as an Entity-Relationship
diagram. UANs' elements, Tasks (either
complex or elementary), Interface Objects (again, complex or
elementary) and Events are the main entities. Elementary
interface objects in a window may be grouped into complex
objects (a toolbar, a subwindow etc). A Task is associated with a
(complex or elementary) Interface Object; a UAN describes how
a complex task may be decomposed into subtasks and the
relations among them; an elementary task may be performed by a
specific Event on a specific elementary Interface Object; an
elementary Interface Object may open a new window that
enables performing a new complex task. Adaptivity is
represented, in the E-R diagram, through a set of user-related
conditions attached to the entities or to the relations (we omit
these conditions from Figure 1 and from the example that
follows, for simplicity reasons). A condition on a task defines
access rights to that task; a condition on an object defines the
user category to which that object is displayed; a condition on the
task-object-event relation defines how that task may be
performed, for that category of users, and so on.
Figure 1. E-R representation of the application-KB.
Let us reconsider the previous example of UAN. Let Ti be a UAN
element and UAN(Ti) the string that describes how Ti may be
decomposed into subtasks, in the UAN language. Let Task(Ti),
Obj(Ti) and Ev(Task(Ti), Obj(Ti)) be, respectively, the task
associated with Ti, the interface object and the event that enable
the user to perform this task. The application-KB will include, in
this example, the following items:
UANs' elements: T, A, B, C, D
Complex Tasks: Task(T): database management functions, Task(A): input identification data
Elementary Tasks: Task(B): delete record, Task(C): update record, Task(D): exit from the task
Interface Objects: Obj(T): W1, Obj(A): W2, Obj(B): B1, Obj(C): B2, Obj(D): B3
Events: Ev(Task(D), Obj(D)): double-click
This model denotes that database management functions (that
can be performed in window W1) need first to input identification
data (with window W2), followed by deleting or updating a record
(buttons B1 and B2); this combination of tasks may be repeated
several times. One may, finally, exit from this task by double-clicking
on button B3.
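For illustration, the same application-KB fragment can be written down as plain data; the field names below are ours, chosen to mirror the Task/Obj/Ev projections, and the UAN string follows the reconstruction used above.

app_kb = {
    "UAN": {
        "T": "(A ; (B | C))* ; D",
    },
    "tasks": {
        "T": "database management functions",
        "A": "input identification data",
        "B": "delete record",
        "C": "update record",
        "D": "exit from the task",
    },
    "objects": {"T": "W1", "A": "W2", "B": "B1", "C": "B2", "D": "B3"},
    "events": {"D": "double-click"},
}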
3.2 XDM-Agent's behaviour strategies
XDM-Agent illustrates the graphical interface of a given
application starting from its main window or from the window
that is displayed when the user requests the agent's help. The
generation of an explanation is the result of a three-step process:
a planning phase that establishes the presentation content; a plan
revision phase that produces a less redundant plan; a realisation
phase, that translates the plan into a presentation. The
hierarchical planning algorithm establishes how the agent will
describe the main elements related to the window by reading its
description in the application-KB: a given communicative goal,
fired from the user request, is recursively decomposed into
subgoals until 'primitive goals' are reached, that do not admit
further decomposition. This process generates a tree structure
whose leaves represent macro-behaviours that can be directly
executed by the agent. At the planning level, adaptation is made
by introducing personality-related conditions in the constraint
field of plan operators, so that the same communicative goal may
produce different decompositions in the different contexts in
which the agent will operate. The presentation plan includes,
most of the times, redundancies due to the fact that objects in the
interface may be of the same type and tasks associated with them
can be activated with the same interaction technique; an
aggregation algorithm synthesizes common elements to produce a
less repetitive presentation. Once the revised presentation plan
for a window is ready, this is given as input to a realisation
algorithm, that transforms it into a sequence of 'macro-behaviors'
(a combination of verbal and nonverbal communicative acts that
enables achieving a primitive goal in the plan).
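As a rough illustration of this realisation step (the goal names and the dispatch table below are invented for the example, not taken from XDM-Agent's operators), the plan tree can be traversed depth-first and each leaf mapped onto a macro-behaviour.

def realize(plan_node, handlers):
    """Depth-first traversal of the presentation plan: internal nodes are
    decomposed goals, leaves are primitive goals executed as macro-behaviours."""
    if plan_node["children"]:
        for child in plan_node["children"]:
            realize(child, handlers)
    else:
        handlers[plan_node["goal"]](plan_node.get("args", {}))

handlers = {
    "introduce-window": lambda a: print("Describe role of", a["window"]),
    "describe-object":  lambda a: print("Point at and name", a["object"]),
    "explain-task":     lambda a: print("Mention task", a["task"]),
}

plan = {"goal": "present-window", "children": [
    {"goal": "introduce-window", "args": {"window": "W1"}, "children": []},
    {"goal": "describe-object",  "args": {"object": "B1"}, "children": []},
    {"goal": "explain-task",     "args": {"task": "delete record"}, "children": []},
]}

realize(plan, handlers)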
The list of macro-behaviors that XDM-Agent is able to
perform is application-independent but domain-dependent; it is
the same for any application to be documented but is tailored to
the documentation task:
a. perform meta-conversational acts: introduce itself, leave,
take turn, give turn, make questions, wait for an answer;
b. introduce-a-window by explaining its role and its
components;
c. describe-an-object by showing it and mentioning its type and
caption (icon, toolbar or other);
d. explain-a-task by mentioning its name;
e. enable-performing-an-elementary-task by describing the
associated event;
f. demonstrate-a-task by showing an example of how the task
may be performed;
g. describe-a-task-decomposition by illustrating the
relationships among its subtasks.
A macro-behaviour is obtained by combining verbal and non-verbal
acts as follows:
- verbal acts are produced with natural language generation
functions, that fill context-dependent templates with values
from the application-KB; the produced texts are subsequently
transformed into 'speech' or 'write a text in a balloon';
- nonverbal acts are 'micro behaviours' that are produced from
MS-Agent's animations, with the aid of a set of auxiliary
functions. The list of micro behaviours that are employed in
our animated user manual is shown in Figure 2.
a. Greetings: Introduction, Leave
b. Meta-Conversational-Gestures: Take_Turn, Give_Turn, Questioning, Listening
c. Locomotive-Gestures: Move_To_Object (Oi), Move_To_Location (x, y)
d. Deictic-Gestures: Point_At_Location (x, y), Point_At_Object (Oi)
e. Relation-Evoking: Evoke_Sequence, Evoke_Iteration, Evoke_Order_Independence, ...
f. Event-Mimicking: Mimic_Click, Mimic_Double_Click, Mimic_Keyboard_Entry, ...
g. Looking: Look_At_User, Look_At_Location (x, y), Look_At_Object (Oi), Look_At_Area ((xi, yi), (xj, yj))
h. Approaching-the-user
Figure 2. Library of XDM-Agent's 'micro behaviours'
This list includes: (i) object-referring gestures: the agent may
move towards an interface object or location, point at it and look
at it; (ii) iconic gestures: the agent may evoke the relationship
among subtasks, that is a sequence, an iteration, a choice, an order independence, a concurrency and so on; (iii) event-mimicking gestures: it can mimic the actions the user should do to perform some tasks: click, double click, keyboard entry, and so on; (iv) user-directed gestures:
the agent may look at the user, get closer to him or her by
increasing its dimension, show a questioning or listening
attitude, manifest its intention to give or take the turn, open and
close the conversation with the user by introducing itself or
saying goodbye. The way these micro behaviors are implemented
depends on the animations included in the employed software: in
particular, a limited overlapping of gestures can be made in MS-
Agent 1 , which only enables generating speech and text at the
same time. We therefore overlap verbal acts to nonverbal ones so
that the Agent can simultaneously move, speak and write
something on a balloon, while we sequence nonverbal acts so as
to produce 'natural' behaviors. We employ nonverbal acts to
reinforce the message vehicled by verbal acts. So, the agent's
speech corresponds to a self-standing explanation, that might be
translated into a written manual; text balloons mention only the
'key' words in the speech, on which users should focus their
attention; gestures support the communication tasks that could
not be effectively achieved with speech (for instance, deixis),
reinforce concepts that users should not forget (for instance, task
relations), support the description of the way actions should be
done (for instance, by mimicking events) and, finally, give the
users a constant idea of 'where they are' in the interactive
explanation process (for instance, by taking a 'listening' or
'questioning' expression). Finally, like for all embodied
characters, speech and gestures are employed, in general, to
make interaction with the agent more 'pleasant' and to give users
the illusion of 'interacting with a companion' rather than
'manipulating a tool'.
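To make the combination of verbal and nonverbal acts more concrete, the following minimal Python sketch (not the authors' implementation; the Agent wrapper and its methods are hypothetical stand-ins for the MS-Agent animation calls) shows how a 'describe-an-object' macro-behaviour could overlap one verbal act with a sequence of micro-behaviours taken from the library of Figure 2:

class Agent:
    # hypothetical wrapper around the character's speech and animation API
    def speak(self, text, balloon=None):
        print("SPEAK:", text, "| balloon:", balloon)
    def play(self, animation, *args):
        print("ANIMATE:", animation, args)

def describe_object(agent, obj, speech, balloon):
    # macro-behaviour: the verbal act (speech plus balloon) is emitted once,
    # while the nonverbal micro-behaviours are sequenced one after another
    agent.speak(speech, balloon)
    agent.play("Move_To_Object", obj)
    agent.play("Point_At_Object", obj)
    agent.play("Look_At_User")

describe_object(Agent(), "Update button",
                "To update an existing record, click on the 'Update' button.",
                "Update")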
3.3 The Mental Models
The two agents involved in interaction are the User and XDM-
Agent. While the user modeling component is very simple (we
classify the users according to their experience with the
application), XDM-Agent's model is more interesting, since its
behaviour is driven by some personality traits that we describe in
terms of its 'helping attitude'. Let's see this in more detail.
A typical software manual includes three sections [15]: (i) a
tutorial, with exercises for new users, (ii) a series of step by step
instructions for the major tasks to be accomplished and (iii) a
ready-reference summary of commands. To follow the 'minimal
manual' principle ("the smaller the manual, the better": J. M.
Carroll, cited in [1]), the Agent should start from one of these
components, provide the "really needed minimal" and give more
details only on the user's request. The component from which to
1 MS-Agent is a downloadable software component that displays
an animated character on top of an application window and
enables it to talk and recognise the user speech. A character
may be programmed by a language that includes a list of
'animations' (body and face gestures): these animations are the
building blocks of XDM-Agent.
start and the information to provide initially may be fixed and
general or may be varied according to the user and to the context.
In the second case, as we mentioned in the Introduction, the user
goals, his/her level of experience and his/her preference
concerning the interaction style may drive selection of the
Agent's ``explanation attitude''. Embodiment of the Agent may
be a resource to make this attitude explicit to the user, by varying
the Agent's appearance, gesture, sentence wording and so on.
XDM-Agent is able to apply two different approaches to
interface description; in the task-oriented approach, it
systematically instructs the user on the tasks the window enables
performing, how they may be performed and in which sequence
and provides, if required, a demonstration of how a complex task
may be performed. In command-oriented descriptions, it lists the
objects included in the window in the order in which they are
arranged in the display and provides a minimal description of the
task they allow to perform; other details are given only on the
User request. Therefore, in the first approach the Agent takes the
initiative and provides a detailed explanation, while in the
second one the initiative is 'mixed' (partly of the Agent, partly of
the User), explanations are, initially, less detailed and a dialog
with the User is established, to decide how to go on in the
explanation.
If the metaphor of 'social interaction' is applied to the User-Agent
relationship, the two approaches to explanation can be
seen as the manifestation of two different 'help personalities' in
the Agent [9]: (i) an overhelper, that tends to interpret the implicit
delegation received by the user in broad terms and explains
anything he presumes the user desires to know, and (ii) a literal
helper, that provides a minimal description of the concepts the
user explicitly asks to know. These help attitudes may be seen as
particular values of the dominant/submissive dimension of
interpersonal behaviour 2 , which is considered to be the most
important factor affecting human-computer interaction [23].
Although some authors proved that dominance may be
operationalised by only manipulating the phrasing of the texts
shown in the interface and the interaction order (again, [23]; but
also [10]), others claim that the user appreciation of the interface
personality may be enhanced by varying, as well, the Agent's
'external appearance': body posture, arm, head and hands
gestures, moving [16, 3, 5]. By drawing on the cited experiences,
we decided to embody the overhelping, dominant attitude in a
more 'extroverted' agent that employs a direct and confident
phrasing and gestures and moves much. We embody, on the
contrary, the literal helper, submissive attitude in a more
'introverted' agent, that employs lighter linguistic expressions
and moves and gestures less. To enhance matching of the
Agent's appearance with its underlying personality, we select
Genie to represent the more extroverted, dominant personality
and Robby to represent the submissive one: this is due in part to
the way the two characters are designed and animated, in the
MS-Agent tool, and in part to the expectation they raise in the
2 According to [23], dominance is marked, in general, by a
behavior that is "self-confident, leading, self-assertive, strong
and take-charge"; submissiveness is marked, on the contrary,
by a behavior that is "self-doubting, weak, passive, following
and obedient".
user: Genie is seen as more 'empathetic', someone who takes
charge of the Users and anticipates their needs, while Robby is
seen as more 'formal', someone who is there only to respond to
orders. Figure 3 summarises the main differences between the
two characters.
Robby | Genie
object-oriented presentation | task-oriented presentation
submissive | dominant
introverted | extroverted
is rather 'passive'; says the minimum and waits for the user's orders | is very 'active': takes the initiative and provides detailed explanations
employs 'light' linguistic expressions, with indirect and uncertain phrasing (suggestions) | employs 'strong' linguistic expressions, with direct and confident phrasing (commands)
gestures the minimum: minimum locomotion, limited movements of arms and body, avoids getting close to the user | gestures are more 'expansive': more locomotion, wider movements, gets closer to the user
speaks slow | speaks high
Figure 3. Personality traits of Robby and Genie, with differences in behaviour.
How do we combine the Agent's personality with the User's
characteristics? Some authors claim that task-based explanations
would be more suited to novice users and object-oriented
explanations more suited to experts. For instance, empirical
analysis of a corpus of documents, in TAILOR, showed that
complex devices are described in an object-oriented way in adult
encyclopaedias, while descriptions in junior encyclopaedias tend
to be organized in a process-oriented, functional way [26]. On the
other side, a 'dominant', `extroverted' personality is probably
more suited to a novice, while a 'submissive' and `formal' one
will be more easily accepted by an expert user. This led us to
select Robby for experts and Genie for novices.
4. IMPLEMENTATION
We implemented XDM-Agent in Java and Visual Basic, under
Windows95. Adaptation of the generated documentation to the
agent's personality is made at both the planning and the
realisation phase. At the planning level, adaptation is made by
introducing personality-related conditions in the planning
schemas in order to generate a task-oriented or an object-oriented
plan. At the surface realisation level, a unique
Behaviour Library is employed in the two cases, with different
ways of realising every behaviour. In the task-oriented plan, a
window is introduced by mentioning the complex task that this
window enables performing; the way this complex task is
decomposed into less complex subtasks is then described, by
examining (in the application-KB) the UAN associated with this
window. For each element of the UAN, the task and the
associated object are illustrated; if the task is 'elementary', the
event enabling to perform it is mentioned; if it is complex, the
user is informed that a demonstration of how to perform it may
be provided, if requested. Task relations (again, from the UAN)
are then illustrated. Description goes on by selecting the next
window to describe, as one of those that can be opened from the
present window. In the object-oriented plan, interface objects are
described by exploring their hierarchy in a top-down way; for
each elementary object, the associated task is mentioned. The
turn is then given to the user, who may indicate whether and how
to proceed in the explanation.
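The traversal just described can be pictured with a short Python sketch; the Task class and the emit callback are hypothetical stand-ins for the application-KB entries and the behaviour library, not the system's actual code:

class Task:
    # hypothetical stand-in for a UAN entry in the application-KB
    def __init__(self, name, obj, event=None, subtasks=(), relation=None):
        self.name, self.obj, self.event = name, obj, event
        self.subtasks, self.relation = list(subtasks), relation

def describe_window_task_oriented(window_task, emit):
    emit("Introduce window: it enables '%s'." % window_task.name)
    for t in window_task.subtasks:
        emit("Describe object '%s' associated with task '%s'." % (t.obj, t.name))
        if t.event:                      # elementary task: mention the event
            emit("Enable performing it: %s." % t.event)
        else:                            # complex task: offer a demonstration
            emit("Offer a demonstration of how to perform it.")
    if window_task.relation:
        emit("Illustrate task relation: %s." % window_task.relation)

update = Task("update a record", "'Update' button", event="click")
delete = Task("delete a record", "'Delete' button", event="click")
root = Task("manage the database", "main window",
            subtasks=[update, delete], relation="order independence")
describe_window_task_oriented(root, print)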
Planning is totally separated from realisation: we employ, to
denote this feature, the metaphor of 'separating XDM-Agent's
Mind from its Body'; we might associate, in principle, Robby's
appearance and behaviour to a task-oriented plan (that is,
Robby's Body to Genie's Mind) and the inverse. This separation
of the agent's body from its mind gives us the opportunity to
implement the two components on the server-side and on the
client-side respectively, and to select the agent's appearance that
is preferable in any given circumstance. An example of the
difference in the behaviour of the two agent's personalities is
shown in Figure 4, a and b.
Macro-Behaviour | Speech | Balloon | Gesture
Introduce-a-window (task-oriented) | In this window, you can perform the main database management functions: | | LookAtUser
Describe-object | - to select a database management function, use the commands in this ...; | Select database managem. functions | MoveToObject, LookAtUser
idem | - to input identification data, use the textfields in this subwindow; | Input identification data | idem
complex-task | There are two database management functions you may select: | |
Describe-object, Enable-to-perform-a-task | - to update an existing record, click on the 'Update' button; | Update | MoveToObject, PointAtObject, LookAtObject, LookAtUser, MimicClick
idem | - to delete a record, click on the 'Delete' button; | Delete | idem
Figure 4a: task-oriented description, by Genie
Macro-Behaviour | Speech | Balloon | Gesture
Introduce-a-window (object-oriented) | I'm ready to illustrate you the objects in this window: | | LookAtUser
complex-object | - a toolbar, with 5 buttons: | |
Describe-object | the first one enables you to update an existing record | Update | PointAtObject, LookAtObject
idem | the second one enables you to delete a record | Delete | idem
Figure 4b: object-oriented description, by Robby
5. INTERESTS AND LIMITS
The first positive result of this research is that we could verify
that the formal model we employed to design and evaluate the
interface (XDM) enables us, as well, to generate the basic
components of an Animated Instruction Manual. Planning the
structure of the presentation directly from this formal model
contributes to insure that "the manual reflects accurately the
system's program and that it can be viewed as a set of pre-planned
instructions"[1]. An Animated User Manual such as the
one we generate is probably suited to the needs of 'novice' users;
we are not sure, on the contrary, that the more complex questions
an expert makes can be handled efficiently with our application-
KB. We plan to check these problems in an evaluation study,
from which we expect some hints on how to refine our system. In
this study, we plan to assess which is the best matching between
the two XDM-Agent's personalities and the users characteristics,
including their experience and their personality: in fact, the
evidence on whether complementarity or similarity-attraction
holds between the system and the user personalities is rather
controversial [16], and we suspect that the decision depends on
the particular personality traits considered.
The present prototype has several limits: some of them are due to
the generation method we employ, others to the tool:
- The main limit in our generation systems is that we do not
handle interruptions: users can ask questions of the system
only when the agent gives them the turn;
- the second limit originates from available animations. In MS-
Agent characters, the repertoire of gestures is rather limited,
especially considering face and gaze expressions. In addition,
the difficulty of overlapping animations does not allow us to
translate into face or body gestures the higher parts of the
discourse plan; XDM-Agent thus lacks those gestures that
aid in integrating adjacent discourse spans into higher order
groupings [20], for instance by expressing rhetorical relations
among high-level portions of the plan. To overcome these
limits, we should build our own character, with the
mentioned animations.
We still have to assess whether Animated Agents really
contribute to making software documentation more usable, under
which conditions and for which user categories. This
consideration applies to the majority of research projects on
Animated Agents, which has been driven, so far, by an optimistic
attitude rather than a careful assessment of the validity of results
obtained.
6. RELATED WORK
Our research lies in the crossroad of several areas: formal models
of HCI, user adaptation and believable agents. There are, we
believe, some new ideas in the way these areas are integrated
into XDM-Agent: we showed, in previous papers, that a unique
formal model of HCI can be employed to unify several steps of
the interface design and implementation process: after analysis of
user requirements has been completed, these requirements can
be transferred into a UI specification model that can be
subsequently employed to implement the interface, to simulate
its behavior in several contexts and to make pre-empirical
usability evaluations. In this paper, we show that the same model
can be employed, as well, to produce an online user manual. Any
change in the UI design must be transferred into a change in the
formal model and automatically produces a new version of the
interface, of the simulation of its behavior, of usability measures
and of the documentation produced.
Adaptation to the user and to the context is represented through
parameters in the model and reflects into a user-adapted
documentation. In particular, adaptation to the user needs about
documentation is performed through the metaphor of 'changing
the personality of the character who guides the user in examining
the application'. That computer interfaces have a personality was
already proved by Nass and colleagues, in their famous studies in
which they applied to computers theories and methods originally
developed in the psychological literature for human beings (see,
[23]). Taylor and colleagues' experiment demonstrates that "a
number of personality traits (in the 'Five-Factor' model) can be
effectively portrayed using either voice alone or in combination
with appropriately designed animated characters" [33]. Results
of the studies by these groups oriented us in the definition of the
verbal style and the nonverbal behaviors that characterise XDM-
Agent's personalities (see [16]); Walker and colleagues' research
on factors affecting the linguistic style guided us in the
diversification of the speech attitudes [36]. We examined, in the
past, the personality traits that might be relevant in HCI by
formalising the cooperation levels and types and the way they
may combine [7, 9]: Robby and Genie are programmed according
to some of these traits; other traits (such as a critical helper, a
supplier and so on) might be attributed to different characters,
with different behaviors. The long-term goal of our research is to
envisage a human-computer interface in which the users can
settle, either implicitly or explicitly, the 'helping attitude' they
need in every application: XDM-Agent is a first step towards this
direction.
7.
--R
Social impacts of computing: Codes of professional ethics.
Mamdani and Fehin
Lifelike computer characters: the Persona project ar Microsoft.
Emotion and personality in a conversational character.
ADV Charts: a visual formalism for interactive systems.
Personality traits and social attitudes in multiagent cooperation.
Formal description and evaluation of user-adapted interfaces
How can personality factors contribute to make agents more 'believable'?
Modelling and Generation of Graphical User Interfaces in the TADEUS approach
Generation of knowledge-acquisition tools from domain ontologies
Statecharts: a visual formalism for complex systems.
Two sources of control over the generation of software instructions.
Personality in conversational characters: building better digital interaction partners using knowledge about human personality preferences and perceptions.
Isolde: http://www.
Agents. http://www.
Some relationships between body motion and speech.
Deictic believability: coordinated gesture
Automatic generation of help from interface design models.
Can computer personalities be human personalities?
Applying the Act-Function-Phase Model to Aviation Documentation
Validating interactive system design through the verification of formal task and system models.
The use of explicit User Models in a generation system for tailoring answers to the user's level of expertise.
DRAFTER: an interactive support tool for writing multilingual instructions.
Developing Adaptable Hypermedia.
Towards a General Computational Framework for Model-Based Interface Development Systems
Computer support for authoring multilingual software documentation.
Automatic generation of textual
Providing animated characters with designated personality profiles.
From Logic to manuals.
Extending Petri Nets for specifying man-machine interaction
Improvising linguistic style: social and affective bases for agent personality.
--TR | user manuals generation;formal models of interaction;animated agents |
345541 | A novel method for the evaluation of Boolean query effectiveness across a wide operational range. | Traditional methods for the system-oriented evaluation of Boolean IR system suffer from validity and reliability problems. Laboratory-based research neglects the searcher and studies suboptimal queries. Research on operational systems fails to make a distinction between searcher performance and system performance. This approach is neither capable of measuring performance at standard points of operation (e.g. across R0.0-R1.0). A new laboratory-based evaluation method for Boolean IR systems is proposed. It is based on a controlled formulation of inclusive query plans, on an automatic conversion of query plans into elementary queries, and on combining elementary queries into optimal queries at standard points of operation. Major results of a large case experiment are reported. The validity, reliability, and efficiency of the method are considered in the light of empirical and analytical test data. | Figure
1. An example of a high recall Oriented
query used by Harter [10] to illustrate the facet
based query planning approach.
(information retrieval OR
online systems OR
AND
trial(1w)error
expert systems OR
artificial intelligence OR
behavior?/DE,ID,TI
fundamental validity problem. Queries exploit the Boolean IR
model in a suboptimal way.
1.2 Harter's Idea: the Most Rational Path
Harter [10] introduced an idea for an evaluation method based
on the notion of elementary queries (EQ).1 Harter used a single
search topic to illustrate how the method could be applied. He
designed a high recall oriented query plan (see Fig 1). Harter
applied the building blocks search strategy, which is quite
commonly used by professional searchers [6, 9, 12, 16].
The major steps of the building blocks strategy are: 1) Identify the
major facets and their logical relationships with one another.
2) Select query terms that represent each facet: words, phrases,
etc. 3) Combine the query terms of a facet by disjunction (OR
operation). 4) Combine the facets by conjunction or negation
(AND or ANDNOT operation) [9].
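As a purely illustrative example (the facet contents below are hypothetical, loosely modelled on Fig. 1), such a query can be assembled from facets with a few lines of Python:

facet_a = ["information retrieval", "online systems"]
facet_b = ["tactic?", "heuristic?", "trial(1w)error"]

# terms within a facet are ORed, the facets themselves are ANDed
query = " AND ".join("(" + " OR ".join(f) + ")" for f in (facet_a, facet_b))
print(query)
# (information retrieval OR online systems) AND (tactic? OR heuristic? OR trial(1w)error)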
The notion of facet is important in query planning. It is a
concept that is identified from, and defines one exclusive aspect
of a search topic. In step 2, a typical goal is to discover all
plausible query terms appropriate in representing the selected
facet.
Next, Harter retrieved all documents matching the conjunction
of facets A and B represented by the disjunction of all selected
query terms, and assessed the relevance of resulting 371
documents. In addition, all conjunctions of two query terms
(called elementary queries) from the query plan representing
facets A and B in Fig. 1 were composed and executed. A sample
from the 24 elementary queries and the summary of their
retrieval results are presented in Table 1.
Harter [10] demonstrated the procedure of constructing optimal
queries (called the most rational path). An estimate for
maximum precision across the whole relative recall range was
determined by applying a simple incremental algorithm:
1. To create the initial optimal query, choose the EQ that
achieves the highest precision.
Actually Harter talked about elementary postings sets. This is very
confusing since it applies set-based terminology to address queries as
logical statements.
Table 1. Retrieval results for the 24 elementary queries in the case search by Harter (1990) (columns: elementary query, number of documents retrieved, number of relevant documents, precision, and recall; example elementary queries include information retrieval AND tactic?, information retrieval AND heuristic?, information retrieval AND trial(1w)error, and conjunctions with behavior?/DE,ID,TI and cognitive/de).
Figure 2. Recall and precision of the 24 elementary queries and the most rational path in the case search presented by Harter [10].
2. Create in turn the disjunction of each of the remaining EQs
with the current optimal query. Select the disjunction with
the EQ that maximizes precision. The disjunction of the
current optimal query and the selected EQ creates a new
optimal query.
3. Repeat step 2 until all elementary queries have been
exhausted.
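The following Python sketch implements the three steps above under the assumption that each elementary query is represented simply by the set of document identifiers it retrieves and that the set of relevant identifiers is known; it is an illustration of the idea, not Harter's code:

def most_rational_path(eq_results, relevant):
    # eq_results: dict mapping an EQ name to the set of documents it retrieves
    def precision(docs):
        return len(docs & relevant) / len(docs) if docs else 0.0

    remaining = dict(eq_results)
    retrieved = set()
    path = []
    while remaining:
        # step 2: add the EQ whose disjunction with the current query
        # maximizes precision (step 1 is simply the first round of this loop)
        best = max(remaining, key=lambda q: precision(retrieved | remaining[q]))
        retrieved |= remaining.pop(best)
        recall = len(retrieved & relevant) / len(relevant)
        path.append((best, recall, precision(retrieved)))
    return path

eqs = {"s3": {1, 2, 3}, "s18": {3, 4}, "s24": {4, 5, 6, 7}}
for step in most_rational_path(eqs, relevant={1, 2, 3, 4}):
    print(step)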
Precision and recall values for the 24 elementary queries and
the respective curve for the optimal queries are presented in Fig
2.
Harter never reported full-scale evaluation results based on the
idea of the most rational path except this single example. Nor did
he develop operational guidelines for a fluent use of the
method in practice.
1.3 Research Goals
The main goal of the study was to create an evaluation method
for measuring performance of Boolean queries across a wide
operational range by elaborating the ideas introduced by Harter
[10]. The method is presented and argued using the framework
suggested by Newell, M = {domain, procedure, justification}
[19]:
1. The domain of the method specifies the appropriate
application area for the method.
2. The procedure of the method consists of the ordered set of
operations required in the proper use of the method.
Especially, two major operations unique to the procedure
need to be elaborated: a) Query formulation. How the set of
elementary queries is composed from a search topic? b)
Query optimization. What algorithm should be used for
combining the elementary queries to find the optimal query
for different operational levels?
3. The justification of the method. The appropriateness,
validity, reliability and efficiency of the method within the
specified domain must be justified.
The structure of this paper is the following: First, some basic
concepts and the procedure of the method are introduced.
Second, a case experiment is briefly reported to illustrate the
domain and the use of the proposed method in a concrete
experimental setting. Third, the other justification issues of the
method: validity, reliability and efficiency are discussed.
Several empirical tests were carried out to assess the potential
validity and reliability problems in applying the method.
2. OUTLINE FOR THE METHOD
The aim of this section is to introduce a sound theoretical
framework for the procedure of the method and to formulate
operational guidelines for exercising it.
2.1 Query Structures and Query Tuning Spaces
IR models address the issue of comparing a query as a
representation of a request for information with representations
of texts. The Boolean IR model supports rich query structures, a
binary representation of texts, and an exact match
technique for comparing queries and text representations [2].
A Boolean query consists of query terms and operators. Query
terms are usually words, phrases, or other character strings
typical of natural language texts. The Boolean query structures
are based on three logic connectives, conjunction (∧),
disjunction (∨), and negation (¬), and on the use of parentheses. A
query expresses the combination of terms that retrieved
documents have to contain. If we want to generate all possible
Boolean queries for a particular request, we have to identify all
query terms that might be useful, and to generate all logically
reasonable query structures.
Facet, as defined in section 1.2, is a very useful notion in
representing relationships between Boolean query structures
and the search topic. Terms within a facet are naturally
combined by disjunctions. Facets themselves present the
exclusive aspects of desired documents, and are naturally
combined by Boolean conjunction or negation. [9].
Expert searchers tend to formulate query plans applying the
notion of facet [9, 16]. Resulting query plans are usually in a
standard form, the conjunctive normal form (CNF) (for a formal
definition, see [1]). The structure of a Boolean query can be
easily characterized in CNF queries: Query exhaustivity (Exh) is
the number of facets that are exploited. Query extent (QE)
characterizes the broadness of a query, and can be measured,
e.g. as the average number of query terms per facet. For
instance, in the query plan designed by Harter Exh=2 and
QE=5.5 (see Fig. 1).
The changes made in query exhaustivity and extent to achieve
appropriate retrieval goals are called here query tuning. The
range within which query exhaustivity and query extent can
change sets the boundaries for query tuning. The set of all
elementary queries and their feasible combinations composed at
all available exhaustivity and extent levels form the query
tuning space.
In the example by Harter (Fig 1), seven different disjunctions of
query terms can be generated from facet A (= 2^3 - 1) and 255 from
facet B (= 2^8 - 1). The total number of possible EQ combinations
is then 7 x 255 = 1,785 at Exh = 2. In addition, 7 and 255 EQ
combinations can be formed at Exh=1 from facets A and B,
respectively. Thus, the total number of EQ combinations
creating the query tuning space across exhaustivity levels 1 and
2 for the sample query plan is 2,047.
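The size of such a query tuning space follows directly from the number of query terms per facet; the short Python sketch below reproduces the figure of 2,047 for the sample plan (facet sizes 3 and 8):

from itertools import combinations
from math import prod

def tuning_space_size(terms_per_facet):
    # each facet with t terms yields 2**t - 1 non-empty disjunctions;
    # combinations are counted at every exhaustivity level
    total = 0
    for exh in range(1, len(terms_per_facet) + 1):
        for subset in combinations(terms_per_facet, exh):
            total += prod(2 ** t - 1 for t in subset)
    return total

print(tuning_space_size([3, 8]))   # 7 + 255 + 7*255 = 2047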
2.2 The Procedure of the Method
The procedure of the proposed method consists of eight
operations at three stages:
STAGE I. INCLUSIVE QUERY PLANNING
1. Design inclusive query plans. Experienced searchers
formulate inclusive query plans for each given search topic.
It yields a comprehensive representation of the query tuning
space available for a search topic.
2. Execute extensive queries. The goal of extensive queries is
to gain reliable recall base estimates.
3. Determine the order of facets. The facet order of inclusive
query plans is determined by ranking the facets according to
their measured recall power, i.e. their capability to retrieve
relevant documents.
STAGE II. QUERY OPTIMISATION
4. Generate the set of elementary queries (EQ). Inclusive query
plans in the conjunctive normal form (CNF) at different
exhaustivity levels are transformed into the disjunctive
normal form (DNF) where the elementary conjunctions
create the set of elementary queries. All elementary queries
are executed to find the set of relevant and non-relevant
documents associated with each EQ.
5. Select standard points of operation (SPO). Both fixed recall
levels R0.1, ..., R1.0 and fixed document cut-off values, e.g.
DCV2, DCV5, ..., DCV500, may be used as SPOs.
6. Optimization of queries. An optimisation algorithm is used
to compose the combinations of EQs performing optimally at
each selected SPO.
STAGE III. EVALUATION OF RESULTS
7. Measure precision at each SPO. Precision can be used as a
performance measure. Precision is averaged over all search
topics at each SPO.
8. Analyse the characteristics of optimal queries. The optimal
queries are analysed to explain the changes in the
performance of an IR system.
The above steps describe the ordered set of operations
constituting the procedure of the proposed method. Inclusive
query planning (steps 1-3) and the search for the optimal set of
elementary queries (steps 4-6) are the focus of this study.
2.3 Inclusive Query Planning
The techniques of query planning are routinely taught to novice
searchers [9, 16]. A common feature in different query planning
techniques is that they emphasize the analysis and identification
of searchable facets, and the representation of each facet as an
exhaustive disjunction of query terms. The goal of inclusive
query planning is similar, but the thoroughness of identification
task is stressed even more. In inclusive query planning, the goal
is to identify
1. all searchable facets of a search topic, and
2. all plausible query terms for each facet.
A major doubt in using human experts to design queries is
probably associated with the reliability of experimental designs.
For instance, the average inter-searcher overlap in selection of
query terms (measured character-by-character) is usually around
per cent [25]. Fortunately, the situation is not so bad when
concepts are considered. For instance, in a study by Iivonen [12],
the average concept-consistency rose up to 88 per cent, and
experienced searchers were even more consistent. This indicates
that expert searchers are able to identify the facets of a topic
consistently although the overlap of queries at string level may
be low.
The identification of all plausible query terms for each
identified facet is another task requiring searching expertise.
Basically, the comprehensiveness of facet representations is
mostly a question of how much effort are used to identify
potential query terms. The query designer is freed from the
needs to make compromised query term selections typical of
practical search situations. The optimization operation will
automatically reject ill-behaving query terms. The process can
be improved by appropriate tools (dictionaries, thesauri,
browsing tools for database indexes, etc.).
The final step is to decide the order of facets in the query plan.
In the case of a laboratory test collection, full relevance data (or
at least its justified estimate) is available. The facets of an
inclusive query plan can be ranked in the descending order of
recall. The disjunction of all query terms identified for a facet is
used to measure recall values.
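A minimal sketch of this ranking step, assuming that the result set of each facet disjunction and the set of relevant documents are available as sets of identifiers:

def rank_facets_by_recall(facet_results, relevant):
    # facet_results: dict mapping a facet name to the documents retrieved by
    # the disjunction of all of its query terms
    recall = {f: len(docs & relevant) / len(relevant)
              for f, docs in facet_results.items()}
    return sorted(recall, key=recall.get, reverse=True)

facets = {"A": {1, 2, 3, 5}, "B": {2, 3}, "C": {1, 2, 3, 4, 6}}
print(rank_facets_by_recall(facets, relevant={1, 2, 3, 4}))   # ['C', 'A', 'B']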
2.4 Search for the Optimal Set of EQs
The size of the query tuning space increases exponentially as a
function of the number of EQs. We are obviously facing the risk
of combinatorial explosion since we do not know the upper
limit of query exhaustivity and, especially, query extent in
inclusive query plans. Solving the optimization problem by
blind search algorithms could lead to unmanageably long
running times. The search for the optimal set of EQs is an NP-hard
problem.
Harter [10] introduced a simple heuristic algorithm but he did
not define it formally. Query optimization resembles a
traditional integer programming case called the Knapsack
Problem. The problem is to fill a container with a set of items
so that the value of the cargo is maximized, and the weight limit
for the cargo is not exceeded [4]. The special case where each
item is selected once only (like EQs), is called the 0-1 Knapsack
Problem. Efficient approximation algorithms have been
developed to find a feasible lower bound for the optimum [17].
The problem of finding the optimal query from the query tuning
space can be formally defined by applying the definitions of the
Knapsack Problem as follows:
Select a set of EQs so as to maximize $\sum_{i=1}^{m} r_i x_i$
subject to $\sum_{i=1}^{m} n_i x_i \le \mathrm{DCV}_j$,
where $x_i = 1$ if $eq_i$ is selected and $x_i = 0$ otherwise,
and where $r_i$ and $n_i$ denote the number of relevant documents
and the total number of documents retrieved by $eq_i$.
The above definition of the optimization problem is in its
maximization version. The number of relevant documents is
maximized while the total number of retrieved documents is
restricted by the given DCVj. In the minimization version of the
problem, the goal is to minimize the total number of documents
while requiring that the number of relevant documents exceeds
some minimum value (a fixed recall level).
Unfortunately, standard algorithms designed for physical
objects would not work properly with EQs. Different EQs tend
to overlap and retrieve at least some joint documents. This
means that, in a disjunction of elementary queries, the profit ri
and the weight ni of the elementary query eqi have
dynamically changing effective values that depend on the EQs
selected earlier. The effect of overlap in a combination of
several query sets is hard to predict.
A simple heuristic procedure for an incremental construction of
the optimal queries was designed applying the notion of
efficiency list [17]. The maximization version of the algorithm
contains seven steps:
1. Remove all elementary queries eq_i
a) retrieving more documents than the upper limit for the
number of documents (i.e. n_i > residual document cut-off
value DCV', starting from DCV' = DCV_j), or
b) retrieving no relevant documents (r_i = 0).
2. Stop, if no elementary queries eqi are available.
3. Calculate the efficiency list using precision values ri/ni for
remaining m elementary queries and sort elementary queries
in order of descending efficiency. In the case of equal values,
use the number of relevant documents (ri) retrieved as the
second sorting criterion.
4. Move eq1 at the top of the efficiency list to the optimal
query.
5. Remove all documents retrieved by eq1 from the result sets
of remaining elementary queries eq2, ., eqm.
6. Calculate the new value for free space DCV'.
7. Continue from step one.
The basic algorithm favors narrowly formulated EQs retrieving
a few relevant documents with high precision at the expense of
broader queries retrieving many relevant documents with
medium precision. The problem can be reduced by running the
optimization in an alternative mode differing only in step four
of the first iteration round: eqi retrieving the largest set of
relevant documents is selected from the efficiency list instead of
eq1. The alternative mode is called the largest first optimization
and the basic mode the precision first optimization.
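The sketch below is one possible Python rendering of the maximization version of this heuristic, under the same assumption that query results are sets of document identifiers; the largest first mode differs only in how the first elementary query is picked:

def optimize(eqs, relevant, dcv, largest_first=False):
    # eqs: dict mapping an EQ name to the set of documents it retrieves
    remaining = {q: set(d) for q, d in eqs.items()}
    chosen, retrieved = [], set()
    first_round = True
    while True:
        free = dcv - len(retrieved)              # step 6: residual DCV'
        # step 1: drop EQs that no longer fit or retrieve no relevant documents
        cand = {q: d for q, d in remaining.items()
                if 0 < len(d) <= free and d & relevant}
        if not cand:                             # step 2: stop
            break
        if first_round and largest_first:
            best = max(cand, key=lambda q: len(cand[q] & relevant))
        else:
            # steps 3-4: precision first (r_i / n_i), ties broken by r_i
            best = max(cand, key=lambda q: (len(cand[q] & relevant) / len(cand[q]),
                                            len(cand[q] & relevant)))
        first_round = False
        retrieved |= remaining.pop(best)
        chosen.append(best)
        for d in remaining.values():             # step 5: remove retrieved docs
            d -= retrieved
    return chosen, retrieved

eqs = {"eq1": {1, 2}, "eq2": {1, 2, 3, 4, 9}, "eq3": {5, 6, 7, 8}}
print(optimize(eqs, relevant={1, 2, 3, 5}, dcv=6))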
3. A CASE EXPERIMENT
The goal of the case experiment was to elucidate the potential
uses of the proposed method, to clarify the types of research
questions that can be effectively solved by the method, and to
explicate the operational pragmatics of the method.
3.1 Research Questions
The case experiment focused on the mechanism of falling
effectiveness of Boolean queries in free-text searching of large-
full-text databases. The work was inspired by the debate
concerning the results of the STAIRS study [3, 22]. The goal
was to draw a more detailed picture of system performance and
optimal query structures in search situations typical of large
databases.
Assuming an ideally performing searcher, the main question
was: What is the difference in maximum performance of
Boolean queries between a small database and two types of
large databases? The large & dense database contained a larger
volume of documents than the small database but the density of
relevant documents (generality) was the same. In the large &
sparse database, both the volume of documents was higher and
the density of relevant documents was lower than in the small
database.
Twelve hypotheses were formulated concerning effectiveness,
exhaustivity and proportional query extent of queries in large
databases. For details, see [26].
3.2 Data and Methods
3.2.1 Optimization Algorithm
The optimization algorithm described in Section 2.5 was
programmed in C for Unix. Both a maximization version
exploiting a standard set of document cut-off values (DCV2,
DCV5,, DCV500) and a minimization version exploiting fixed
recall levels (R0.1R1.0) were implemented. At each SPO, the
iteration round (called optimization lap) was executed ten times
starting each round by selecting a different top EQ from the
efficiency list: five laps in the largest first mode, and five in the
precision first mode. The alternative results at a particular SPO
achieved by the algorithm in different optimization laps were
sorted to find the most optimal queries for further analysis.
3.2.2 Test Collection
The Finnish Full-Text Test Collection developed at the
University of Tampere was used in the case experiment [14].
The test database contains about 54,000 newspaper articles from
three Finnish newspapers. A set of 35 search topics are
available including verbal topic descriptions and relevance
assessments.
The test database is implemented for the TRIP retrieval
systems2. The test database played the role of the large & dense
database. Other databases, the small database and the large &
sparse database, were created through sampling from EQ result
sets. The large & sparse database was created by deleting about
% of the relevant documents, and the small database by
deleting about 80 % of all documents of the EQ result sets.
Thus, the EQ result sets for the small database contained the
same relevant documents as those for the large & sparse
database. Query optimization was done separately on these
three EQ data sets.
3.2.2.1 Inclusive Query Plans
The initial versions of inclusive query plans were designed by
an experienced search analyst working for three months on the
project. Query planning was an interactive process based on
thorough test queries and on the use of vocabulary sources.
Later parallel experiments (probabilistic queries) revealed that
the initial query plans failed to retrieve some relevant
documents. These documents were analyzed, and some new
query terms were added to represent the facets
comprehensively. The final inclusive query plans were capable
to retrieve 1270 (99,3 %) out of the 1278 known relevant
documents at exhaustivity level one.
In total, inclusive query plans contained 134 facets. The average
exhaustivity of query plans was 3.8 ranging from 2 to 5. The
total number of query terms identified was 2,330 (67 per query
plan and per facet). The number of terms ranged from 23 to
per query plan, and from 1 to 74 per facet. The wide
variation in the number of query terms per facet characterizes
the difference between specific concepts (e.g. named persons or
organizations) and general concepts (e.g., domains or
processes).
3.2.2.2 Data Collection and Analysis
Precision, query exhaustivity and query extent data were
collected for the optimal queries at SPOs. The sensitivity of
results to changes in search topic characteristics like the size of
a recall base, the number of facets identified, etc. were analyzed.
Also the searchable expressions referring to query plan facets
were identified in all relevant documents of a sample of test
topics to find explanations for the observed performance
differences. Statistical tests were applied to all major results.
3.3 Sample Results
Figures
3-5 summarize the comparisons between the small,
large & dense, and large & sparse databases: average precision,
exhaustivity and proportional extent of optimal queries at recall
levels R0.1-R1.0.3
The case experiment could reveal interesting performance
characteristics of Boolean queries in large databases. The
average precision across R0.1-R1.0 was about 13 % lower in the
2 TRIP by TietoEnator, Inc.
3 Proportional query extent (PQE) was measured only for high recall
and high precision searching because of research economical reasons.
PQE is the share of query terms actually used of the available terms
in inclusive query plans (average over facets).
Figure 3. Average precision at fixed recall levels in optimal queries for the small, large&dense and large&sparse databases.
Figure 4. Exhaustivity of high recall queries optimised for the small, large&dense and large&sparse databases.
Figure 5. Proportional query extent (PQE) of optimal queries in the small, large&dense, and large&sparse databases.
large & dense database (database size effect), and about 40 %
lower in the large & sparse database (database size + density
effect) than in the small database (see Fig 3). The average
exhaustivity of optimal queries was higher in the large databases
than in the small one, but the level of precision could not be
maintained. Proportional query extent was highest in the large
dense database suggesting that more query terms are needed
per facet when a larger number of documents have to be
retrieved.
Figure 6. The number of search topics where full recall can be achieved as a function of query exhaustivity in the small and large recall bases (18 topics in total).
A very interesting deviation was identified in the precision and
exhaustivity curves at the highest recall levels. In the large &
dense database, the precision and exhaustivity of optimal
queries fell dramatically between R0.9 and R1.0.
The results of the facet analysis of all relevant documents in a
sample of test topics clarified the role of the recall base size
in falling effectiveness at R1.0. The more documents need to be
retrieved to achieve full recall, the more there occur relevant
documents where some query plan facets are expressed
implicitly. The results are presented in Fig 6. For Exh=1 full
recall was possible in all but one test topic for both recall bases.
At higher exhaustivity levels, the number of test topics where
full recall is possible fell much faster in the large recall base.
Above results are just examples from the case study findings to
illustrate the potential uses of the proposed method. High
precision searching was also studied by applying DCVs as
standard points of operation. It turned out, for instance, that the
database size alone does not induce efficiency problems at low
DCVs. On the contrary, highest precision was achieved in the
large & dense database. It was also shown that earlier results
indicating the superiority of proximity operators over the AND
operator in high precision searching are invalid. Queries
optimized separately for both operators show similar average
performance. For details, see [26].
4. JUSTIFICATION OF THE METHOD
Evaluation methods should themselves be evaluated in regard to
appropriateness, validity, reliability, and efficiency [24, 29].
The appropriateness of a method was verified in the case study
by showing that new results could be gained. Validity,
reliability, and efficiency are more complex issues to evaluate.
The main concerns were directed at the unique operations:
inclusive query planning and query optimization.
4.1 Facet Selection Test
Three subjects having good knowledge of text retrieval and
indexing were asked to make a facet identification test using a
sample of 14 test topics. The results showed that the
exhaustivity of inclusive query plans used in the case
experiment was not biased downwards (enough exhaustivity
tuning space). The test also verified earlier results that the
consistency in the selection of query facets is high between
search experts.
4.2 Facet Representation Test
The facet analysis of all relevant documents in the sample of
search topics showed that the original query designer had
missed or neglected about one third of the available expressions
in the relevant documents. However, the effect of missed query
terms was regarded as marginal since their occurrences in
documents mostly overlapped with other expressions already
covered by the query plan. The effect was shown to be much
smaller than the effect of implicit expressions. In the interactive
query optimization test (see next section), precision was
observed to drop less than 4 %.
4.3 Interactive Query Optimization Test
The idea of the interactive query optimization test was to
replace the automatic optimization operation by an expert
searcher, and compare the achieved performance levels as well
as query structures. A special WWW-based tool, the IR Game
[27], designed for rapid analysis of query results was used in
this test. When interfaced to a laboratory test collection, the tool
offers immediate performance feedback at the level of
individual queries in the form of recall-precision curves, and a
visualization of actual query results. The searcher is able to
study, in a convenient and effortless way, the effects of query
changes.
An experienced searcher was recruited to run the interactive
query optimization test. A group of three control searchers were
used to test the overall capability of the test searcher. The test
searcher was working for a period of 1.5 months trying to find
optimal queries for the sample of test topics for which the
full data of facet analysis was available. In practice, the test
searcher did not face any time constraints.
The results showed that the algorithm was performing better
than or equally with the test searcher in 98 % out of the 198 test
cases. This can be regarded as an advantageous result for a first
version of a heuristic algorithm.
4.4 Efficiency of the Method
The investment in inclusive query planning was justified to be
reasonable in the context of a test collection. It was also shown
that the growth of running time of the optimization algorithm
can be characterized by O(n log n), and that it is manageable
for all EQ sets of finite size.
5. CONCLUSIONS AND DISCUSSION
The main goal of this study was to design, demonstrate and
evaluate a new evaluation method for measuring the
performance of Boolean queries across a wide operational
range. Three unique characteristics of the method help to
comprehend its potential:
1. Performance can be measured at any selected point across
the whole operational range, and different standard points of
operation (SPO) may be applied.
2. Queries under consideration estimate optimal performance at
each SPO, and query structures are free to change within the
defined query tuning space in search of the optimum.
3. The expertise of professional searchers could be brought into
a system-oriented evaluation framework in a controlled way.
The domain of the method can be characterized by
illustrating the kinds of research variables that can be
appropriately studied by applying the method. Query
precision, exhaustivity and extent are used as dependent
variables, and the standard points of operation as the control
variable. Independent variables may relate to:
1. documents (e.g. type, length, degree of relevance)
2. databases (e.g. size, density)
3. database indexes (e.g. type of indexing, linguistic
normalization of words)
4. search topics (e.g. complexity, broadness, type)
5. matching operations (e.g. different operators).
The proposed method offers clear advantages over traditional
evaluation methods. It helps to acquire new information about
the phenomena observed and challenge present findings because
it is more accurate (averaging at defined SPOs). The method is
also economical in experiments where a complex query tuning
space is studied. The query tuning space contains all potential
candidates for optimal queries, but data are collected only on
those queries that turn out to be optimal at a particular SPO.
The proposed method yielded two major innovations: inclusive
query planning, and query optimization. The former innovation
is more universal since it can be used both in Boolean as well as
in best match experiments, see [14]. The query optimization
operation in the proposed form is restricted to the Boolean IR
model since it presumes that the query results are distinct sets.
The inclusive query planning idea is easier to exploit since its
outcome, the representation of the available query tuning space,
can also be exploited in experiments on best-match IR systems.
Traditional test collections were provided with complete
relevance data. Inclusive query plans are a similar data set that
can be used in measuring ultimate performance limits of
different matching algorithms. Inclusive query plans help also
in categorizing test topics according to their properties, e.g.
complex vs. simple (exhaustivity tuning dimension), and broad
vs. narrow (extent tuning dimension). This opens a way to
create experimental settings that are more sensitive to
situational factors, the issue that has been raised in the
Boolean/best-match comparisons [11, 20].
6.
ACKNOWLEDGMENTS
I am grateful to my supervisor Kalervo Jrvelin, and to the
FIRE group: Heikki Keskustalo, Jaana Keklinen, and others.
7.
--R
Logic and Boolean algebra.
Retrieval Techniques.
An evaluation of retrieval effectiveness for a full-text document retrieval system
The Cranfield tests on index language devices.
Searcher's Selection of Search Keys.
Boolean
The First Text Retrieval Conference (TREC-1)
Online Information retrieval.
Search Term Combinations and Retrieval Overlap: A Proposed Methodology and Case Study.
An Evaluation of Interactive Boolean and Natural Language Searching with Online Medical Textbook.
Consistency in the selection of search
An Introduction to Algorithmic and Cognitive Approaches for Information Retrieval.
Information Retrieval Systems:
Information Retrieval Today.
Knapsack Problems.
The Medline Full-Text Project
Heuristic programming: Ill-structured problems
Freestyle vs. Boolean: A comparison of partial and exact match retrieval systems.
A new comparison between conventional indexing (MEDLARS) and automatic text processing (SMART).
--TR
An evaluation of retrieval effectiveness for a full-text document-retrieval system
Another look at automatic text-retrieval systems
Retrieval techniques
Knapsack problems: algorithms and computer implementations
The pragmatics of information retrieval experimentation, revisited
Natural language vs. Boolean query evaluation
Consistency in the selection of search concepts and search terms
An evaluation of interactive Boolean and natural language searching with an online medical textbook
Evaluation of evaluation in information retrieval
A deductive data model for query expansion
Freestyle vs. Boolean
Boolean search
Information Retrieval Experiment
Information Retrieval Today
Introduction to Modern Information Retrieval
Online Information Retrieval
--CTR
Eero Sormunen, Extensions to the STAIRS StudyEmpirical Evidence for the Hypothesised Ineffectiveness of Boolean Queries in Large Full-Text Databases, Information Retrieval, v.4 n.3-4, p.257-273, September-December 2001
Caroline M. Eastman , Bernard J. Jansen, Coverage, relevance, and ranking: The impact of query operators on Web search engine results, ACM Transactions on Information Systems (TOIS), v.21 n.4, p.383-411, October | structured queries;test collections;evaluation general;testing methodology |
345772 | Making Nondeterminism Unambiguous. | We show that in the context of nonuniform complexity, nondeterministic logarithmic space bounded computation can be made unambiguous. An analogous result holds for the class of problems reducible to context-free languages. In terms of complexity classes, this can be stated as NL/poly | Introduction
In this paper, we combine two very useful algorithmic techniques (the inductive
counting technique of [Imm88, Sze88] and the isolation lemma of [MVV87]) to
give a simple proof that two fundamental concepts in complexity theory coincide
in the context of nonuniform computation.
Unambiguous computation has been the focus of much attention over the
past three decades. Unambiguous context-free languages form one of the most
important subclasses of the class of context-free languages. The complexity class
UP was first defined and studied by Valiant [Val76]; a necessary precondition
for the existence of one-way functions is for P to be properly contained in UP
Supported in part by the DFG Project La 618/3-1 KOMET.
y Supported in part by NSF grant CCR-9509603. This work was performed while this
author was a visiting scholar at the Wilhelm-Schickard Institut für Informatik, Universität
Tübingen, supported by DFG grant TU 7/117-1
[GS88]. Although UP is one of the most intensely-studied subclasses of NP, it is
neither known nor widely-believed that UP contains any sets that are hard for
NP under any interesting notion of reducibility. (Although Valiant and Vazirani
showed that "Unique.Satisfiability" is hard for NP under probabilistic reductions
[VV86], the language Unique.Satisfiability is not in UP unless
Nondeterministic and unambiguous space-bounded computation have also
been the focus of much work in computer science. Nondeterministic logspace
(NL) captures the complexity of many natural computational problems. The
proof that NL is closed under complementation [Imm88, Sze88] answered the
long-standing open question of whether the complement of every context-sensitive
language is context-sensitive. It remains an open question if every context-sensitive
language has an unambiguous grammar. The unambiguous version of
NL, denoted UL, was first explicitly defined and studied in [BJLR92, AJ93]. A
language A is in UL if and only if there is a nondeterministic logspace machine
accepting A such that, for every x, M has at most one accepting computation
on input x.
Motivated in part by the question of whether a space-bounded analog of
the result of [VV86] could be proved, Wigderson [Wig94, GW96] proved the
inclusion NL/poly ⊆ ⊕L/poly. This is a weaker statement than NL ⊆ ⊕L,
which is still not known to hold. ⊕L is the class of languages A for which there
is a nondeterministic logspace bounded machine M such that x ∈ A if and only if
M has an odd number of accepting computation paths on input x. Given any
complexity class C, C/poly is the class of languages A for which there exists a
sequence of "advice strings" {α(n) | n ∈ N} and a language B ∈ C such that
x ∈ A if and only if (x, α(|x|)) ∈ B. Classes of the form C/poly provide a simple link
between (nonuniform) circuit complexity classes, and machine-based complexity
classes (such as P, NP, NL, ⊕L, etc.) that have natural characterizations in
terms of uniform circuit families.
(It is worth emphasizing that, in showing the equality UL/poly = NL/poly,
we must show that for every B in NL/poly, there is a nondeterministic logspace
machine M that never has more than one accepting path on any input, and
there is an advice sequence α(n) such that M(x, α(|x|)) accepts if and only if x ∈ B.
This is stronger than merely saying that there is an advice sequence
α(n) and a nondeterministic logspace machine such that M(x, α(|x|)) never has
more than one accepting path, and it accepts if and only if x ∈ B.)
In the proof of the main result of [Wig94, GW96], Wigderson observed that
a simple modification of his construction produces graphs in which the shortest
distance between every pair of nodes is achieved by a unique path. We will refer
to such graphs in the following as min-unique graphs. Wigderson wrote: "We
see no application of this observation." The proof of our main result is just such
an application.
The s-t connectivity problem takes as input a directed graph with two distinguished
vertices s and t, and determines if there is a path in the graph from s
to t. It is well-known that this is a complete problem for NL [Jon75].
The following lemma is implicit in [Wig94, GW96], but for completeness we
make it explicit here.
Lemma 2.1 There is a logspace-computable function f and a sequence of "ad-
vice strings" fff(n) j n 2 Ng (where jff(n)j is bounded by a polynomial in n)
with the following properties:
ffl For any graph G on n vertices, f(G;
ffl For each i, the graph G i has an s-t path if and only if G has an s-t path.
ffl If G has an s-t path then there is some i such that G i is a min-unique
graph.
Proof: We first observe that a standard application of the isolation lemma
technique of [MVV87] shows that, if each edge in G is assigned a weight in the
range [1, 4n^4] uniformly and independently at random, then with probability
at least 3/4, for any two vertices x and y such that there is a path from x to
y, there is only one path having minimum weight. (Sketch: The probability
that there is more than one minimum weight path from x to y is bounded by
the sum, over all edges e, of the probability of the event Bad(e, x, y) ::= "e
occurs on one minimum-weight path from x to y and not in another". Given
any weight assignment w' to the edges in G other than e, there is at most
one value z with the property that, if the weight of e is set to be z, then
Bad(e, x, y) occurs. Thus the probability that there are two minimum-weight
paths between two vertices is bounded by
$\sum_{x,y,e} \mathrm{Prob}(\mathrm{Bad}(e,x,y)) \le \sum_{x,y,e} \frac{1}{4n^4} \le \frac{1}{4}$.
Our advice string α will consist of a sequence of n² weight functions, where
each weight function assigns a weight in the range [1, 4n⁴] to each edge. (There
are A(n) such strings possible for each n.) Our logspace-
computable function f takes as input a graph G and a sequence of n² weight
functions, and produces as output a sequence of graphs ⟨G_1, G_2, ..., G_{n²}⟩, where
graph G_i is the result of replacing each edge e = (x, y) in G by a path of length
j from x to y, where j is the weight given to e by the i-th weight function in the
advice string. Note that, if the i-th weight function satisfies the property that
there is at most one minimum weight path between any two vertices, then G i
is a min-unique graph. (It suffices to observe that, for any two vertices x and y
of G_i, there are vertices x′ and y′ such that
• x′ and y′ are vertices of the original graph G, and they lie on every path between x and y,
• there is only one path from x to x′, and only one path from y′ to y, and
• the minimum-weight path from x′ to y′ is unique.)
Let us call an advice string "bad for G" if none of the graphs G_i in the
sequence f(G, α) is a min-unique graph. For each G, the probability that a
randomly-chosen advice string α is bad is bounded by (probability that G_i is
not min-unique)^{n²} ≤ (1/4)^{n²} = 2^{−2n²}. Thus the total number of advice strings
that are bad for some G is at most 2^{n²} · 2^{−2n²} · A(n) < A(n), where A(n) denotes
the total number of possible advice strings for length n. Thus there is some
advice string α(n) that is not bad.
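(For concreteness, the edge-subdivision step underlying the reduction f can be sketched as follows; the graph encoding and the tuple names for the internal path vertices are our own choices, not the paper's.)

def subdivide_by_weights(n, edges, weight):
    """Replace each edge e = (x, y) by a path of length weight[e] from x to y.

    Returns the vertex set and edge list of the resulting graph G_i.
    Original vertices keep their names; internal path vertices are tuples."""
    new_vertices = set(range(n))
    new_edges = []
    for (x, y) in edges:
        w = weight[(x, y)]
        prev = x
        for t in range(1, w):            # w - 1 internal vertices
            mid = ('e', x, y, t)
            new_vertices.add(mid)
            new_edges.append((prev, mid))
            prev = mid
        new_edges.append((prev, y))      # final hop into y
    return new_vertices, new_edges

# Example: a 3-cycle with weights 2, 1, 3.
if __name__ == '__main__':
    V, E = subdivide_by_weights(3, [(0, 1), (1, 2), (2, 0)],
                                {(0, 1): 2, (1, 2): 1, (2, 0): 3})
    print(len(V), 'vertices,', len(E), 'edges')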
Theorem 2.2 NL ⊆ UL/poly
Proof: It suffices to present a UL/poly algorithm for the s-t connectivity
problem.
We show that there is a nondeterministic logspace machine M that takes as
input a sequence of digraphs ⟨G_1, G_2, ..., G_{n²}⟩ and processes each G_i in sequence,
with the following properties:
• If G_i is not min-unique, M has a unique path that determines this fact
and goes on to process G_{i+1}; all other paths are rejecting.
ffl If G i is a min-unique graph with an s-t path, then M has a unique accepting
path.
ffl If G i is a min-unique graph with no s-t path, then M has no accepting
path.
Combining this routine with the construction of Lemma 2.1 yields the desired
UL/poly algorithm.
Our algorithm is an enhancement of the inductive counting technique of
[Imm88] and [Sze88]. We call this the double counting technique since in each
stage we count not only the number of vertices having distance at most k from
the start vertex, but also the sum of the lengths of the shortest path to each such
vertex. In the following description of the algorithm, we denote these numbers
by c k and \Sigma k , respectively.
Let us use the notation d(v) to denote the length of the shortest path in a
graph G from the start vertex to v. (If no such path exists, then d(v) = ∞.)
Thus, using this notation, c_k = |{x | d(x) ≤ k}| and Σ_k = Σ_{x: d(x) ≤ k} d(x).
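(For reference, c_k and Σ_k are of course easy to compute deterministically with breadth-first search; the whole point of the construction below is to compute them unambiguously in logarithmic space. The following Python sketch, with illustrative names only, just pins down what the two quantities are.)

from collections import deque

def distances(adj, s):
    # Breadth-first search: d[v] = length of the shortest s->v path (None if unreachable).
    d = {v: None for v in adj}
    d[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if d[v] is None:
                d[v] = d[u] + 1
                q.append(v)
    return d

def c_and_sigma(adj, s, k):
    # c_k = |{v : d(v) <= k}|, Sigma_k = sum of d(v) over those vertices.
    d = distances(adj, s)
    reached = [dv for dv in d.values() if dv is not None and dv <= k]
    return len(reached), sum(reached)

if __name__ == '__main__':
    adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
    print(c_and_sigma(adj, 0, 1))   # (3, 2): vertices 0, 1, 2 at distances 0, 1, 1
    print(c_and_sigma(adj, 0, 2))   # (4, 4)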
A useful observation is that if the subgraph of G having distance at most k
from the start vertex is min-unique (and if the correct values of c k and \Sigma k are
provided), then an unambiguous logspace machine can, on input (G; k; c k ; \Sigma k ; v),
compute the Boolean predicate "d(v) - k". This is achieved with the routine
shown in Figure 1.
To see that this routine truly is unambiguous if the preconditions are met,
note the following:
More precisely, our routine will check if, for every vertex x, there is at most one minimal-
length path from the start vertex to x. This is sufficient for our purposes. A straightforward
modification of our routine would provide an unambiguous logspace routine that will determine
if the entire graph G i is a min-unique graph.
Input (G, k, c_k, Σ_k, v)
count := 0; sum := 0; path.to.v := false;
for each x ∈ V do
    Guess nondeterministically if d(x) ≤ k.
    if the guess is d(x) ≤ k then
        begin
            Guess a path of length l ≤ k from s to x
                (If this fails, then halt and reject).
            count := count + 1; sum := sum + l;
            if x = v then path.to.v := true;
        end
endfor
if count = c_k and sum = Σ_k
    then return the Boolean value of path.to.v
    else halt and reject
end.procedure
Figure 1: An unambiguous routine to determine if d(v) ≤ k.
ffl If the routine ever guesses incorrectly for some vertex x that d(x) ? k,
then the variable count will never reach c k and the routine will reject.
Thus the only paths that run to completion guess correctly exactly the
set fx j d(x) - kg.
ffl If the routine ever guesses incorrectly the length l of the shortest path to
x, then if d(x) ? l no path of length l will be found, and if d(x) ! l then
the variable sum will be incremented by a value greater than d(x). In the
latter case, at the end of the routine, sum will be greater than \Sigma k , and
the routine will reject.
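(To see these two points in action on a toy instance, one can enumerate all guess combinations of the routine by brute force. The following Python sketch -- ours, purely illustrative -- counts the runs that survive the final count/sum check; on a min-unique graph with the correct c_k and Σ_k there is exactly one such run.)

from itertools import product

def walks(adj, s, x, k):
    """All s->x walks of length at most k, as vertex tuples."""
    found = []
    def dfs(u, path):
        if u == x:
            found.append(tuple(path))
        if len(path) - 1 == k:
            return
        for v in adj[u]:
            dfs(v, path + [v])
    dfs(s, [s])
    return found

def completed_runs(adj, s, k, c_k, sigma_k, v):
    """Enumerate every guess combination of the Figure-1 routine; return the
    path.to.v value of each run that passes the final count/sum check."""
    V = sorted(adj)
    options = []
    for x in V:
        opts = [None]                    # guess "d(x) > k"
        opts += walks(adj, s, x, k)      # or guess a witness walk
        options.append(opts)
    runs = []
    for choice in product(*options):
        count = sum(1 for w in choice if w is not None)
        total = sum(len(w) - 1 for w in choice if w is not None)
        if count == c_k and total == sigma_k:
            runs.append(choice[V.index(v)] is not None)
    return runs

if __name__ == '__main__':
    adj = {0: [1], 1: [2], 2: []}        # the path 0 -> 1 -> 2
    print(completed_runs(adj, 0, k=2, c_k=3, sigma_k=3, v=2))   # [True]: exactly one run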
Clearly, the subgraph having distance at most 0 from the start vertex is
min-unique, and c_0 = 1 and Σ_0 = 0. The inductive part of the construction involves
computing c_k and Σ_k from c_{k−1} and Σ_{k−1}, at the same time checking that the
subgraph having distance at most k from the start vertex is min-unique. It is
easy to see that c_k is equal to c_{k−1} plus the number of vertices having d(v) = k
(and Σ_k equals Σ_{k−1} plus k times that number). Note that d(v) = k only if it is
not the case that d(v) ≤ k − 1 and there is some edge (x, v) such that d(x) ≤ k − 1.
The graph fails to be a min-unique graph if and only if there exist some v and x
as above, as well as some other x′ ≠ x such that d(x′) ≤ k − 1 and there is an edge
(x′, v). The code shown in Figure 2 formalizes these considerations.
Searching for an s-t path in graph G is now expressed by the routine shown
in
Figure
3.
Given the sequence ⟨G_1, G_2, ..., G_{n²}⟩, the routine processes each G_i in turn. If
G_i is not min-unique (or more precisely, if the subgraph of G_i that is reachable
from the start vertex is not a min-unique graph), then one unique computation
path of the routine returns the value BAD.GRAPH and goes on to process G_{i+1};
Input (G, k, c_{k−1}, Σ_{k−1})
Output (c_k, Σ_k); also the flag BAD.GRAPH
c_k := c_{k−1}; Σ_k := Σ_{k−1};
for each vertex v do
    if not (d(v) ≤ k − 1) then
        for each x such that (x, v) is an edge do
            begin
                if d(x) ≤ k − 1 then
                    begin
                        c_k := c_k + 1; Σ_k := Σ_k + k;
                        for x′ ≠ x do
                            if (x′, v) is an edge and d(x′) ≤ k − 1
                                then BAD.GRAPH := true;
                    end
            end
        endfor
endfor
{At this point, the values of c_k and Σ_k are correct.}
Figure 2: Computing c_k and Σ_k.
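(A deterministic stand-in for the inductive step of Figure 2, useful as a sanity check; in the real construction the predicates d(x) ≤ k−1 are of course recomputed by the unambiguous routine of Figure 1 rather than read from a stored table. The encoding and names are ours.)

def update(adj, d_upto, k, c_prev, sigma_prev):
    """One inductive step: d_upto[v] = d(v) if d(v) <= k-1, else None.
    Returns (c_k, sigma_k, bad, table extended to level k); bad is set when
    some vertex at distance k has two predecessors at distance k-1,
    i.e. when the graph is not min-unique at this level."""
    c_k, sigma_k, bad = c_prev, sigma_prev, False
    new_d = dict(d_upto)
    for v in adj:
        if d_upto[v] is not None:
            continue                     # d(v) <= k-1 already
        preds = [x for x in adj if v in adj[x] and d_upto[x] is not None]
        if preds:
            c_k += 1
            sigma_k += k
            new_d[v] = k
            if len(preds) > 1:           # two shortest paths to v
                bad = True
    return c_k, sigma_k, bad, new_d

if __name__ == '__main__':
    adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
    d = {0: 0, 1: None, 2: None, 3: None}
    print(update(adj, d, 1, 1, 0)[:3])   # (3, 2, False); level 2 would set bad=True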
Input (G)
c_0 := 1; Σ_0 := 0; k := 0; BAD.GRAPH := false;
repeat
    k := k + 1;
    compute c_k and Σ_k from (c_{k−1}, Σ_{k−1})
until c_k = c_{k−1} or BAD.GRAPH = true
if BAD.GRAPH = false then there is an s-t path in G if and only if d(t) ≤ k.
Figure 3: Finding an s-t path in a min-unique graph.
all other computation paths halt and reject. Otherwise, if G i is min-unique, the
routine has a unique accepting path if G i has an s-t path, and if this is not the
case the routine halts with no accepting computation paths.
Corollary 2.3 NL/poly = UL/poly
3 LogCFL

LogCFL is the class of problems logspace-reducible to a context-free language.
Two important and useful characterizations of this class are summarized in
the following proposition. (SAC 1 and AuxPDA(log n; n O(1) ) are defined in the
following paragraphs.)
Proposition 3.1 LogCFL = SAC¹ = AuxPDA(log n, n^{O(1)}).
An Auxiliary Pushdown Automaton (AuxPDA) is a nondeterministic Turing
machine with a read-only input tape, a space-bounded worktape, and a
pushdown store that is not subject to the space-bound. The class of languages
accepted by Auxiliary Pushdown Automata in space s(n) and time t(n) is denoted
by AuxPDA(s(n); t(n)). If an AuxPDA satisfies the property that, on
every input x, there is at most one accepting computation, then the AuxPDA
is said to be unambiguous. This gives rise to the class UAuxPDA(s(n); t(n)).
SAC 1 is the class of languages accepted by logspace-uniform semi-unbounded
circuits of depth O(log n); a circuit family is semi-unbounded if the AND gates
have fan-in 2 and the OR gates have unbounded fan-in.
Not long after NL was shown to be closed under complementation [Imm88,
Sze88], LogCFL was also shown to be closed under complementation in a proof
that also used the inductive counting technique ([BCD 89]). A similar history
followed a few years later: not long after it was shown that NL is contained
in \PhiL/poly [Wig94, GW96], the isolation lemma was again used to show that
LogCFL is contained in \PhiSAC 1 /poly [G'al95, GW96]. (As is noted in [GW96],
this was independently shown by H. Venkateswaran.)
In this section, we show that the same techniques that were used in Section
2 can be used to prove an analogous result about LogCFL. (In fact, it would
also be possible to derive the result of Section 2 from a modification of the proof
of this section. Since some readers may be more interested in NL than LogCFL,
we have chosen to present a direct proof of NL/poly = UL/poly.) The first
step is to state the analog to Lemma 2.1. Before we can do that, we need some
definitions.
A weighted circuit is a semiunbounded circuit together with a weighting
function that assigns a nonnegative integer weight to each wire connecting any
two gates in the circuit.
Let C be a weighted circuit, and let g be a gate of C. A certificate for
g(x) = 1 (in C) is a list of gates, corresponding to a depth-first search of
the subcircuit of C rooted at g. The weight of a certificate is the sum of the
weights of the edges traversed in the depth-first search. This informal definition
is made precise by the following inductive definition. (It should be noted that
this definition differs in some unimportant ways from the definition given in
[G'al95, GW96].)
ffl If g is a constant 1 gate or an input gate evaluating to 1 on input x, then
the only certificate for g is the string g. This certificate has weight 0.
• If g is an AND gate of C with inputs h_1 and h_2 (where h_1 lexicographically
precedes h_2), then any string of the form gyz is a certificate for g, where y
is any certificate for h_1, and z is any certificate for h_2. If w_i is the weight
of the edge connecting h_i to g, then the weight of the certificate gyz is
w_1 + w_2 plus the sum of the weights of the certificates y and z.
ffl If g is an OR gate of C, then any string of the form gy is a certificate for
g, where y is any certificate for a gate h that is an input to g in C. If w is
the weight of the edge connecting h to g, then the weight of the certificate
gy is w plus the weight of certificate y.
Note that if C has logarithmic depth d, then any certificate has length bounded
by a polynomial in n and has weight bounded by 2^d times the maximum weight
of any edge. Every gate that evaluates to 1 on input x has a certificate, and no
gate that evaluates to 0 has a certificate.
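(To make the certificate-weight definition concrete, the following Python sketch -- with our own circuit encoding, not the paper's -- computes the minimum certificate weight W(g) of every gate by a single bottom-up pass, assigning ∞ to gates that evaluate to 0.)

import math

def min_certificate_weights(gates, x):
    """gates: dict name -> spec, where spec is one of
         ('one',), ('input', i), ('not-input', i),
         ('and', (h1, w1), (h2, w2)), ('or', [(h, w), ...]).
       Children must be declared before parents (bottom-up order).
       Returns W: name -> minimum certificate weight (math.inf if gate is 0 on x)."""
    W = {}
    for g, spec in gates.items():
        kind = spec[0]
        if kind == 'one':
            W[g] = 0
        elif kind == 'input':
            W[g] = 0 if x[spec[1]] == 1 else math.inf
        elif kind == 'not-input':
            W[g] = 0 if x[spec[1]] == 0 else math.inf
        elif kind == 'and':
            (h1, w1), (h2, w2) = spec[1], spec[2]
            W[g] = W[h1] + W[h2] + w1 + w2
        else:  # 'or'
            W[g] = min((W[h] + w for (h, w) in spec[1]), default=math.inf)
    return W

if __name__ == '__main__':
    gates = {
        'x0': ('input', 0), 'x1': ('input', 1),
        'a':  ('and', ('x0', 2), ('x1', 3)),
        'o':  ('or', [('a', 1), ('x0', 7)]),
    }
    print(min_certificate_weights(gates, [1, 1]))  # a -> 5, o -> 6
    print(min_certificate_weights(gates, [1, 0]))  # a -> inf, o -> 7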
We will say that a weighted circuit C is min-unique on input x if, for every
gate g that evaluates to 1 on input x, the minimal-weight certificate for g(x) = 1
is unique.
Lemma 3.2 For any language A in LogCFL, there is a sequence of advice
strings α(n) (having length polynomial in n) with the following properties:
• Each α(n) is a list of weighted circuits of logarithmic depth ⟨C_1, C_2, ..., C_n⟩.
• For each input x and for each i, x ∈ A if and only if C_i(x) = 1.
• For each input x, if x ∈ A, then there is some i such that C_i is min-unique on input x.
Lemma 3.2 is in some sense implicit in [G'al95, GW96]. We include a proof
for completeness.
Proof: Let A be in LogCFL, and let C be the semiunbounded circuit of size n^k
(for some constant k) and logarithmic depth recognizing A on inputs of length n.
As in [G'al95, GW96], a modified application of the isolation lemma technique
of [MVV87] shows that, for each input x, if each wire in C is assigned a
weight in the range [1, 4n^{3k}] uniformly and independently at random, then with
probability at least 3/4, C is min-unique on input x. (Sketch: The probability
that there is more than one minimum-weight certificate for g(x) = 1 is bounded
by the sum, over all wires e, of the probability of the event Bad(e, g) ::= "e
occurs in one minimum-weight certificate for g(x) = 1 and not in another".
Given any weight assignment w 0 to the edges in C other than e, there is at
most one value z with the property that, if the weight of e is set to be z, then
Bad(e, g) occurs. Thus the probability that there are two minimum-weight
certificates for some gate in C is bounded by
    Σ_{g,e} Prob[Bad(e, g)] ≤ Σ_{g,e} 1/(4n^{3k}) ≤ n^k · n^{2k} · (1/(4n^{3k})) = 1/4.)
Now consider sequences β consisting of n weight functions ⟨w_1, ..., w_n⟩,
where each weight function assigns a weight in the range [1, 4n^{3k}] to each edge
of C. (There are B(n) such sequences possible for each n.) There must
exist a string fi such that, for each input x of length n, there is some i - n such
that the weighted circuit C i that results by applying weight function w i to C is
min-unique on input x. (Sketch of proof: Let us call a sequence fi "bad for x"
if none of the circuits C_i in the sequence is min-unique on input x. For each x,
the probability that a randomly-chosen β is bad is bounded by (probability that
C_i is not min-unique)^n ≤ (1/4)^n = 2^{−2n}. Thus the total number of sequences
that are bad for some x is at most 2^n · 2^{−2n} · B(n) < B(n). Thus there is some
sequence β that is not bad.)
The desired advice sequence α(n) = ⟨C_1, ..., C_n⟩ is formed by taking a
good sequence β = ⟨w_1, ..., w_n⟩ and letting C_i be the result of applying weight
function w_i to C.
Theorem 3.3 LogCFL ' UAuxPDA(log n; n O(1) )/poly.
Proof: Let A be a language in LogCFL. Let x be a string of length n, and let
⟨C_1, ..., C_n⟩ be the advice sequence guaranteed by Lemma 3.2.
We show that there is an unambiguous auxiliary pushdown automaton M
that runs in polynomial time and uses logarithmic space on its worktape that,
given a sequence of circuits as input, processes each circuit in turn, and has the
following properties:
• If C_i is not min-unique on input x, then M has a unique path that determines
this fact and goes on to process C_{i+1}; all other paths are rejecting.
ffl If C i is min-unique on input x and evaluates to 1 on input x, then M has
a unique accepting path.
ffl If C i is min-unique on input x but evaluates to zero on input x, then M
has no accepting path.
Our construction is similar in many respects to that of Section 2. Given a
circuit C, let c_k denote the number of gates g that have a certificate for g(x) = 1
of weight at most k, and let Σ_k be the sum, over all gates g having a certificate
for g(x) = 1 of weight at most k, of the weight of the minimum-weight certificate of g.
(Let W(g) denote the weight of the minimum-weight certificate of g(x) = 1 if such
a certificate exists, and let this value be ∞ otherwise.)
A useful observation is that if all gates of C having certificates of weight
at most k have unique minimal-weight certificates (and if the correct values
of c_k and Σ_k are provided), then an unambiguous AuxPDA can, on input
(C, x, k, c_k, Σ_k, g), compute the Boolean value of the predicate "W(g) ≤
k". This is achieved with the routine shown in Figure 4.
Input (C, x, k, c_k, Σ_k, g)
count := 0; sum := 0; a := ∞;
for each gate h do
    Guess nondeterministically if W(h) ≤ k.
    if the guess is W(h) ≤ k then
        begin
            Guess a certificate of weight l ≤ k for h
                (If this fails, then halt and reject).
            count := count + 1; sum := sum + l;
            if h = g then a := l;
        end
endfor
if count = c_k and sum = Σ_k
    then return a
    else halt and reject
end.procedure
Figure 4: An unambiguous routine to calculate W(g) if W(g) ≤ k, and return ∞ otherwise.
To see that this routine truly is unambiguous if the preconditions are met,
note the following:
ffl If the routine ever guesses incorrectly for some gate h that W (h) ? k,
then the variable count will never reach c k and the routine will reject.
Thus the only paths that run to completion guess correctly exactly the
set fh j W (h) - kg.
ffl For each gate h such that W (h) - k, there is exactly one minimal-weight
certificate that can be found. An UAuxPDA will find this certificate using
its pushdown to execute a depth-first search (using nondeterminism at the
gates, and using its O(log n) workspace to compute the weight of the
certificate), and only one path will find the minimal-weight certificate.
If, for some gate h, a certificate of weight greater than W (h) is guessed,
then the variable sum will not be equal to \Sigma k at the end of the routine,
and the path will halt and reject.
Clearly, all gates at the input level have unique minimal-weight certificates
(and the only gates g with W(g) = 0 are at the input level). Thus we can set
c_0 = n + 1 (assuming that each input bit and its negation are provided, along with the
constant 1), and Σ_0 = 0. The inductive part of the construction involves computing
c_k and Σ_k from (c_{k−1}, Σ_{k−1}), at the same time checking that no gate has two
minimal-weight certificates of weight k. Consider each gate g in turn. If g is an
AND gate with inputs h_1 and h_2 and weights w_1 and w_2 connecting g to these
inputs, then W(g) ≤ k if and only if W(h_1) + W(h_2) + w_1 + w_2 ≤ k. If g
is an OR gate, then it suffices to check, for
each gate h that is connected to g by an edge of weight w, whether W(h) + w ≤ k.
If one such gate is found, then W(g) ≤ k;
if two such gates h ≠ h′ are found with W(h) + w = W(h′) + w′ = W(g), then the
circuit is not min-unique on
input x. If no violations of this sort are found for any k, then C is min-unique
on input x. The code shown in Figure 5 formalizes these considerations.
Input (C, x, k, c_{k−1}, Σ_{k−1})
Output (c_k, Σ_k); also the flag BAD.CIRCUIT
c_k := c_{k−1}; Σ_k := Σ_{k−1};
for each gate g with not (W(g) ≤ k − 1) do
    begin
        if g is an AND gate with inputs h_1, h_2 connected to g
                with edges weighted w_1, w_2 then
            if W(h_1) + W(h_2) + w_1 + w_2 = k then
                begin c_k := c_k + 1; Σ_k := Σ_k + k; end
        if g is an OR gate then
            for each h connected to g by an edge weighted w do
                begin
                    if W(h) + w = k then
                        begin
                            c_k := c_k + 1; Σ_k := Σ_k + k;
                            for h′ ≠ h connected to g by an edge of weight w′ do
                                if W(h′) + w′ = k then BAD.CIRCUIT := true;
                        end
                endfor
    endfor
{At this point, if BAD.CIRCUIT = false, the values of c_k and Σ_k are correct.}
Figure 5: Computing c_k and Σ_k.
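(Continuing in the same spirit, a deterministic analogue of Figure 5: given the table of minimum certificate weights, it is straightforward to recover c_k, Σ_k and the BAD.CIRCUIT flag. Only OR gates can cause ambiguity, since an AND gate's minimal certificate is determined by its children's. Encoding and names are ours.)

import math

def counts_and_flag(gates, W, k):
    """Given the minimum certificate weights W (gate -> weight, math.inf if none),
    return c_k = |{g : W(g) <= k}|, sigma_k = sum of W(g) over those gates, and
    bad = True if some OR gate has two children attaining its minimum weight."""
    c_k = sum(1 for g in gates if W[g] <= k)
    sigma_k = sum(W[g] for g in gates if W[g] <= k)
    bad = False
    for g, spec in gates.items():
        if spec[0] == 'or' and W[g] <= k:
            attain = [h for (h, w) in spec[1] if W[h] + w == W[g]]
            if len(attain) > 1:
                bad = True
    return c_k, sigma_k, bad

if __name__ == '__main__':
    # OR gate 'o' has two children both giving weight 4 -> not min-unique.
    gates = {'x0': ('input', 0), 'x1': ('input', 1),
             'o': ('or', [('x0', 4), ('x1', 4)])}
    W = {'x0': 0, 'x1': 0, 'o': 4}
    print(counts_and_flag(gates, W, 4))   # (3, 4, True)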
Evaluating a given circuit C i is now expressed by the routine shown in Figure
6.
Given the sequence ⟨C_1, ..., C_n⟩, the routine processes each C_i in turn. If
C_i is not min-unique on input x, then one unique computation path of the
routine returns the value BAD.CIRCUIT and goes on to process C_{i+1}; all other
computation paths halt and reject. Otherwise, the routine has a unique accepting
path if C_i(x) = 1, and if this is not the case the routine halts with no
accepting computation paths.
Corollary 3.4 LogCFL/poly = UAuxPDA(log n; n O(1) )/poly.
Input (C_i, x)
c_0 := n + 1; Σ_0 := 0; BAD.CIRCUIT := false;
for k := 1 to the maximum possible certificate weight do
    compute (c_k, Σ_k) from (c_{k−1}, Σ_{k−1});
    if BAD.CIRCUIT = true, then exit the for loop.
endfor
if BAD.CIRCUIT = false then
    {if the output gate g evaluates to 1, then it has
     a unique minimal-weight certificate of some weight l}
    Accept if and only if W(g) < ∞.
Figure 6: Evaluating a circuit.
4 Discussion and Open Problems
Rytter [Ryt87] (see also [RR92]) showed that any unambiguous context-free
language can be recognized in logarithmic time by CREW-PRAM. In contrast,
no such CREW algorithm is known for any problem complete for NL, even
in the nonuniform setting. The problem is that, although NL is the class of
languages reducible to linear context-free languages, and although the class of
languages accepted by deterministic AuxPDAs in logarithmic space and polynomial
time coincides with the class of languages logspace-reducible to deterministic
context-free languages, and LogCFL coincides with AuxPDA(log n; n O(1) ),
it is not known that UAuxPDA(log n; n O(1) ) or UL is reducible to unambiguous
context-free languages. The work of Niedermeier and Rossmanith does an
excellent job of explaining the subtleties and difficulties here [NR95]. CREW
algorithms are closely associated with a version of unambiguity called strong
unambiguity. In terms of Turing-machine based computation, strong unambiguity
means that, not only is there at most one path from the start vertex to
the accepting configuration, but in fact there is at most one path between any
two configurations of the machine.
Problems with strongly unambiguous algorithms admit more efficient algorithms than are
known for general NL or UL problems. It is shown in [AL96] that problems in
strongly unambiguous logspace have deterministic algorithms using less than
log² n space.
The reader is encouraged to note that, in a min-unique graph, the shortest
path between any two vertices is unique. This bears a superficial resemblance to
the property of strong unambiguity. We see no application of this observation.
It is natural to ask if the randomized aspect of the construction can be
eliminated using some sort of derandomization technique to obtain the equality NL = UL.
A corollary of our work is that UL/poly is closed under complement. It
remains an open question if UL is closed under complement, although some of
the unambiguous logspace classes that can be defined using strong unambiguity
are known to be closed under complement [BJLR92].
It is disappointing that the techniques used in this paper do not seem to
provide any new information about complexity classes such as NSPACE(n) and
USPACE(n). It is straightforward to show that NSPACE(s(n)) is contained in
USPACE(s(n))/2^{O(s(n))}, but this is interesting only for sublinear s(n).
There is a natural class of functions associated with NL, denoted FNL [AJ93].
This can be defined in several equivalent ways, such as
ffl The class of functions computable by NC 1 circuits with oracle gates for
problems in NL.
• The class of functions f such that {(x, i, b) : the i-th bit of f(x) is b} is in NL.
ffl The class of functions computable by logspace-bounded machines with
oracles for NL.
Another important class of problems related to NL is the class #L, which counts
the number of accepting paths of a NL machine. #L characterizes the complexity
of computing the determinant [Vin91]. (See also [Tod, Dam, MV97, Val92,
AO96].) It was observed in [AJ93] that if then FNL is contained in
#L. Thus a corollary of the result in this paper is that FNL/poly ' #L/poly.
Many questions about #L remain unanswered. Two interesting complexity
classes related to #L are PL (probabilistic logspace) and C=L (which characterizes
the complexity of singular matrices, as well as questions about computing
the rank). It is known that some natural hierarchies defined using these complexity
classes collapse (for instance, the PL hierarchy collapses to PL).
In contrast, the corresponding #L hierarchy, AC⁰(#L) (equal to the class of problems
AC⁰-reducible to computing the determinant), is not known
to collapse to any fixed level. Does the equality UL/poly = NL/poly provide
any help in analyzing this hierarchy in the nonuniform setting?
Acknowledgment
We thank Klaus-Jörn Lange for helpful comments, for drawing our attention
to min-unique graphs, and for arranging for the second
author to spend some of his sabbatical time in Tübingen. We also thank V.
Vinay and Lance Fortnow for insightful comments.
--R
The complexity of matrix rank and feasible systems of linear equations.
Relationships among PL
Two applications of inductive counting for complementation problems.
Circuits over PP and PL.
Unambiguity and fewness for logarithmic space.
Complexity measures for public-key cryptosystems
Boolean vs. arithmetic complexity classes: randomized reductions.
Nondeterministic space is closed under complement.
Space bounded reducibility among combinatorial problems.
A combinatorial algorithm for the determinant.
Matching is as easy as matrix inversion.
Unambiguous auxiliary push-down automata and semi-unbounded fan-in circuits
The PL hierarchy collapses.
Observations on log n time parallel recognition of unambiguous context-free languages
Parallel time O(logn) recognition of unambiguous context-free languages
On the tape complexity of deterministic context-free languages
The method of forced enumeration for nondeterministic automata.
Counting problems computationally equivalent to the determinant.
The relative complexity of checking and evaluating.
Why is Boolean complexity theory difficult?
Counting auxiliary pushdown automata and semi-unbounded arithmetic circuits.
NP is as easy as detecting unique solutions.
NL/poly ⊆ ⊕L/poly.
--TR
--CTR
Martin Sauerhoff, Approximation of boolean functions by combinatorial rectangles, Theoretical Computer Science, v.301 n.1-3, p.45-78, 14 May
Allender , Meena Mahajan, The complexity of planarity testing, Information and Computation, v.189 n.1, p.117-134, 25 February 2004
Allender, NL-printable sets and nondeterministic Kolmogorov complexity, Theoretical Computer Science, v.355 n.2, p.127-138, 11 April 2006 | ULOG;NLOG;nondeterministic space;unambiguous computation;LogCFL |
345773 | A Combinatorial Consistency Lemma with Application to Proving the PCP Theorem. | The current proof of the probabilistically checkable proofs (PCP) theorem (i.e., ${\cal NP}={\cal PCP}(\log,O(1))$) is very complicated. One source of difficulty is the technically involved analysis of low-degree tests. Here, we refer to the difficulty of obtaining strong results regarding low-degree tests; namely, results of the type obtained and used by Arora and Safra [J. ACM, 45 (1998), pp. 70--122] and Arora et al. [J. ACM, 45 (1998), pp. 501--555].In this paper, we eliminate the need to obtain such strong results on low-degree tests when proving the PCP theorem. Although we do not remove the need for low-degree tests altogether, using our results it is now possible to prove the PCP theorem using a simpler analysis of low-degree tests (which yields weaker bounds). In other words, we replace the strong algebraic analysis of low-degree tests presented by Arora and Safra and Arora et al. by a combinatorial lemma (which does not refer to low-degree tests or polynomials). | Introduction
The characterization of NP in terms of Probabilistically Checkable Proofs (PCP systems) [AS,
ALMSS], hereafter referred to as the PCP Characterization Theorem, is one of the more fundamental
achievements of complexity theory. Loosely speaking, this theorem states that membership in
any NP-language can be verified probabilistically by a polynomial-time machine which inspects
a constant number of bits (in random locations) in a "redundant" NP-witness. Unfortunately,
the current proof of the PCP Characterization Theorem is very complicated and, consequently, it
has not been assimilated into complexity theory. Clearly, changing this state of affairs is highly
desirable.
There are two things which make the current proof (of the PCP Characterization Theorem)
difficult. One source of difficulty is the complicated conceptual structure of the proof (most notably
the acclaimed 'proof composition' paradigm). Yet, with time, this part seems easier to understand
and explain than when it was first introduced. Furthermore, the Proof Composition Paradigm
turned out to be very useful and played a central role in subsequent works in this area (cf., [BGLR,
BS, BGS, H96]). The other source of difficulty is the technically involved analysis of low-degree
tests. Here we refer to the difficulty of obtaining strong results regarding low-degree tests; namely,
results of the type obtained and used in [AS] and [ALMSS].
In this paper, we eliminate the latter difficulty. Although we do not get rid of low-degree
tests altogether, using our results it is now possible to prove the PCP Characterization Theorem
using only the weaker and simpler analysis of low-degree tests presented in [GLRSW, RS92, RS96].
In other words, we replace the complicated algebraic analysis of low-degree tests presented in
[AS, ALMSS] by a combinatorial lemma (which does not refer to low-degree tests or even to
polynomials). We believe that this combinatorial lemma is very intuitive and find its proof much
simpler than the algebraic analysis of [AS, ALMSS]. (However, simplicity may be a matter of
taste.)
Loosely speaking, our combinatorial lemma provides a way of generating sequences of pairwise
independent random points so that any assignment of values to the sequences must induce consistent
values on the individual elements. This is obtained by a "consistency test" which samples a constant
number of sequences. We stress that the length of the sequences as well as the domain from which
the elements are chosen are parameters, which may grow while the number of samples remains
fixed.
1.1 Two Combinatorial Consistency Lemmas
The following problem arises frequently when trying to design PCP systems, and in particular when
proving the PCP Characterization Theorem. For some sets S and V , one has a procedure, which
given (bounded) oracle access to any function f : S 7! V , tests if f has some desired property.
Furthermore, in case f is sufficiently bad (i.e., far from any function having the property), the test
detects this with "noticeable" probability. For example, the function f may be the proof-oracle
in a basic PCP system which we want to utilize (as an ingredient in the composition of PCP
systems). The problem is that we want to increase the detection probability (equivalently, reduce
the error probability) without increasing the number of queries, although we are willing to allow
more informative queries. For example, we are willing to allow queries in which one supplies a
sequence of elements in S and expects to obtain the corresponding sequence of values of f on these
elements. The problem is that the sequences of values obtained may not be consistent with any
We can now phrase a simple problem of testing consistency. One is given access to a function
F : S^ℓ → V^ℓ and is asked whether there exists a function f : S → V so that for most sequences
(r_1, ..., r_ℓ) ∈ S^ℓ, it holds that F(r_1, ..., r_ℓ) = (f(r_1), ..., f(r_ℓ)).
Loosely speaking, we prove that querying F on a constant number of related random sequences
suffices for testing a relaxion of the above. That is,
Lemma 1.1 (combinatorial consistency - simple case): For every δ > 0, there exists a constant
c = poly(1/δ) and a probabilistic oracle machine, T, which on input (ℓ, |S|) runs for poly(ℓ · log |S|)-time and makes at most c queries to an oracle F : S^ℓ → V^ℓ, such that
• If there exists a function f : S → V such that F(r_1, ..., r_ℓ) = (f(r_1), ..., f(r_ℓ)) for every
(r_1, ..., r_ℓ) ∈ S^ℓ, then T always accepts when given access to oracle F.
• If T accepts with probability at least 1/2 when given access to oracle F, then there exists a
function f : S → V such that, for at least a 1 − δ fraction of all possible (r_1, ..., r_ℓ) ∈ S^ℓ,
the sequences F(r_1, ..., r_ℓ) and (f(r_1), ..., f(r_ℓ)) agree on at least (1 − δ) · ℓ positions.
Specifically, the test examines the values of the function F on random pairs of sequences
((r_1, ..., r_ℓ), (s_1, ..., s_ℓ)) that satisfy r_i = s_i for a random subset of the i's, and checks
that the corresponding values (on these r_i's and s_i's) are indeed equal. For details see Section 4.
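(A rough executable rendering of this kind of pairwise test; the exact way the two sequences are correlated and the number of repetitions are illustrative choices of ours, not the parameters fixed in Section 4.)

import random

def simple_consistency_test(F, S, ell, repetitions=10):
    """F maps an ell-tuple over S to an ell-tuple of values.
    Each repetition draws two random ell-sequences that agree on a random
    half of the positions and checks that F gives equal values there."""
    for _ in range(repetitions):
        shared = random.sample(range(ell), ell // 2)
        r = [random.choice(S) for _ in range(ell)]
        s = [random.choice(S) for _ in range(ell)]
        for i in shared:
            s[i] = r[i]
        Fr, Fs = F(tuple(r)), F(tuple(s))
        if any(Fr[i] != Fs[i] for i in shared):
            return False
    return True

if __name__ == '__main__':
    f = lambda e: e % 7                      # a consistent underlying function
    F = lambda seq: tuple(f(e) for e in seq)
    print(simple_consistency_test(F, list(range(100)), ell=16))   # True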
Unfortunately, this relatively simple consistency lemma does not suffice for the PCP
applications. The reason is that, in that application, error reduction (see above) is done via
randomness-efficient procedures such as pairwise-independent sequences (since we cannot afford
to utilize ℓ · log₂|S| random bits as above). Consequently, the function F is not defined on the
entire set S^ℓ but rather on a very sparse subset, denoted S̄. Thus, one is given access to a function
F : S̄ → V^ℓ and is asked whether there exists a function f : S → V so that for most sequences
(r_1, ..., r_ℓ) ∈ S̄, the sequences F(r_1, ..., r_ℓ) and (f(r_1), ..., f(r_ℓ)) agree on most (contiguous)
subsequences of length √ℓ. The main result of this paper is
Lemma 1.2 (combinatorial consistency - sparse case): For every two integers s, ℓ > 1, there
exists a set S_{s,ℓ} ⊆ [s]^ℓ, where [s] def= {1, 2, ..., s}, so that the following holds:
1. For every δ > 0, there exists a constant c = poly(1/δ) and a probabilistic oracle machine, T,
which on input (ℓ, s) runs for poly(ℓ · log s)-time and makes at most c queries to an oracle
F : S_{s,ℓ} → V^ℓ, such that
• If there exists a function f : [s] → V such that F(r_1, ..., r_ℓ) = (f(r_1), ..., f(r_ℓ)) for every
(r_1, ..., r_ℓ) ∈ S_{s,ℓ}, then T always accepts when given access to oracle F.
• If T accepts with probability at least 1/2 when given access to oracle F, then there exists a
function f : [s] → V such that, for at least a 1 − δ fraction of all possible (r_1, ..., r_ℓ) ∈ S_{s,ℓ},
the sequences F(r_1, ..., r_ℓ) and (f(r_1), ..., f(r_ℓ)) agree on at least a 1 − δ fraction of the
(contiguous) subsequences of length √ℓ.
2. The individual elements in a uniformly selected sequence in S_{s,ℓ} are uniformly distributed in
[s] and are pairwise independent. Furthermore, the set S_{s,ℓ} has cardinality poly(s) and can
be constructed in time poly(s, ℓ).
Specifically, the test examines the values of the function F on related random pairs of sequences
in S_{s,ℓ}. These sequences are viewed as √ℓ × √ℓ matrices, and, loosely
speaking, they are chosen to be random extensions of the same random row (or column). For
details see Section 2.
In particular, the presentation in Section 2 axiomatizes properties of the set of sequences, S s;' ,
for which the above tester works. Thus, we provide a "parallel repetition theorem" which holds for
random but non-independent instances (rather than for independent random instances as in other
such results). However, our "parallel repetition theorem" applies only to the case where a single
query is asked in the basic system (rather than a pair of related queries as in other results). Due to
this limitation, we could not apply our "parallel repetition theorem" directly to the error-reduction
of generic proof systems. Instead, as explained below, we applied our "parallel repetition theorem"
to derive a relatively strong low-degree test from a weaker low-degree test.
We believe that the combinatorial consistency lemma of Section 2 may play a role in subsequent
developments in the area.
1.2 Application to the PCP Characterization Theorem
The currently known proof of the PCP Characterization Theorem [ALMSS] composes proof systems
in which the verifier makes a constant number of multi-valued queries. Such verifiers are constructed
by "parallelization" of simpler verifiers, and thus the problem of "consistency" arises. This problem
is solved by use of low-degree multi-variant polynomials, which in turn requires "high-quality" low-degree
testers. Specifically, given a function f : GF(p) n 7! GF(p), where p is prime, one needs to
test if f is close to some low-degree polynomial (in n variables over the finite field GF(p)). It is
required that any function f which disagrees with every d-degree polynomial on at least (say) 1%
of the inputs be rejected with (say) probability 99%. The test is allowed to use auxiliary proof
oracles (in addition to f) but it may only make a constant number of queries and the answers must
have length bounded by poly(n; d; log p). Using a technical lemma due to Arora and Safra [AS],
Arora et. al. [ALMSS] proved such a result. 1 The full proof is quite complex and is algebraic in
nature. A weaker result due to Gemmel et. al. [GLRSW] (see [RS96]) asserts the existence of a
d-degree test which, using d + 2 queries, rejects such bad functions with probability at least some fixed (small) constant.
Their proof is much simpler. Combining the result of Gemmel et. al. [GLRSW, RS96] with our
combinatorial consistency lemma (i.e., Lemma 1.2), we obtain an alternative proof of the following
result
Lemma 1.3 (low-degree tester): For every δ > 0, there exists a constant c and a probabilistic
oracle machine, T, which on input n, p, d runs for poly(n, d, log p)-time and makes at most c queries
to both f and to an auxiliary oracle F, such that
• If f is a degree-d polynomial, then there exists a function F so that T always accepts.
• If T accepts with probability at least 1/2 when given access to the oracles f and F, then f
agrees with some degree-d polynomial on at least a 1 − δ fraction of the domain. 2
We stress that in contrast to [ALMSS] our proof of the above lemma is mainly combinatorial. Our
only reference to algebra is in relying on the result of Gemmel et. al. [GLRSW, RS96] (which is
An improved analysis was later obtained by Friedl and Sudan [FS].
Actually, [ALMSS] only prove agreement on an (arbitrarily large) constant fraction of the domain.
weaker and has a simpler proof than that of [ALMSS]). Our tester works by performing many (pair-
wise independent) instances of the [GLRSW] test in parallel, and by guaranteeing the consistency
of the answers obtained in these tests via our combinatorial consistency test (i.e., of Lemma 1.2).
In contrast, prior to our work, the only way to guarantee the consistency of these answers resulted
in the need to perform a low-degree test of the type asserted in Lemma 1.3 (and using [ALMSS],
which was the only alternative known, this meant losing the advantage of utilizing a low-degree
test with a simpler algebraic analysis).
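(For concreteness, one standard form of the [GLRSW, RS96] line test referred to here checks that the (d+1)-st finite difference of f along a random line vanishes; the following Python sketch over a prime field uses our own encoding and parameters.)

import random
from math import comb

def line_test(f, n, p, d, trials=20):
    """f: function from n-tuples over GF(p) to GF(p).
    For any polynomial of total degree at most d,
      sum_{i=0}^{d+1} (-1)^i * C(d+1, i) * f(x + i*h) = 0
    for every x, h; the test checks this identity on random lines."""
    coeff = [(-1) ** i * comb(d + 1, i) for i in range(d + 2)]
    for _ in range(trials):
        x = [random.randrange(p) for _ in range(n)]
        h = [random.randrange(p) for _ in range(n)]
        pts = [tuple((x[j] + i * h[j]) % p for j in range(n)) for i in range(d + 2)]
        if sum(c * f(pt) for c, pt in zip(coeff, pts)) % p != 0:
            return False
    return True

if __name__ == '__main__':
    p, d = 101, 3
    poly = lambda v: (v[0] ** 3 + 2 * v[0] * v[1] + 5) % p   # total degree 3
    print(line_test(poly, n=2, p=p, d=d))                    # True
    far = lambda v: pow(v[0], 4, p)                          # degree 4 > d
    print(line_test(far, n=2, p=p, d=d))                     # False (except with tiny probability)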
1.3 Related work
We refrain from an attempt to provide an account of the developments which have culminated in
the PCP Characterization Theorem. Works which should certainly be mentioned include [GMR,
BGKW, FRS, LFKN, S90, BFL, BFLS, FGLSS, AS, ALMSS] as well as [BF, BLR, LS, RS92]. For
detailed accounts see surveys by Babai [B94] and Goldreich [G96].
Hastad's recent work [H96] contains a combinatorial consistency lemma which is related to our
Lemma 1.1 (i.e., the "simple case" lemma). However, Hastad's lemma refers to the case where the
test accepts with very low probability and so its conclusion is weaker (though harder to establish).
Raz and Safra [RaSa] claim to have been inspired by our Lemma 1.2 (i.e., the "sparse case" lemma).
Organization
The (basic) "sparse case" consistency lemma is presented in Section 2. The application to the
PCP Characterization Theorem is presented in Section 3. Section 4 contains a proof of Lemma 1.1
(which refers to sequences of totally independent random points).
Remark: This write-up reports work completed in the Spring of 1994, and announced at the
Weizmann Workshop on Randomness and Computation (January 1995).
2 The Consistency Lemma (for the sparse case)
In this section we present our main result; that is, a combinatorial consistency lemma which refers
to sequences of bounded independence. Specifically, we consider k²-long sequences viewed as
k-by-k matrices. To emphasize the combinatorial nature of our lemma and its proof, we adopt
an abstract presentation in which the properties required from the set of matrices are explicitly
stated (as axioms). We comment that the set of all k-by-k matrices over S satisfies these axioms.
A more important case is given in Construction 2.3: It is based on a standard construction of
pairwise-independent sequences (i.e., the matrix is a pairwise-independent sequence of rows, where
each row is a pairwise-independent sequence of elements).
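(One standard construction in this spirit -- a sketch under our own parameter choices, not necessarily identical to Construction 2.3 -- takes S = Z_p for a prime p ≥ k and sets entry(i, j) = (a_0 + a_1·i) + (b_0 + b_1·i)·j mod p for four random seeds. Each row is then a pairwise-independent sequence, the rows are generated by a pairwise-independent sequence of seeds, and the individual entries are uniform and pairwise independent; whether the remaining axioms below (e.g., closure of R under shifts) hold for this particular choice would have to be checked separately.)

import random

def random_pi_matrix(p, k):
    """Sample a k-by-k matrix over Z_p whose individual entries are uniform
    and pairwise independent: entry(i, j) = A(i) + B(i)*j (mod p), where
    A(i) = a0 + a1*i and B(i) = b0 + b1*i for random seeds a0, a1, b0, b1."""
    assert k <= p
    a0, a1, b0, b1 = (random.randrange(p) for _ in range(4))
    return [[((a0 + a1 * i) + (b0 + b1 * i) * j) % p for j in range(k)]
            for i in range(k)]

if __name__ == '__main__':
    for row in random_pi_matrix(p=13, k=4):
        print(row)

(Note that the seed has length 4 log p bits, so the resulting multi-set of matrices has size p⁴, i.e., polynomial in s = p, in line with Part 2 of Lemma 1.2.)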
General Notation. For a positive integer k, let [k] def= {1, ..., k}. For a finite set A, the notation
a ∈_R A means that a is uniformly selected in A. In case A is a multiset, each element is selected
with probability proportional to its multiplicity.
2.1 The Setting
Let S be some finite set, and let k be an integer. Both S and k are parameters, yet they will be
implicit in all subsequent notations.
Rows and Columns. Let R be a multi-set of sequences of length k over S so that every e 2 S
appears in some sequence of R. For sake of simplicity, think of R as being a set (i.e., each sequence
appears with multiplicity 1). Similarly, let C be another set of sequences (of length k over S). We
neither assume that R = C nor assume that they are disjoint. We consider matrices having rows in R and columns in C
(thus, we call the members of R row-sequences, and those in C column-sequences). We denote by
M a multi-set of k-by-k matrices with rows in R and columns in C. Namely,
Axiom 1 For every m ∈ M and every i ∈ [k], the i-th row of m is an element of R and the i-th column
of m is an element of C.
For every i ∈ [k] and r̄ ∈ R, we denote by M_i(r̄) the set of matrices (in M) having r̄ as the i-th
row. Similarly, for j ∈ [k] and c̄ ∈ C, we denote by M^j(c̄) the set of matrices (in M) having c̄
as the j-th column. For every i, j ∈ [k], r̄ ∈ R and c̄ ∈ C, we denote by M_i^j(r̄, c̄) the set of
matrices having r̄ as the i-th row and c̄ as the j-th column (i.e., M_i^j(r̄, c̄) = M_i(r̄) ∩ M^j(c̄)).
Shifts. We assume that R is "closed" under the shift operator. Namely,
Axiom 2 For every r̄ = (r_1, ..., r_k) ∈ R, there exists a unique s̄ = (s_1, ..., s_k) ∈ R such that
s_{i+1} = r_i for i = 1, ..., k − 1. We denote this right-shifted sequence by σ(r̄). Similarly, we assume
that there exists a unique s̄ = (s_1, ..., s_k) ∈ R such that s_i = r_{i+1} for i = 1, ..., k − 1. We denote
this left-shifted sequence by σ^{−1}(r̄). Furthermore,³ we assume that shifting each of the rows of a
matrix m ∈ M in the same direction yields a matrix m′ that is also in M.
We stress that we do not assume that C is "closed" under shifts (in an analogous manner). For
every (positive) integer i, the notations oe i (-r) and oe \Gammai (-r) are defined in the natural way.
Distribution. We now turn to axioms concerning the distribution of rows and columns in a
uniformly chosen matrix. We assume that the rows (and columns) of a uniformly chosen matrix
are uniformly distributed in R (and C, respectively). 4 In addition, we assume that the rows (but
not necessarily the columns) are also pairwise independent. Specifically,
Axiom 3 Let m be uniformly selected in M. Then,
1. For every i 2 [k], the i th column of m is uniformly distributed in C.
2. For every i 2 [k], the i th row of m is uniformly distributed in R.
3. Furthermore, for every j 6= i and - r 2 R, conditioned that the i th row of m equals -
r, the j th
row of m is uniformly distributed over R.
Finally, we assume that the columns in a uniformly chosen matrix containing a specific row-sequence
are distributed identically to uniformly selected columns with the corresponding entry. A formal
statement is indeed in place.
Axiom 4 For every i, j ∈ [k] and r̄ = (r_1, ..., r_k) ∈ R, the j-th column in a matrix that is uniformly
selected among those having r̄ as its i-th row (i.e., m ∈_R M_i(r̄)) is uniformly distributed among the
column-sequences that have r_j as their i-th element.
3 The extra axiom is not really necessary; see remark following the definition of the consistency test.
4 This, in fact, implies Axiom 1.
Clearly, if the j-th element of r̄ differs from the i-th element of c̄ then M_i^j(r̄, c̄) is empty,
whereas if r_j = c_i then, by the above axiom, M_i^j(r̄, c̄) is not empty. Furthermore, the above
axiom implies that (in case r_j = c_i)
    |M_i^j(r̄, c̄)| / |M_i(r̄)| = Prob_{m ∈_R M_i(r̄)}( the j-th column of m equals c̄ ) = 1 / |C_i(r_j)|,
where C_i(e) denotes the set of column-sequences having e as their i-th element. (The second equality
is obtained by Axiom 4.)
2.2 The Test
Let Γ be a function assigning matrices in M (which may be a proper subset of all possible k-by-k
matrices over S) values which are k-by-k matrices over some set of values V (i.e., Γ : M → V^{k×k}).
The function \Gamma is supposed to be "consistent" (i.e., assign each element, e, of S the same value,
independently of the matrix in which e appears). The purpose of the following test is to check that
this property holds in some approximate sense.
Construction 2.1 (Consistency Test):
1. column test: Select a column-sequence c̄ uniformly in C and i, j ∈_R [k]. Select two random
extensions of this column, namely m_1 ∈_R M^i(c̄) and m_2 ∈_R M^j(c̄), and test if the i-th column
of Γ(m_1) equals the j-th column of Γ(m_2).
2. row test (analogous to the column test): Select a row-sequence r̄ uniformly in R and
i, j ∈_R [k]. Select two random extensions of this row, namely m_1 ∈_R M_i(r̄) and m_2 ∈_R M_j(r̄),
and test if the i-th row of Γ(m_1) equals the j-th row of Γ(m_2).
3. shift test: Select a matrix m uniformly in M and an integer t ∈_R [k − 1]. Let m′ be the matrix
obtained from m by shifting each row by t; namely, the i-th row of m′ is σ^t(r̄), where r̄ denotes
the i-th row of m. We test if the first k − t columns of Γ(m) match the last k − t columns of
Γ(m′).
The test accepts if all three (sub-)tests succeed.
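(A sketch of Construction 2.1 as code. The three sampling helpers stand for whatever sampling access one has to the multi-set M; they are assumptions of this sketch, not objects defined in the paper.)

def consistency_test(Gamma, k, sample_shift_pair,
                     sample_row_extensions, sample_column_extensions):
    """Gamma(m) returns a k-by-k matrix of values (list of k rows of length k).
    sample_column_extensions() -> (m1, i, m2, j) where column i of m1 and
        column j of m2 are the same column-sequence of M.
    sample_row_extensions() -> (m1, i, m2, j), analogously for rows.
    sample_shift_pair() -> (m, m_shifted, t) with every row of m_shifted equal
        to the t-right-shift of the corresponding row of m."""
    # 1. Column test.
    m1, i, m2, j = sample_column_extensions()
    if [Gamma(m1)[r][i] for r in range(k)] != [Gamma(m2)[r][j] for r in range(k)]:
        return False
    # 2. Row test.
    m1, i, m2, j = sample_row_extensions()
    if Gamma(m1)[i] != Gamma(m2)[j]:
        return False
    # 3. Shift test: first k - t value-columns of m must reappear shifted by t.
    m, m_shifted, t = sample_shift_pair()
    vals, vals_shifted = Gamma(m), Gamma(m_shifted)
    for r in range(k):
        if vals[r][:k - t] != vals_shifted[r][t:]:
            return False
    return True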
Remark: Actually, it suffices to use a seemingly weaker test in which the row-test and shift-test
are combined into the following generalized row-test:
Select a row-sequence r̄ uniformly in R, integers i, j ∈_R [k] and t ∈_R {0, 1, ..., k − 1}.
Select a random extension of this row and of its shift, namely m_1 ∈_R M_i(r̄) and
m_2 ∈_R M_j(σ^{−t}(r̄)), and test if the (k − t)-long suffix of the i-th row of Γ(m_1) equals the
(k − t)-long prefix of the j-th row of Γ(m_2).
Our main result asserts that Construction 2.1 is a "good consistency test": Not only that almost
all entries in almost all matrices are assigned in a consistent manner (which would have been
obvious), but all entries in almost all rows of almost all matrices are assigned in a consistent
manner.
Lemma 2.2 Suppose M satisfies Axioms 1-4. Then, for every constant δ > 0, there exists a
constant ε > 0 so that the following holds: if Γ passes the consistency test with probability at
least 1 − ε, then there exists a function τ : S → V so that, with probability at least 1 − δ, the value
assigned by Γ to a uniformly chosen matrix matches the values assigned by τ to the elements of a
uniformly chosen row in this matrix. Namely,
    Prob_{i,m}( ∃ j so that entry_{i,j}(Γ(m)) ≠ τ(entry_{i,j}(m)) ) ≤ δ,
where i ∈_R [k] and m ∈_R M. The constant ε does not depend on k and S. Furthermore, it is
polynomially related to δ.
As a corollary, we get Part (1) of Lemma 1.2. Part (2) follows from Proposition 2.4 (below).
2.3 Proof of Lemma 2.2
As a motivation towards the proof of Lemma 2.2, consider the following mental experiment. Let
m ∈ M be an arbitrary matrix and e be its (i, j)-th entry. First, uniformly select a random matrix,
denoted m_1, containing the i-th row of m. Next, uniformly select a random matrix, denoted m_2,
containing the j th column of m 1 . The claim is that m 2 is uniformly distributed among the matrices
containing the element e. Thus, if \Gamma passes items (1) and (2) in the consistency test then it must
assign consistent values to almost all elements in almost all matrices. Yet, this falls short of even
proving that there exists an assignment which matches all values assigned to the elements of some
row in some matrix. Indeed, consider a function \Gamma which assigns 0 to all elements in the first fflk
columns of each matrix and 1's to all other elements. Clearly, \Gamma passes the row-test with probability
1 and the column-test with probability greater than there is no - so that for a
random matrix the values assigned by \Gamma to some row match - . It is easy to see that the shift-test
takes care of this special counter-example. Furthermore, it may be telling to see what is wrong with
some naive arguments. A main issue these arguments tend to ignore is that for an "adversarial"
choice of \Gamma and a candidate choice of - : S 7! V , we have no handle on the (column) location of
the elements in a random matrix on which - disagrees with \Gamma. The shift-test plays a central role
in getting around this problem; see subsection 2.3.2 and Claim 2.2.14 (below).
Recommendation: The reader may want to skip the proofs of all claims in first reading. We
believe that all the claims are quite believable, and that their proofs (though slightly tedious in
some cases) are quite straightforward. In contrast, we believe that the ideas underlying the proof of
the lemma are to be found in its high level structure; namely, the definitions and the claims made.
Notation: The following notation will be used extensively throughout the proof. For a k-by-k
matrix, m, we denote by row_i(m) the i-th row of m and by col_j(m) the j-th column of m. Restating
the conditions of the lemma, we have (from the hypothesis that Γ passes the column test)
    Prob_{c̄,i,j,m_1,m_2}( col_i(Γ(m_1)) = col_j(Γ(m_2)) ) ≥ 1 − ε                      (1)
where c̄, i, j, m_1 and m_2 are uniformly selected in the corresponding sets (i.e., c̄ ∈_R C,
i, j ∈_R [k], m_1 ∈_R M^i(c̄) and m_2 ∈_R M^j(c̄)). Similarly, from the hypothesis that Γ passes the row test, we have
    Prob_{r̄,i,j,m_1,m_2}( row_i(Γ(m_1)) = row_j(Γ(m_2)) ) ≥ 1 − ε                      (2)
where r̄ ∈_R R, i, j ∈_R [k], m_1 ∈_R M_i(r̄) and m_2 ∈_R M_j(r̄). It will be convenient to extend the shift
notation to matrices in the obvious manner; namely, σ^t(m) is defined as the matrix m′ satisfying
row_i(m′) = σ^t(row_i(m)) for every i ∈ [k]. From the hypothesis that Γ passes the shift-test, we
obtain
    Prob_{m,t}( the first k − t columns of Γ(m) equal the last k − t columns of Γ(σ^t(m)) ) ≥ 1 − ε   (3)
where m ∈_R M and t ∈_R [k − 1]. Finally, denoting by entry_{i,j}(m) the (i, j)-th entry in the matrix
m, we restate the conclusion of the lemma as follows
    Prob_{i,m}( ∃ j so that entry_{i,j}(Γ(m)) ≠ τ(entry_{i,j}(m)) ) ≤ δ                  (4)
where i ∈_R [k] and m ∈_R M.
2.3.1 Stable Rows and Columns - Part 1
For each r̄ ∈ R and ᾱ ∈ V^k, we denote by p_r̄(ᾱ) the probability that Γ assigns to the row-sequence
r̄ the value-sequence ᾱ; namely,
    p_r̄(ᾱ) def= Prob_{i,m}( row_i(Γ(m)) = ᾱ ),   where i ∈_R [k] and m ∈_R M_i(r̄).
Eq. (2) implies that for almost all row-sequences there is a
"typical" sequence of values; see Claim 2.2.3 (below).
Definition 2.2.1 (consensus): The consensus of a row-sequence r̄ ∈ R, denoted con(r̄), is defined
as the value ᾱ for which p_r̄(ᾱ) is maximum. Namely, con(r̄) is the (lexicographically first)
value-sequence ᾱ for which p_r̄(ᾱ) = max_{β̄}{p_r̄(β̄)}.
Definition 2.2.2 (stable sequences): Let ε_2 def= √ε. We say that the row-sequence r̄ is stable if
p_r̄(con(r̄)) ≥ 1 − ε_2. Otherwise, we say that r̄ is unstable.
Clearly, almost all row-sequences are stable. That is,
Claim 2.2.3: All but at most an ε_2 fraction of the row-sequences are stable.
proof: For each fixed r̄ we have
    Prob_{i,j,m_1,m_2}( row_i(Γ(m_1)) = row_j(Γ(m_2)) ) = Σ_{ᾱ} p_r̄(ᾱ)² ≤ p_r̄^max,
where i, j ∈_R [k], m_1 ∈_R M_i(r̄), m_2 ∈_R M_j(r̄), and p_r̄^max def= max_{ᾱ}{p_r̄(ᾱ)}. Taking the
expectation over r̄ ∈_R R, and using Eq. (2), we get
    E_r̄( p_r̄^max ) ≥ 1 − ε.
Using Markov's Inequality, we get
    Prob_r̄( p_r̄^max < 1 − ε_2 ) ≤ ε/ε_2 = ε_2,
and the claim follows. 2
(-ff)g. Using Markov Inequality, we get
and the claim follows. 2
By definition, almost all matrices containing a particular stable row-sequence assign this row-
sequence the same sequence of values (i.e., its consensus value). We say that such matrices are
conforming for this row-sequence.
Definition 2.2.4 (conforming matrices): A matrix m ∈ M is called i-conforming
(or conforming for row-position i) if Γ assigns the i-th row of m its consensus value; namely, if
row_i(Γ(m)) = con(row_i(m)). Otherwise, the matrix is called i-non-conforming (or non-conforming
for row-position i).
Claim 2.2.5: The probability that, for a uniformly chosen i ∈ [k] and m ∈ M, the matrix m is
i-non-conforming is at most 2ε_2. Furthermore, the bound holds also if we require that the i-th
row of m is stable.
proof: The stronger bound (on probability) equals the sum of the probabilities of the following
two events. The first event is that the i th row of the matrix is unstable; whereas the second event
is that the i th row of the matrix is stable and yet the matrix is i-non-conforming. To bound the
probability of the first event (by ffl 2 ), we fix any i 2 [k] and combine Axiom 3 with Claim 2.2.3. To
bound the probability of the second event, we fix any stable - r and use the definition of a stable
row. 2
Remark: Clearly, an analogous treatment can be applied to column-sequences. In the sequel, we
freely refer to the above notions and to the above claims also when discussing column-sequences.
2.3.2 Stable Rows - Part 2 (Shifts)
Now we consider the relation between the consensus of row-sequences and the consensus of their
shifts. By a short shift of the row-sequence - r, we mean any row-sequence - obtained
with 1)g. Our aim is to show that the consensus (as well as stability) is
usually preserved under short shifts.
Definition 2.2.6 (very-stable row): Let ε_4 ≥ ε_2 be a suitable threshold (again polynomially
related to ε). We say that a row-sequence r̄ is very stable if it is stable, and for all but an ε_4
fraction of d ∈ {±1, ..., ±(k − 1)}, the row-sequence σ^d(r̄) is also stable.
Clearly,
2.2.7 All but at most an ffl 4 fraction of the row-sequence are very-stable.
proof: By a simple counting argument. 2
Definition 2.2.8 (super-stable row): Let ε_6 ≥ ε_4 be a suitable threshold (again polynomially
related to ε). We say that a row-sequence r̄ is super-stable if it is very-stable, and, for every
j ∈ [k], the following holds: for all but an ε_6 fraction of the t ∈ [k], the row-sequence σ^{t−j}(r̄)
is stable and con_t(σ^{t−j}(r̄)) = con_j(r̄), where con_j(r̄) is the j-th element of con(r̄).
Note that the t-th element of σ^{t−j}(r̄) is r_j. Thus, a row-sequence is super-stable if the
consensus value of each of its elements is preserved under almost all (short) shifts.
2.2.9 All but at most an ffl 6 fraction of the row-sequence are super-stable.
proof: We start by proving that almost all row-sequences and almost all their shifts have approximately
matching statistics, where the statistics vector of -
r 2 R is defined as the k-long sequence
(of
- r (\Delta), so that p j
r (v) is the probability that \Gamma assigns the value v to the j th
element of the row - r. Namely,
(-r). By the definition of consensus, we know that for every stable
row-sequence - r 2 R, we have
its shift
are stable and have approximately matching statistics (i.e., the corresponding
statistics sub-vectors are close) then their consensus must match (i.e., the corresponding
subsequences of the consensus are equal).
subclaim 2.2.9.1: For all but an ffl 5 fraction of the row-sequences -
r, all but an ffl 5 fraction of the shifts
proof of subclaim: Let pref row i;j (m) denote the j-long prefix of row i (m) and suff row i;j (m) its j-long
suffix. By the shift-test (see Eq. (3) and
Prob m;i;d (pref row i;k\Gammad (\Gamma(m)) =suff row i;k\Gammad (\Gamma(m 0 Using Axiom 3 (Part 2) and an averaging
argument, we get that all but a ffl 5 fraction of the - r 2 R, and all but a ffl 5 fraction of d 2 [k \Gamma 1],
Prob i;m (pref row i;k\Gammad (\Gamma(m)) =suff row i;k\Gammad (\Gamma(m 0
We fix such a pair - r and d, thus fixing also
A matrix-pairs (m; m 0 ) for which the equality holds contributes equally to the (appropriate
long portion of the) the statistic vectors of the row-sequences -
r and - s. The contribution of matrix-
pairs for which equality does not hold, to the difference
(v)j, is at most 2
per each relevant j. The subclaim follows. 3
As a corollary we get
2.2.9.2: Let us call a row-sequence, - r, infective if for every j 2 [k] all but an 2ffl 5 fraction
of the t 2 [k] satisfy
all but a 2ffl 5 fraction of the
row-sequences are infective.
proof of subclaim: The proof is obvious but yet confusing. We say that
r is fine1 if for all but an ffl 5
fraction of the d 2 [k] and for every d, we have
oe d (-r)
r is fine1
then for every j there are at most ffl 5 k positions t 2 fj+1; :::; kg so that
oe t\Gammaj (-r) (v)j ? 2ffl 5 .
Similarly, -
r is fine2 if for all but an ffl 5 fraction of the d 2 [k] and for every j ? d we have
oe \Gammad (-r)
(v)j - 2ffl 5 , and whenever - r is fine2 then for every j there are at most ffl 5 k
positions so that
oe \Gammaj+t (-r) (v)j ? 2ffl 5 . Thus, if a row-sequence -
r is
both fine1 and fine2 then for every j 2 [k] all but a 2ffl 1 fraction of the positions t 2 [k] satisfy
oe t\Gammaj (-r) (v)j - 2ffl 5 . By subclaim 2.2.9.1, we get that all but an ffl 5 fraction of the row-
sequences are fine1. A similar statement holds for fine2 (since the shift-test can be rewritten as
selecting setting Combining all these trivialities, the
Clearly, a row-sequence -
r that is both very-stable and infective satisfies, for every j 2 [k] and all
but at most ffl 4 of the t 2 [k], both
and in particular for
It follows that p t
must hold. Thus,
such an -
r is super-stable. Combining the lower bounds given by Claim 2.2.7 and subclaim 2.2.9.2,
the current claim follows (actually, we get a better bound; i.e., ffl 4
Summary
. Before proceeding let us summarize our state of knowledge. The key definitions
regarding row-sequences are of stable, very-stable and super-stable row-sequences (i.e., Defs 2.2.2,
2.2.6, and 2.2.8, respectively). Recall that a stable row-sequence is assigned the same value in
almost all matrices in which it appear. Furthermore, most prefixes (resp., suffices) of a super-stable
row-sequence are assigned the same values in almost all matrices containing these portions (as part
of some row). Regarding matrices, we defined a matrix to be i-conforming if it assigns its i th row
the corresponding consensus value (i.e., it conforms with the consensus of that row-sequence); cf.,
Definitions 2.2.4 and 2.2.1. We have seen that almost all row-sequences are super-stable and that
almost all matrices are conforming for most of their rows. Actually, we will use the latter fact with
respect to columns; that is, almost all matrices are conforming for most columns (cf., Claim 2.2.5
and the remark following it).
2.3.3 Deriving the Conclusion of the Lemma
We are now ready to derive the conclusion of the Lemma. Loosely speaking, we claim that the
function - , defined so that -(e) is the value most frequently assigned (by \Gamma) to e, satisfies Eq. (4).
Actually, we use a slightly different definition for the function - .
Definition 2.2.10 (the function τ): For a column-sequence c̄, we denote by con_i(c̄) the value that
con(-c) assigns to the i th element in - c. We denote by C i (e) the set of column-sequences having e
as the i-th component. Let q_e(v) denote the probability that the consensus of a uniformly chosen
column-sequence containing e assigns to e the value v, and define τ(e) to be a value v that
maximizes q_e(v), with ties broken arbitrarily.
Assume, on the contrary to our claim, that Eq. (4) does not hold (for this τ). Namely, for a
uniformly chosen i ∈_R [k] and m ∈_R M, the following holds with probability greater than δ:
    ∃ j so that entry_{i,j}(Γ(m)) ≠ τ(entry_{i,j}(m))                                  (5)
The notion of an annoying row-sequence, defined below, plays a central role in our argument. Using
the above (contradiction) hypothesis, we first show that many row-sequences are annoying. Next,
we show that lower bounds on the number of annoying row-sequences translate to lower bounds on
the probability that a uniformly chosen matrix is non-conforming for a uniformly chosen column
position. This yields a contradiction to Claim 2.2.5.
Definition 2.2.11 (row-annoying elements): The j-th position (equivalently, the element r_j) is said to be
annoying for the row-sequence r̄ if the j-th element in con(r̄) differs from τ(r_j). A row-sequence r̄ is
r if the j th element in con(-r) differs from -(r j ). A row-sequence - r is
said to be annoying if - r contains an element that is annoying for it.
Using Claim 2.2.9, we get
2.2.12 Suppose that Eq. (4) does not hold (for -). Then, at least a
fraction
of the row-sequences are both super-stable and annoying.
proof: Axiom 3 (part 2) is extensively used throughout this proof (with no explicit reference).
Combining Eq. (5) and Claim 2.2.9, with probability at least uniformly chosen
satisfies the following
1. there exists a j so that -(entry i;j (m)) is different from entry i;j (\Gamma(m));
2. row i (m) is super-stable;
3. matrix m is i-conforming; i.e., entry i;j (\Gamma(m)) equals con j (row i (m)), for every j 2 [k].
Combining conditions (1) and (3), we get that e = entry i;j (m) is annoying for the i th row of m.
The current claim follows. 2
A key observation is that each stable row-sequence which is annoying yields many matrices which
are non-conforming for the "annoying column position" (i.e., for the column position containing
the element which annoys this row-sequence). Namely,
2.2.13 Suppose that a row-sequence - stable and that r j is annoying for - r.
Then, at least a fraction of the matrices, containing the row-sequence -
r, are non-conforming
for column-position j.
We stress that the row-sequence -
r in the above claim is not necessarily very-stable (let alone super-
stable).
proof: Let us denote by v the value assigned to r j by the consensus of - r (i.e., v
r it follows that v is different from -(r j ). Consider the probability space defined
by uniformly selecting
r is stable it follows that in almost all of these
matrices the value assigned to r j by the matrix equals v. Namely,
(-r). By Axiom 4, the j th column of m is uniformly distributed in
thus we may replace - c 2R C i (r j ) by the j th column of m 2R M i (-r). Now, using the
definition of the function - and the accompanying notations, we get
(-r). The inequality holds since v 6= -(r j ) and by - 's definition
Combining Eq. (6) and (7), we get
and the claim follows. 2
Another key observation is that super-stable row-sequences which are annoying have the property of
"infecting" almost all their shifts with their annoying positions, and thus spreading the "annoyance"
over all column positions. Namely,
2.2.14 Suppose that a row-sequence - r is both super-stable and annoying. In particular,
suppose that the j th element of - r (i.e., r j ) is annoying for - r. Then, for all but at most an
ffl 6 fraction of the t 2 [k], the row-sequence - s, obtained from - r by shifting r j to position t,
is stable and its t th element (which is
indeed r j ) is annoying for - s.
proof: Since - r is super-stable, we know that for all but an ffl 6 fraction of the t's, con j
and - s is stable (as well), where -
is annoying for -
r, we have
Combining Claims 2.2.12 and 2.2.14, we derive, for almost all positions t 2 [k], a lower bound for
the number of stable row-sequences that are annoyed by their t th element.
Claim 2.2.15 Suppose that Eq. (4) does not hold (for -). Then, there exists a set T ' [k] so that
and for every t 2 T there is a set of at least ffi 1
stable row-sequences so that
the t th position is annoying for each of these sequences.
proof: Combining Claims 2.2.12 and 2.2.14, we get that there is a set of super-stable row-
sequences A ' R so that A contains at least a ffi 1 fraction of R, and for every - r 2 A there
exists an index j - r 2 [k] so that for all but an ffl 6 fraction of the t 2 [k], the row-sequence - s (the shift of - r placing its j - r th element at position t)
is stable and
the t th position is annoying for it (i.e., for -
s). By a counting argument it follows that there is a
set T so that jT j - and for every t 2 T at least half of the - r's in A satisfy the above
(i.e., is stable and the t th position is annoying for - s). Fixing such a t 2 T , we consider
the set, denoted A t , containing these - r's; namely, for every -
the row-sequence - s
r (-r)
is stable and the t th position is annoying for it (i.e., for - s). Thus, we have established a mapping
from A t to a set of stable row-sequences which are annoyed by their t th position; specifically, -
r
is mapped to oe t\Gammaj - r (-r). Each row-sequence in the range of this mapping has at most k preimages
(corresponding to the k possible shifts which maintain its t th element). Recalling that A t contains
at least ffi 1\Delta jRj sequences, we conclude that the mapping's range must contain at least ffi 1
sequences, and the claim follows. 2
Combining Claims 2.2.15 and 2.2.13, we get a lower bound on the number of matrices which are
non-conforming for the j th column, 8j 2 T (where T is as in Claim 2.2.15). Namely,
Claim 2.2.16 Let T be as guaranteed by Claim 2.2.15 and suppose that j 2 T . Then, at least a
1fraction of the matrices are non-conforming for column-position j.
proof: By Claim 2.2.15, there are at least ffi 1
stable row-sequences that are annoyed by
th position. Out of these row-sequences, we consider a subset, denoted A, containing exactly
row-sequences. By Claim 2.2.13, for each -
r 2 A, at least a fraction of the matrices
containing the row-sequence - r are non-conforming for column-position j. We claim that almost all
of these matrices do not contain another row-sequence in A (here we use the fact that A isn't too
large); this will allow us to add-up the matrices guaranteed by each - r 2 A without worrying about
multiple counting. Namely,
subclaim 2.2.16.1: For every - r 2 R
proof of subclaim: By Axiom 3 (part 3), we get that for every i 0 6= i the i 0 -th row of m 2R M i (-r) is
uniformly distributed in R. Thus, for every i 0
(-r). The subclaim follows. 3
Using the subclaim, we conclude that for each -
r 2 A, at least a fraction of
the matrices containing the row-sequence - r are non-conforming for column-position j and do not
contain any other row-sequence in A. The desired lower bound now follows. Namely, let B denote
the set of matrices which are non-conforming for column-position j, let B
denote the set of matrices in B i (-r) which do not contain any row in A except for the i th row;
then
The claim follows. 2
The combination of Claims 2.2.15 and 2.2.16 yields a lower bound on the probability that a uniformly chosen
matrix is non-conforming for a uniformly chosen column position. For
a suitable choice of the constants above, this bound yields a contradiction to Claim 2.2.5. Thus,
Eq. (4) must hold for - as defined in Def. 2.2.10, and the lemma follows.
2.4 A Construction that Satisfies the Axioms
Clearly, the set of all k-by-k matrices over S satisfies Axioms 1-4. A more interesting and useful
set of matrices is defined as follows.
Construction 2.3 (basic construction): We associate the set S with a finite field and suppose
Furthermore, [k] is associated with k elements of the field so that 1 is the multiplicative
unit and i 2 [k] is the sum of i such units. Let M be the set of matrices defined by four field
elements as follows. The matrix associated with the quadruple (x; has the (i; th entry
Remark: The column-sequences correspond to the standard pairwise-independent sequences fr
Similarly, the row-sequences are expressed as fr
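For concreteness, the Basic Construction can be instantiated over a prime field as in the sketch below. The specific bilinear entry formula is an assumption made for illustration: it is one natural choice under which every row and every column is a pairwise-independent sequence of the form {r + t \Delta s}, matching the remark above, and under which fixing a consistent row and column leaves |S| degrees of freedom, as in Fact 2.4.1 below.

```python
# A minimal sketch of the Basic Construction over a prime field GF(p).
# The entry formula (a bilinear form in the quadruple) is an illustrative assumption.

def basic_matrix(x, x2, y, y2, k, p):
    """Return the k-by-k matrix associated with the quadruple (x, x2, y, y2).
    Row i, as a function of j, equals (x + i*x2) + j*(y + i*y2); column j,
    as a function of i, equals (x + j*y) + i*(x2 + j*y2)."""
    return [[(x + i * x2 + j * y + i * j * y2) % p
             for j in range(1, k + 1)]
            for i in range(1, k + 1)]

def row(m, i):      # the i-th row-sequence (1-indexed)
    return tuple(m[i - 1])

def col(m, j):      # the j-th column-sequence (1-indexed)
    return tuple(r[j - 1] for r in m)
```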
Proposition 2.4 The Basic Construction satisfies Axioms 1-4.
proof: Axiom 1 is obvious from the above remark. The right-shift of the sequence fr+js
is To prove that Axiom 3 holds, we rewrite the i th
row as fs
are pairwise independent and uniformly distributed
in S \Theta S which corresponds to the set of row-sequences. It remains to prove that Axiom 4 holds.
We start by proving the following.
Fact 2.4.1: Consider any i; j 2 [k] and two sequences -
that r
proof of fact: By the construction, there exists a unique pair (a; b) 2 S \Theta S so that a
every (existence is obvious and uniqueness follows by considering any two equations; e.g.,
Similarly, there exist a unique pair (ff; fi) so that ff
every We get a system of four linear equations in x; x
This system has rank 3 and thus jSj solutions, each defining a matrix
in M j
Using Fact 2.4.1, Axiom 4 follows since
jS \Theta Sj
and so does the proposition.
3 A Stronger Consistency Test and the PCP Application
To prove Lemma 1.3, we need a slightly stronger consistency test than the one analyzed in
Lemma 2.2. This new test is given access to three related oracles, each supplying assignments
to certain classes of sequences over S, and is supposed to establish the consistency of these oracles
with one function - : S 7! V . Specifically, one oracle assigns values to k 2 -long sequences viewed as
two-dimensional arrays (as before). The other two oracles assign values to k 3 -long sequences viewed
as 3-dimensional arrays, whose slices (along a specific coordinate) correspond to the 2-dimensional
arrays of the first oracle. Using Lemma 2.2 (and the auxiliary oracles) we will present a test which
verifies that the first oracle is consistent in an even stronger sense than established in Lemma 2.2.
Namely, not only that all entries in almost all rows of almost all 2-dimensional arrays are
assigned in a consistent manner, but all entries in almost all 2-dimensional arrays are assigned
in a consistent manner.
3.1 The Setting
Let S, k, R, C and M be as in the previous section. We now consider a family, M c
, of k-by-k
matrices with entries in C. The family M c
will satisfy Axioms 1-4 of the previous section. In
addition, its induced multi-set of row-sequences, denoted R, will correspond to the multi-set M;
namely, each row of a matrix in M c
will form a matrix in M (i.e., the sequence of elements of C
corresponding to an M c -matrix will correspond to an M-matrix). Put formally,
Axiom 5 For every
and every i 2 [k], there exists so that for every j 2 [k],
the (i; th entry of m equals the j th column of m (i.e., entry i;j equivalently,
row m). Furthermore, this matrix m is unique.
Analogously, we consider also a family, M r
, of k-by-k matrices the entries of which are elements in
R so that the rows 5 of each
correspond to matrices in M.
3.2 The Test
As before, \Gamma is a function assigning (k-by-k) matrices in M values which are k-by-k matrices over
some set of values V (i.e.,
) be (the supposedly corresponding)
function assigning k-by-k matrices over C (resp., R) values which are k-by-k matrices over V
Construction 3.1 (Extended Consistency Test):
1. consistency for sequences: Apply the consistency test of Construction 2.1 to \Gamma c
. Same for \Gamma r
2. correspondence test: Uniformly select a matrix
and a row i 2 [k], and compare the i th
row in \Gamma c
(m) to \Gamma(m), where is the matrix formed by the C-elements in the i th row
of m. Same for \Gamma r
The test accepts if both (sub-)tests succeed.
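The extended test can be rendered schematically as follows. The oracle interfaces, the samplers, the basic test of Construction 2.1, and the helpers mapping a row of an M c (resp., M r ) matrix to the corresponding M-matrix (Axiom 5) are all hypothetical placeholders, and the comparison in step 2 is entry-wise under the representation described in the text.

```python
import random

def extended_consistency_test(Gamma, Gamma_c, Gamma_r, basic_test,
                              sample_Mc, sample_Mr,
                              matrix_of_row_c, matrix_of_row_r, k):
    """One execution of the Extended Consistency Test (Construction 3.1).
    basic_test(oracle) runs the consistency test of Construction 2.1;
    matrix_of_row_c(m, i) returns the M-matrix induced by the i-th row of m
    (guaranteed unique by Axiom 5), and similarly for matrix_of_row_r."""
    # 1. consistency for sequences
    if not (basic_test(Gamma_c) and basic_test(Gamma_r)):
        return False
    # 2. correspondence test (for Gamma_c, and analogously for Gamma_r)
    for sample, oracle, matrix_of_row in ((sample_Mc, Gamma_c, matrix_of_row_c),
                                          (sample_Mr, Gamma_r, matrix_of_row_r)):
        m = sample()
        i = random.randrange(k)
        # compare the i-th row of the oracle's answer with Gamma on the induced M-matrix
        if oracle(m)[i] != Gamma(matrix_of_row(m, i)):
            return False
    return True
```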
Lemma 3.2 Suppose M;M c
satisfy Axioms 1-5. Then, for every constant fl ? 0, there exists
a constant ffl so that if a function (together with some functions \Gamma c
passes the extended consistency test with probability at least 1 there
exists a function - : S 7! V so that, with probability at least 1 \Gamma fl, the value assigned by \Gamma to a
uniformly chosen matrix matches the values assigned by - to each of the elements of m.
Namely,
Probm
M. The constant ffl does not depend on k and S. Furthermore, it is polynomially
related to fl.
The proof of the lemma starts by applying Lemma 2.2 to derive assignments to C (resp., R) which
are consistent with \Gamma c
on almost all rows of almost all k 3 -dimensional arrays (ie., M c
and M r
, respectively). It proceeds by applying a degenerate argument of the kind applied in the
proof of Lemma 2.2. Again, the reader may want to skip the proofs of all claims in first reading.
3.3 Proof of Lemma 3.2
We start by considering item (1) in the Extended Consistency Test. By Lemma 2.2, there exists
a function - c
so that the value assigned by \Gamma c
), to a
uniformly chosen row in a uniformly chosen matrix M c
matches with high probability
the values assigned by - c
) to each of the C-elements (resp., R-elements) appearing in this
5 Alternatively, one can consider a family, M r
, of k-by-k matrices the entries of which are elements in R so that the
columns of each
correspond to matrices in M. However, this would require to modify the basic consistency
test (of Construction 2.1), for these matrices, so that it shifts columns instead of rows.
row. Here "with high probability" means with probability at least 1 \Gamma ffi, where ffi ? 0 is a constant,
related to ffl as specified by Lemma 2.2. Namely, for a uniformly chosen i 2 [k] and a uniformly chosen matrix m of M c , with probability at least 1 \Gamma ffi,
entry i;j (\Gamma c (m)) = - c (entry i;j (m)) for every j 2 [k] (8)
(and similarly for \Gamma r and - r ).
3.3.1 Perfect Matrices and Typical Sequences
Our next step is to relate - c and - r to \Gamma. This is done
easily by referring to item (2) in the Extended Consistency Test. Specifically, it follows that the
value assigned by \Gamma, to a uniformly chosen matrix m 2 M, matches, with high probability, the
values assigned by - c
) to each of the columns (resp., rows) of m. That is
Definition 3.2.1 (perfect matrices): A matrix m 2 M is called perfect (for columns) if for every
j 2 [k] the j th column of \Gamma(m) equals the value assigned by - c to the j th column of m
(i.e., col j (\Gamma(m)) = - c (col j (m))). Analogously, m is
called perfect (for rows) if row i (\Gamma(m)) = - r (row i (m)), for every i 2 [k].
Claim 3.2.2 (perfect matrices):
(c) All but a ffi 1 fraction of the matrices in M are perfect for columns.
(r) All but a ffi 1 fraction of the matrices in M are perfect for rows.
proof: By the Correspondence (sub)Test, with probability at least 1 \Gamma ffl, a uniformly chosen row
in a uniformly chosen
is "given" the same values by \Gamma c
and by \Gamma (i.e., row i (\Gamma c
for On the other hand, by Eq. (8), with probability at least 1 \Gamma ffi,
a uniformly chosen row in a uniformly chosen
is "given" the same values by \Gamma c
and by
(i.e., entry i;j (\Gamma c
(entry i;j (m)), for i 2R [k] and all j 2 [k]). Thus, with probability
at least 1 uniformly chosen row in a uniformly chosen
is "given" the same
values by \Gamma and by - c
(i.e.,
(entry i;j (m)), for i 2R [k] and all j 2 [k]). Using
regarding M c
) and the "furthermore" part of Axiom 5, we get part (c) of the
claim (i.e., col j
similar argument holds for part (r). 2
A perfect (for columns) matrix "forces" all its columns to satisfy some property \Pi (specifically, the
value assigned by - c
to its column-sequences must match the value \Gamma of the matrix). Recall that
we have just shown that almost all matrices are perfect and thus force all their columns to satisfy
some property \Pi. Using a counting argument, one can show that all but at most a 1
fraction of
the column-sequences must satisfy \Pi in almost all matrices in which they appear. Namely,
Definition 3.2.3 (typical sequences): Let
We say that the column-sequence - c (resp.,
row-sequence - r) is typical if
(-c). Otherwise, we say that - c is non-typical.
Claim 3.2.4 All but at most an
fraction of the column-sequences (resp., row-sequences) are
typical.
We will only use the bound for the fraction of typical row-sequences.
proof: We mimic part of the counting argument of Claim 2.2.16. Let N be a set of non-typical
row-sequences, containing exactly
sequences. Fix any - r 2 N and consider the set of matrices
containing - r. By Axiom 3 (part 3 - regarding M), at most a ffi 2fraction of these matrices contain
some other row in N . On the other hand, by definition (of non-typical row-sequence), at least a
fraction of the matrices containing - r, have \Gamma disagree with - r
(-r) on -
r, and thus are non-perfect (for
rows). It follows that at least a ffi 2fraction of the matrices containing - r are non-perfect (for rows)
and contain no other row in N . Combining the bounds obtained for all -
r 2 N , we get that at least
a 2fraction of the matrices are not perfect (for rows). This contradicts Claim 3.2.2(r), and so
the current claim follows (for row-sequences and similarly for column-sequences). 2
3.3.2 Deriving the Conclusion of the Lemma
We are now ready to derive the conclusion of the Lemma. Loosely speaking, we claim that the
function - , defined so that -(e) is the value most frequently assigned by - c
to e, satisfies the claim
of the lemma.
Definition 3.2.5 (the function -): Let - c (-c) i denote the value assigned by - c to the i th element of - c
(recall that C i (e) denotes the set of column-sequences having e as the i th component). We consider - so that
-(e) is the value most frequently assigned by - c to e; that is, -(e) maximizes the probability, over a
uniformly chosen i 2 [k] and - c 2 C i (e), that - c (-c) i equals -(e), with ties
broken arbitrarily.
The proof that - satisfies the claim of Lemma 3.2 is a simplified version of the proof of Lemma 2.2. 6
We assume, on the contrary to our claim, that, for a uniformly chosen
Probm
so that entry i;j (\Gamma(m)) 6= -(entry i;j (m))
As in the proof of Lemma 2.2, we define a notion of an annoying row-sequence. Using the above
(contradiction) hypothesis, we first show that many row-sequences are annoying. Next, we show
that lower bounds on the number of annoying row-sequences translate to lower bounds on the
probability that a uniformly chosen matrix is non-perfect (for columns). This yields a contradiction
to Claim 3.2.2(c).
Definition 3.2.6 (a new definition of annoying rows): A row-sequence - r is said to be
annoying if there exists a j 2 [k] so that the j th element in - r (-r) differs from -(r j ).
Using Claim 3.2.2(r), we get
Claim 3.2.7 Suppose that Eq. (9) holds and let
. Then, at least a
k fraction of the
row-sequences are annoying.
6 The reader may wonder how it is possible that a simpler proof yields a stronger result; as the claim concerning
the current - is stronger. The answer is that the current - is defined based on a more restricted function over C and
there are also stronger restrictions on \Gamma. Both restrictions are due to facts that we have inferred using Lemma 2.2
proof: Combining Eq. (9) and Claim 3.2.2(r), we get that with probability at least
a uniformly chosen matrix is perfect for rows and contains some entry, denoted (i; j), for
which the \Gamma value is different from the - value (i.e., entry i;j (\Gamma(m)) 6= -(entry i;j (m))). Since the
-value of all rows of m matches the \Gamma value, it follows that the i th row of m is annoying. Thus,
at least a fl 1 fraction of the matrices contain an annoying row-sequence. Using Axiom 3 (part 2 -
regarding M), we conclude that the fraction of annoying row-sequences must be as claimed. 2
A key observation is that each row-sequence that is both typical and annoying yields many matrices
which are non-perfect for columns. Namely,
Claim 3.2.8 Suppose that a row-sequence -
r is both typical and annoying. Then, at least a
fraction of the matrices, containing the row-sequence - r, are non-perfect for columns.
proof: Since - r is annoying, there exists a j 2 [k] so that the j th component of - r
(-r) (which is the value assigned to r j ) is different from -(r j ). Let us denote by v the value - r
assigns to r j . Note that v 6= -(r j ). Consider the probability space defined by uniformly selecting
r is typical it follows that in almost all of these matrices the value
assigned to r j by the \Gamma equals v; namely,
By Axiom 4 (regarding M), the j th column of m is uniformly distributed in C i (r j ). Now, using
the definition of the function - and the accompanying notations, we get
The inequality holds since v 6= -(r j ) and by - 's definition q r j
Combining Eq. (10)
and (11), we get
Prob i;m (entry i;j (\Gamma(m)) 6=- c
and the claim follows. 2
Combining Claims 3.2.7, 3.2.4 and 3.2.8, we get a lower bound on the number of matrices which
are non-perfect for columns. Namely,
Claim 3.2.9 Suppose that Eq. (9) holds and let
2. Then, at least a fl 2fraction of the
matrices are non-perfect for columns.
proof: By Claims 3.2.7 and 3.2.4, at least a
fraction of the row-sequences are
both annoying and typical. Let us consider a set of exactly
\Delta jRj such row-sequences, denoted
A. Mimicking again the counting argument part of Claim 2.2.16, we bound, for each -
r 2 A, the
fraction of non-perfect (for columns) matrices which contain - r but no other row-sequence in A.
Using an adequate setting of ffi 2 and fl 2 , this fraction is at least 1. Summing the bounds achieved
for all -
r 2 A, the claim follows. 2
Using a suitable choice of fl (as a function of ffl), Claim 3.2.9 contradicts Claim 3.2.2(c), and so
Eq. (9) cannot hold. The lemma follows.
3.4 Application to Low-Degree Testing
Again, the set of all k-by-k-by-k arrays over S satisfies Axioms 1-5. A more useful set of 3-
dimensional arrays is defined as follows.
Construction 3.3 (main construction): Let M be as in the Basic Construction (i.e., Construction
2.3). We let M c
be the set of matrices defined by applying the Basic Construction
to the element-set Specifically, a matrix in M c
is defined by the quadruple (x;
where each of the four elements is a pair over S, so that the (i; th entry in the matrix equals
are viewed as two-dimensional vectors over the finite field S
and are scalars in S. The (i; th entry is a pair over S which represents a pairwise independent
sequence (which equals an element in
Clearly, we get:
Claim 3.4 Construction 3.3 satisfies Axioms 1-5.
Combining all the above with the low-degree test of [GLRSW, RS96] using the results claimed
there 7 , we get a low-degree test which is sufficiently efficient to be used in the proof of the PCP-
Characterization of NP.
Construction 3.5 (Low Degree Test): Let f : F n 7! F , where F is a field of prime cardinality,
and d be an integer so that jF j ? 4(d
and M r
be as in Construction 3.3, with
be
auxiliary tables (which should contain the corresponding f-values). The low degree test consists of
1. Applying the Extended Consistency Test
7!
2. Selecting uniformly a matrix m 2 M and testing that the Polynomial Interpolation Condition
(cf., [GLRSW]) holds for each row; namely, we test that
for all
3. Select uniformly a matrix in M and test matching of random entry to f . Namely, select
uniformly check if entry i;j
The test accepts if and only if all the above three sub-tests accept.
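Schematically, one execution of the test can be sketched as follows. The oracles, the matrix sampler, and the extended consistency test are placeholders, and the interpolation condition is written in the standard finite-difference form (the (d+1)-st finite difference of a degree-d polynomial vanishes); the exact indexing used in [GLRSW] may differ from what is shown here.

```python
from math import comb
import random

def interpolation_ok(values, d, p):
    """Standard degree-d interpolation condition for f-values along an
    arithmetic progression: the (d+1)-st finite difference must vanish."""
    return sum((-1) ** t * comb(d + 1, t) * values[t]
               for t in range(d + 2)) % p == 0

def low_degree_test(Gamma, Gamma_c, Gamma_r, f, extended_test, sample_M, d, p, k):
    """One execution of the Low Degree Test (Construction 3.5); assumes k >= d + 2."""
    # 1. extended consistency of the three oracles
    if not extended_test(Gamma, Gamma_c, Gamma_r):
        return False
    # 2. interpolation condition on every row of a random matrix
    m = sample_M()
    vals = Gamma(m)                       # k-by-k matrix of alleged f-values
    for i in range(k):
        if not interpolation_ok(vals[i][:d + 2], d, p):
            return False
    # 3. spot-check a random entry against the table of f itself
    m = sample_M()
    i, j = random.randrange(k), random.randrange(k)
    return Gamma(m)[i][j] == f(m[i][j])
```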
Proposition 3.6 Let f : F n 7! F , where F is a field, and let ' j. Then, the Low
Degree Test of Construction 3.5 requires O(') randomness and query length, poly(') answer length
and satisfies:
completeness: If f is a degree-d polynomial, then there exist
and
so that the test always accepts.
7 Rather than using much stronger results obtained via a more complicated analysis, as in [ALMSS], which rely
on the Lemma of [AS].
soundness: For every there exists an ffl ? 0 so that for every f which is at distance
at least ffi from any degree-d polynomial and for every
and
, the test rejects with probability at least ffl. Furthermore, the constant ffl is a
polynomial in ffi which does not depend on n; d and F .
As a corollary, we get Lemma 1.3.
proof: As usual, the completeness clause is easy to establish. We thus turn to the soundness
requirement. By Claim 3.4, we may apply Lemma 3.2 to the first sub-test and infer that either the
first sub-test fails with some constant probability (say ffl 1 ) or there exists a function - : F n 7! F so
that with very high constant probability (say
entry i;j
holds for all On the other hand, by [GLRSW] (see also [S95, Thm 3.3] and
[RS96, Thm 5]), either
or - is very close (specifically at distance at most 1=(d polynomial. A key
observation is that the Main Construction (i.e., Construction 3.3) has the property that rows in
are distributed identically to the distribution in Eq. (13). Thus, for every j 2 [k] either
or - is at distance at most ffi 2
some degree-d polynomial. However, we claim
that in case Eq. (14) holds, the second sub-test will reject with constant probability. The claim
is proven by first considering copies of the GLRSW Test (i.e., the test in Eq. (14)).
Using Chebyshev's Inequality and the hypothesis by which each copy rejects with probability at
least 1=2(d we conclude that the probability that none of these copies rejects is bounded
above by 2(d+2) 2
1. Thus, the second sub-test must reject with probability at least ffl 2
accounts for the substitution of the - values by the entries in \Gamma(\Delta). We conclude that -
must be -close to a degree-d polynomial or else the test rejects with too high probability (i.e.,
Finally, we claim that if f disagrees with - on of the inputs then the third sub-test
rejects with probability at least ffl 3
the distance from f to - is bounded by the sum
of the distances of f to the matrix and of - to the matrix). The proposition follows using some
arithmetic: specifically, we set the constants (using Lemma 3.2) and verify that the required bounds hold.
4 Proof of Lemma 1.1
There should be an easier and more direct way of proving Lemma 1.1. However, having proven Lemma 2.2,
we can apply it 8 to derive a short proof of Lemma 1.1. To this end we view '-multisets over S
8 This is indeed an overkill. For example, we can avoid all complications regarding shifts (in the proof of
Lemma 2.2).
as k-by-k matrices, where
'. Recall that the resulting set of matrices satisfies Axioms 1-4.
Thus, by Lemma 2.2, in case the test accepts with probability at least 1 \Gamma ffl, there exists a function
such that
is the set of all k-multisets over S and E l (A) is the set of all l-multisets extending A. We
can think of this probability space as first selecting B 2R S k 2
and next selecting a k-subset A in
B. Thus,
where denotes the set of all k-multisets contained in B. This implies
as otherwise Eq. (15) is violated. (The probability that a random k-subset hits a subset of densityk
is at least 1.) The lemma follows.
A previous version of this paper [GS96] has stated a stronger version of Lemma 1.1, where the
sequences F are claimed to be identical (rather than different on
at most k locations), for a fraction of all possible Unfortunately, the proof
given there was not correct - a mistake in the concluding lines of the proof of Claim 4.2.9 was
found by Madhu Sudan. Still we conjecture that the stronger version holds as well, and that it can
be established by a test which examines two random 1)-extensions of a random k-subset.
Acknowledgment
We are grateful to Madhu Sudan for pointing out an error in an earlier version, and for other helpful
comments.
--R
Proof Verification and Intractability of Approximation Problems.
Probabilistic Checkable Proofs: A New Characterization of NP.
Transparent Proofs and Limits to Approximation.
Checking Computations in Polylogarithmic Time.
Hiding Instances in Multioracle Queries.
Free Bits
Efficient Probabilistically Checkable Proofs and Applications to Approximation.
Improved Non-Approximability Results
Approximating Clique is almost NP-complete
On the Power of Multi-Prover Interactive Pro- tocols
Some Improvement to Total Degree Tests.
A Taxonomy of Proof Systems.
Proofs that Yield Nothing but their Validity or All Languages in NP Have Zero-Knowledge Proof Systems
A Combinatorial Consistency Lemma with application to proving the PCP Theorem.
The Knowledge Complexity of Interactive Proof Systems.
Clique is Hard to Approximate within n 1
Fully Parallelized Multi Prover Protocols for NEXP-time
Algebraic Methods for Interactive Proof Systems.
Testing Polynomial Functions Efficiently and over Rational Domains.
Robust Characterization of Polynomials with Application to Program Testing.
Efficient Checking of Polynomials and Proofs and the Hardness of Approximation Problems.
--TR
--CTR
Eli Ben-Sasson , Oded Goldreich , Prahladh Harsha , Madhu Sudan , Salil Vadhan, Robust pcps of proximity, shorter pcps and applications to coding, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Eli Ben-Sasson , Madhu Sudan, Robust locally testable codes and products of codes, Random Structures & Algorithms, v.28 n.4, p.387-402, July 2006 | parallelization of probabilistic proof systems;probabilistically checkable proofs PCP;low-degree tests |
345777 | Reducibility and Completeness in Private Computations. | We define the notions of reducibility and completeness in (two-party and multiparty) private computations. Let g be an n-argument function. We say that a function f is reducible to a function g if n honest-but-curious players can compute the function f n-privately, given a black box for g (for which they secretly give inputs and get the result of operating g on these inputs). We say that g is complete (for private computations) if every function f is reducible to g.In this paper, we characterize the complete boolean functions: we show that a boolean function g is complete if and only if g itself cannot be computed n-privately (when there is no black box available). Namely, for n-argument boolean functions, the notions of completeness and n-privacy are complementary. This characterization provides a huge collection of complete functions any nonprivate boolean function!) compared to very few examples that were given (implicitly) in previous work. On the other hand, for nonboolean functions, we show that these two notions are not complementary. | Introduction
We consider (two-party and multi-party) private computations. Quite informally, given an arbitrary n-
argument function f , a t-private protocol should allow n players, each possessing an individual secret input,
to satisfy simultaneously the following two constraints: (1) (Correctness): all players learn the value of
f and (2) (Privacy): no set of at most t (faulty) players learns more about the initial inputs of other
players than is implicitly revealed by f 's output. This problem, also known as secure computation, have
been examined in the literature with two substantially different types of faulty players - malicious (i.e.
Byzantine) players and honest-but-curious players. Below we discuss some known results with respect to
each of these two types of players.
Secure computation for malicious players. Malicious players may deviate from the prescribed
protocol in an arbitrary manner, in order to violate the correctness and privacy constraints. The first
This paper is based on (but does not completely cover) two conference papers: a 1991 paper by Kilian [K-91] and a 1994 paper
by Kushilevitz, Micali, and Ostrovsky [KMO-94].
y NEC Research Institute, New Jersey. E-mail: joe@research.nj.nec.com .
z Department of Computer Science, Technion. Research supported by the E. and J. Bishop Research Fund and by
the Fund for the Promotion of Research at the Technion. Part of this research was done while the author was at Aiken
Computation Lab., Harvard University, Supported by research contracts ONR-N0001491-J-1981 and NSF-CCR-90-07677.
E-mail: eyalk@cs.technion.ac.il .
x Laboratory for Computer Science, MIT. Supported by NSF Grant CCR-9121466.
- Bell Communication Research, MCC-1C365B 445 South Street Morristown, New Jersey 07960-6438. E-mail:
rafail@bellcore.com.
Figure 1: The number of faulty players, t, tolerable in each of the basic secure computation models (the computational model of [Yao-82, GMW-87], assuming trapdoor permutations, and the private-channels model), for honest-but-curious and for malicious players.
general protocols for secure computation were given in [Yao-82, Yao-86] for the two-party case, and by
[GMW-87] for the multi-party case. Other solutions were given in, e.g., [GHY-87, GV-87, BGW-88,
CCD-88, BB-89, RB-89, CKOR-97] based on various assumptions (either intractability assumptions or
physical assumptions such as the existence of private (untappable) communication channels between each
pair of players). These solutions give t-privacy for ndepending on the assumption made.
(See
Figure
1 for a summary of the main results.)
Secure computation for honest-but-curious players. Honest-but-curious players must always
follow the protocol precisely but are allowed to "gossip" afterwards. Namely, some of the players may
put together the information in their possession at the end of the execution in order to infer additional
information about the original individual inputs. It should be realized that in this honest-but-curious model
enforcing the correctness constraint is easy, but enforcing the privacy constraint is hard. The honest-but-
curious scenario is not only interesting on its own (e.g., for modeling security against outside listeners or
against passive adversary that wants to remain undetected); its importance also stems from "compiler-
type" theorems, such as the one proved by [GMW-87] 1 (with further extensions in many subsequent papers,
for example, [BGW-88, CCD-88, RB-89]). This type of theorem provides algorithms for transforming any
t-private protocol with respect to honest-but-curious players into a t 0 -private protocol with respect to
malicious players (t 0 - t). Surprisingly, much of the research effort was devoted to the more complicated
case of malicious players, while the case of honest players is far from being well understood. In this paper
we examine the latter setting.
Information theoretic privacy. 2 The information theoretic model was first examined by [BGW-88,
CCD-88]. In particular, they prove that every function is dn=2e-private (in the setting of honest-but-
curious players; see Figure 1). The information theoretic model was then the subject of considerable work
(e.g., [CKu-89, BB-89, CGK-90, CGK-92, CFGN-96, KOR-96, HM-97, BW-98]). Particularly, [CKu-89]
characterized the boolean functions for which n-private protocols exist: an n-argument boolean function
f is n-private if and only if it can be represented as f(x 1 ; : : : ; x n ) = f 1 (x 1 ) \oplus f 2 (x 2 ) \oplus : : : \oplus f n (x n ),
where each f i is boolean. Namely, f is n-private if and only if it is the exclusive-or of n local functions.
An immediate corollary of this is that most boolean functions are not n-private (even with respect to
honest-but-curious players).
Our contribution. We formally define the notion of reducibility among multi-party protocol problems.
We say that f is reducible to g, if there is a protocol that allows the n players to compute the value of f
1 The reader is referred to [G-98] for a fully detailed treatment of the [Yao-82, GMW-87] results.
2 As opposed to computational privacy.
n-privately, in the information theoretic sense, just by repeatedly using a black-box (or a trusted party) for
computing g. That is, in any round of the protocol, the players secretly supply arguments to the black-box
and then the black-box publicly announces the result of operating g on these arguments. We stress that
the only means of communication among the players is by interacting with the black-box (i.e., evaluating
g). For example, it is clear that every function is reducible to itself (all players secretly give their private
inputs to the black-box and it announces the result). Naturally, we can also define the notion of
completeness. A function g is complete if every function f is reducible to g. The importance of this notion
relies on the following observation:
If g is complete, and g can be computed t-privately in some "reasonable" setting 3 (such as
the settings of [GMW-87, BGW-88] etc.), then any function f can be computed t-privately in
the same setting. Moreover, from our construction a stronger result follows: if in addition the
implementation of g is efficient then so is the implementation of f (see below).
The above observation holds since our definition of reduction requires the highest level of privacy (n), the
strongest notion of privacy (information theoretic), a simple use of g (black box), and it avoids making any
(physical or computational) assumptions. Hence the straightforward simulation, in which each invocation
of the black-box for g is replaced by an invocation of a "t-private" protocol for g, works in any "reasonable"
setting (i.e. any setting which is not too weak to prevent simulation) and yields a "t-private" protocol
for f . Previously, there was no easy way to translate protocols from one model (such as the models of
[Yao-82, GMW-87, BGW-88, CCD-88, RB-89, FKN-94]) to other models.
It can be seen that if g is complete then g itself cannot be n-private. The converse is the less obvious
part: since the definition of completeness requires that the same function g will be used for computing all
functions f , and since the definition of reductions seems very restrictive, it may be somewhat surprising
that complete functions exist at all. Some examples of complete functions implicitly appear in the literature
(without discussing the notions of reducibility and completeness). The first such results were shown in [Yao-82,
GMW-87, K-88].
In this work we prove the existence of complete functions for n-private computations. Moreover, while
previous research concentrated on finding a single complete function, our main theorem characterizes all
the boolean functions which are complete:
Main Theorem: For all n - 2, an n-argument boolean function g is complete if and only if g is not n-private.
Our result thus shows a very strong dichotomy: every boolean function g is either "simple enough" so
that it can be computed n-privately (in the information theoretic model), or it is "sufficiently expressive" so
that a black-box for it enables computing any function (not only boolean) n-privately (i.e., g is complete).
We stress that there is no restriction on g, besides being a non-n-private boolean function, and that no
relation between the function g and the function f that we wish to compute is assumed. Note that using
the characterization of [CKu-89] it is easy to determine whether a given boolean function g is complete.
That is, a boolean function g is complete if and only if it cannot be represented as g(x 1 ; : : : ; x n ) = g 1 (x 1 ) \oplus g 2 (x 2 ) \oplus : : : \oplus g n (x n ), where each g i is boolean.
Some features of our result. To prove the completeness of a function g as above, we present an
appropriate construction with the following additional properties:
ffl We consider the most interesting scenario, where both the reduced function, f , and the function g are
n-argument functions (where n is the number of players). This enables us to organize the reduction in
rounds, where in each round each player submits a value of a single argument to g (and the value of
3 A setting consists of defining the type of communication, type of privacy, assumptions made etc.
each argument is supplied by exactly one player). 4 Thus, no player is "excluded" at any round from
the evaluation of g. Our results however remain true even if the number of arguments of g is different
from the number of arguments of f .
ffl Our construction evaluates the n-argument function g only on a constant number of n-tuples (hence,
a partial implementation of g may be sufficient).
ffl When we talk about privacy, we put no computational restrictions on the power of the players; hence
we get information-theoretic privacy. However, when we talk about protocols, we measure their
efficiency in terms of the computational complexity of f (i.e., the size of a circuit that computes f );
and in terms of a parameter k (our protocol allows an error probability of 2^{-\Omega(k)}). The protocol we
introduce is efficient (polynomial) in all these measures. 5 We stress, though, that the n-tuples with
which we use the function g are chosen non-uniformly (namely, they are encoded in the protocol) for
the particular choices of g and n (the size of the network). These n-tuples, however, depend neither on the size of the inputs to the protocol nor on the function f .
Our main theorem gives a full characterization of the boolean functions g which are complete (those
that are not n-private). When non-boolean functions are considered, it turns out that the above simple
characterization is no longer true. That is, we show that there are (non-boolean) functions which are not
n-private, yet are not complete.
Overview of the proof. Our proof goes along the following lines:
1. We define the notion of embedded-OR for two-argument functions and appropriately generalize this
notion to the case of n-argument functions. We then show that if an n-argument function is not
n-private then it contains an embedded-OR. For the case n = 2 this follows immediately from the
characterization of [CKu-89]; the case n ? 2 requires some additional technical work.
2. We show how an embedded-OR can be used to implement an Oblivious Transfer (OT) channel/primitive. 6
(It should be emphasized that an OT channel in a multi-party setting has the additional requirement
that listeners do not get any information; we prove however that this property is already implied by
the basic properties of two-party OT). Finally, it follows from the work of [GHY-87, GV-87, K-88,
BG-89, GL-90] that n-private computation of any function f can be implemented given OT channels.
All together, our main theorem follows.
Organization of the paper. In Section 2 we specify our model and provide some necessary definitions.
In Section 3 we prove our main lemma that shows the existence of an embedded-OR in every non n-
private, boolean function; In Section 4 we use the main lemma (i.e., the existence of an embedded-OR)
to implement OT channels between players; In Section 5 we use the construction of OT channels to prove
our main theorem. Finally, Section 6 contains a discussion of the results and some open problems. For
completeness, we include in the appendix a known protocol for private computations using OT channels
(including its formal proof).
Which player submits which argument is a permutation specified by the reduction.
Evaluating g on any assignment is assumed to take a unit time. All other operations (communication, computation steps,
etc.) are measured in the regular way.
6 Oblivious transfer is a protocol for two players: a sender that holds two bits b0 and b1 and a receiver that holds a selection
bit s. At the end of the protocol the receiver gets the bit bs but has no information about the value of the other bit, while
the sender has no information about s.
Model and Definitions
Let f be an n-argument function defined over a finite domain D. Consider a collection of n - 2 synchronous,
computationally unbounded players that communicate using a black-box for g, as described
below. At the beginning of an execution, each player P i has an input x i 2 D. In addition, each player
can flip unbiased and independent random coins. We denote by r i the string of random bits flipped by P i
(sometimes we refer to the string r i as the random input of P i ). The players wish to compute the value
of a function f(x 1 To this end, they use a prescribed protocol F . In the i-th round of the
protocol, every processor P j secretly sends a message m i
j to the black-box g. 7 The protocol F specifies
which argument to the black-box is provided by which player. The black-box then publicly announces the
result of evaluating the function g on the input messages.
Formally, with each round i the protocol associates a permutation - i . The value computed by the
black-box at round i, denoted s i , is s i = g(m i - i (1) ; : : : ; m i - i (n) ). Each message m i j
sent by P j to the black-box in the i-th round, is determined by its input x j , its random input r j , and the outputs of the
black-box in previous rounds, s 1 ; : : : ; s i\Gamma1 . We say that the protocol F computes the function f if the
last value (or the last sequence of values in the case of non-boolean f) announced by the black-box equals
the value of f(x 1 ; : : : ; x n ) with probability at least 1 \Gamma 2^{-\Omega(k)}, where k is a (confidence) parameter and the
probability is over the choice of r 1 ; : : : ; r n .
Let F be an n-party protocol, as described above. The communication S(~x; ~r) is the concatenation
of all messages announced by the black-box, while executing F on inputs x inputs
We often consider the communication S while fixing ~x and some of the r i 's; in this case, the
communication should be thought of as a random-variable where each of the r i 's that were not fixed is
chosen according to the corresponding probability distribution. For example, if T is a set of players then
variable describing the communication when each player P i holds input x i , each
player in T holds random input r i , and the random inputs for all players in T are chosen randomly. The
definition of privacy considers the distribution of such random variables.
Definition 1 Let F be an n-party protocol which computes a function f , and let T ' f1; : : : ; ng be a
set of players (coalition). We say that coalition T does not learn any additional information from the
execution of F if the following holds: For every two input vectors ~x and ~y that agree on their T entries
(i.e., x i = y i for every i 2 T ) and satisfy f(~x) = f(~y), for every choice of random inputs for the coalition's
parties, fr i g i2T , and for every communication S,
Pr[ S(~x; fr i g i2T ) = S ] = Pr[ S(~y; fr i g i2T ) = S ] :
Informally, this definition implies that for all inputs which "look the same" from the coalition's point
of view (and for which, in particular, f has the same value), the communication also "look the same" (it
is identically distributed). Therefore, by executing F , the coalition T cannot infer any information on the
inputs of T , other than what follows from the inputs of T and the value of the function.
Definition 2 A protocol F computing f , using a black-box g, is t-private if any coalition T of at most t
players does not learn any additional information from the execution of the protocol. A function f is
t-private (with respect to the black-box g) if there exists a t-private protocol that uses the black-box g and
computes f .
Definition 3 Let g be an n-argument function. We say that the black-box g (alternatively, the function
g) is complete if every function f is n-private with respect to the black-box g.
7 Notice that we do not assume private point-to-point communication among players. On the other hand, we do allow
private communication between players and the black-box for computing g.
Oblivious Transfer is a protocol for two players S, the sender , and R, the receiver. It was first defined
by Rabin [R-81] and since then was studied in many works (e.g., [W-83, FMR-85, K-88, IL-89, OVY-91]).
The variant of the OT protocol that we use here, which is often referred to as 1-out-of-2 OT, was originally defined in
[EGL-85]. It was shown equivalent to other notions of OT (see, for example [R-81, EGL-85, BCR-86, B-86,
C-87, K-88, CK-88]). The formalization of OT that we give is in terms of the probability distribution of
the communication transcripts between the two players:
Definition 4 Oblivious Transfer (OT): Let k be a (confidence) parameter. The sender S initially has two
bits b 0 and b 1 and the receiver R has a selection bit c. After the protocol completion the following holds:
Correctness: Receiver R gets the value of b c with probability greater than 1 \Gamma 2^{-\Omega(k)}, where the probability
is taken over the coin-tosses of S and R. More formally, let r S , r R 2 f0; 1g poly(k) be the random tapes of
S and R respectively, and denote the communication string by comm(fb
(Again, when one (or both) of r S ; r R is unspecified then comm becomes a random variable.) Then, for
all k and for all c; b the following holds:
Pr r S ;r R
(R(c;
(R(c; r R ; comm) denotes the output of receiver R when it has a selection bit c, random input r R and
the communication in the protocol is comm.)
Sender's Privacy: Receiver R does not get any information about b 1\Gammac . (In other words, R has the
"same view" in the case where b and in the case where b 1). Formally, for all k, for all
c; b c 2 f0; 1g, for all r R and for all communication comm:
Pr r S
Receiver's Privacy: Sender S does not get any information about c. (In other words, S has the "same
view" in the case where and in the case where c = 1). Formally, for all k, for all b
for all r S and for all communication comm:
Pr r R
REMARK: We emphasize that both S and R are honest (but curious) and assumed to follow the
protocol. When OT is defined with respect to cheating players, it is usually allowed that with probability
information will leak. This however is not needed for honest players.
3 A New Characterization of n-private Boolean Functions
In this section we prove our main lemma which establishes a new combinatorial characterization of the
family of n-private boolean functions. First, we define what it means for a two-argument boolean function
to have an "embedded-OR" and use [CKu-89] to claim that any two-argument boolean function which is
not 1-private contains an embedded-OR. We then generalize the definition and the claim to multi-argument
functions in the appropriate way.
Definition 5 We say that a two-argument function h contains an embedded-OR if there exist inputs
x 0 6= x 1 and y 0 6= y 1 , and an output value oe, such that h(x 0 ; y 0 ) = oe while h(x 0 ; y 1 ) 6= oe, h(x 1 ; y 0 ) 6= oe and h(x 1 ; y 1 ) 6= oe.
Definition 6 We say that an n-argument (n - 2) function g contains an embedded-OR if there exist
two indices i 6= j and values a m for every index m outside fi; jg, such that the two-argument function
h(x; y), obtained from g by fixing the m-th argument to a m for every such m and letting x be the i-th argument and y be the j-th argument,
contains an embedded-OR.
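Over a finite domain, Definitions 5 and 6 can be checked by brute force. The following sketch assumes g is given as a Python function over a finite domain D (a modeling assumption for illustration), and follows the formulation of Definition 5 as stated above.

```python
from itertools import product

def has_embedded_or_2(h, D):
    """Definition 5: exist x0 != x1, y0 != y1 and a value sigma with
    h(x0, y0) = sigma while the other three values differ from sigma."""
    for x0, x1 in product(D, repeat=2):
        if x0 == x1:
            continue
        for y0, y1 in product(D, repeat=2):
            if y0 == y1:
                continue
            sigma = h(x0, y0)
            if all(h(a, b) != sigma for a, b in ((x0, y1), (x1, y0), (x1, y1))):
                return True
    return False

def has_embedded_or(g, D, n):
    """Definition 6: some pair of coordinates i != j, with all other
    coordinates fixed, induces a two-argument function with an embedded-OR."""
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for fixed in product(D, repeat=n - 2):
                def h(x, y, i=i, j=j, fixed=fixed):
                    args = list(fixed)
                    args.insert(min(i, j), x if i < j else y)   # smaller index first
                    args.insert(max(i, j), y if i < j else x)
                    return g(*args)
                if has_embedded_or_2(h, D):
                    return True
    return False
```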
The following facts are proven in [CKu-89] (or follow trivially from it):
1. An n-argument boolean function f is dn=2e-private if and only if it can be written as f(x 1 ; : : : ; x n ) = f 1 (x 1 ) \oplus f 2 (x 2 ) \oplus : : : \oplus f n (x n ), where each f i is boolean (a brute-force check of this form is sketched after this list).
2. A two-argument boolean function f is not 1-private if and only if it contains an embedded-OR.
3. If an n-argument boolean function is dn=2e-private then it is n-private.
4. An n-argument boolean function f is dn=2e-private if and only if in every partition of the indices f1; : : : ; ng into two sets S and - S, each of size at most dn=2e, the two-argument boolean function f S , obtained from f by viewing the arguments indexed by S as a single argument and the arguments indexed by - S as a single argument, is 1-private.
Lemma 1 (Main Lemma) Let g be a boolean, n-argument function. The function g is
not dn=2e-private if and only if it contains an embedded-OR.
Proof: Clearly, if g contains an embedded-OR then there is a partition of the indices, as in Fact 4, such
that the corresponding two-argument function g S contains an embedded-OR (e.g., if are the indices
guaranteed by Definition 6 then include the index i in S, the index j in -
S, and partition the other
indices arbitrarily into two halves between S and -
S). Hence, g S is not 1-private and so, by Fact 4, g is not
dn=2e-private.
For the other direction, since g is not dn=2e-private then, again by Fact 4, there is a partition
S of
the indices f1; : : : ; ng such that g S is not 1-private. For simplicity of notation, we assume that n is even
and that S = f1; : : : ; n=2g. By Fact 2, the two-argument function g S contains an embedded-OR. Hence,
by Definition 5, there exist inputs u; v; w; z and a value oe 2 f0; 1g which form the following structure:
where u 6= v and w 6= z. To complete the proof, we will show below that it is possible to choose these
four inputs so that u i 6= v i for exactly one coordinate i and w j 6= z j for exactly one coordinate j (this will
show that g satisfies the condition of Definition 6). To this end, we will first show how based on the inputs
above we can find u 0 and v 0 which are different in exactly one coordinate. Then, based on the new u
and a similar argument, we can find w which are different in exactly one coordinate. All this process is
done in a way that maintains the OR-like structure, and therefore, by using the above values of i; j, fixing
all the other arguments in S to u 0
k and all the other arguments in -
S to w 0
k , we get that g itself
contains an embedded-OR.
ng be the set of indices on which u and v disagree (i.e., indices k such that u k 6= v k ).
Define the following sets of vectors: Tm is the set of all vectors that can be obtained from the vector u by
replacing the value u k in exactly m coordinates from L (in which v k 6= u k ) by the value v k . In particular,
fvg. In addition, we define the following two sets of vectors:
and
where w and z are the specific vectors we choose above. In particular, we have
We now claim that there must exist Namely, the vector u 0 is in X 1 , the vector v 0 is in
differ in exactly one coordinate. Suppose, towards a contradiction, that this is not true (i.e.,
no such We will show that this implies that Tm ' X 1 , for all contradicting the
fact that v which is in T jLj belongs to X 2 . The proof is by induction. It is true for contains
only u which is in X 1 . Suppose the induction hypothesis holds for m. That is, Tm ' X 1 . For each vector
x in Tm+1 , there is a vector in Tm which differs from x in exactly one coordinate. Since we assumed that
as above do not exist, this immediately implies that x is also in X 1 hence Tm+1 ' X 1 , as needed.
Therefore, we reached a contradiction which implies the existence of u That is, we found
that differ in a single index i (i.e., u 0
and such that u still form an OR-like structure:
oe
A similar argument shows the existence of w that differ in a single index j and such that the vectors
form an OR-like structure:
oe
This shows that g contains an embedded-OR (with indices required by Definition 6). 2
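The inductive argument in the proof above is constructive: walking from u to v one disagreeing coordinate at a time must cross from X 1 to X 2 at some adjacent pair. A minimal sketch of this walk, with membership in X 1 abstracted as a predicate (and X 2 treated as its complement for simplicity):

```python
def find_adjacent_pair(u, v, in_X1):
    """Walk from u to v, changing one disagreeing coordinate at a time, and
    return a consecutive pair (u', v') on the walk that differs in exactly one
    coordinate, with u' in X1 and v' outside X1.  Assumes in_X1(u) is True
    and in_X1(v) is False, as in the proof."""
    cur = list(u)
    prev = tuple(cur)
    for t, (a, b) in enumerate(zip(u, v)):
        if a == b:
            continue
        cur[t] = b                      # flip the t-th disagreeing coordinate
        nxt = tuple(cur)
        if in_X1(prev) and not in_X1(nxt):
            return prev, nxt            # the desired u', v'
        prev = nxt
    raise AssertionError("u in X1 and v outside X1 guarantee a crossing point")
```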
Constructing Embedded Oblivious Transfer
The first, very simple, observation is that given a black-box for a function g that contains an embedded-
OR, we can actually compute the OR of two bits. That is, suppose that the n players wish to compute
is a bit held by player P k and b ' is a bit held by player P ' . Let
the indices and inputs as guaranteed by Definitions 5 and 6. Then, player P k will provide the black box
with the i-th argument which is x b k
(i.e., if b then the argument provided by P k is x 0 and if b
then the argument is x 1 ) and player P ' will provide the black box with the j-th argument which is x b '
The other players will provide the fixed values a in an
arbitrary order. The black-box will answer with the value
which is oe if OR(b different than oe if OR(b Hence, we showed how to compute
Our main goal in this section is to show how, based on a black-box that can compute OR we can
implement an Oblivious Transfer (OT) protocol. We start with the two-party case
proceed to the general case which builds upon the two-party case.
4.1 The Two-Party Case
In this section we show how to implement a two-party OT protocol. We start by implementing a variant
of OT, called random OT (or ROT for short), which is different than the standard OT (i.e.,
In a
ROT protocol the sender S has a bit s to be sent. At the end of the protocol, the receiver R gets a bit
s 0 such that with probability 1=2 the bit s 0 equals s and with probability 1=2 the bit s 0 is random. The
receiver knows which of the two cases happened and the sender has no idea which is the case. We start
with a formal definition of the ROT primitive:
Definition 7 Random Oblivious Transfer (ROT): Let k be a (confidence) parameter. The sender S initially
has a single input bit s (and the receiver has no input). After the protocol completion the following
holds:
Correctness: With probability greater than outputs a pair of bits
is referred to as the indicator (otherwise R outputs fail). (As usual, the probability is taken over
the coin-tosses of S and R, i.e., r S , r R 2 f0; 1g poly(k) .) Moreover, if the output of R satisfies
(i.e.,
Pr r S ;r R
Sender's Privacy: The probability that R outputs a pair exactly 1=2. That is,
Pr r S ;r R
Receiver's Privacy: Sender S does not get any information about I. (In other words, S has the "same
view" in the case where I = 0 and in the case where I = 1). Formally, for all k, for all s 2 f0; 1g, for
all r S and for all communication comm:
Pr r R (comm(s; r
Transformations of ROT protocols to
are well-known [C-87]. 8 Our ROT protocol is
implemented as follows:
8 Assume that the sender, S, has two bits b0 ; b1 and the receiver, R, has a selection bit c. The players S and R repeat the
following for at most times: at each time S tries to send to R a pair of random bits (s1 ; s2) using two invocations
of ROT. If in both trials the receiver gets the actual bit or in both trials he gets a random bit then they try for another time.
If the receiver got exactly one of s1 and s2 he sends the sender a permutation of the indices - (i.e., either (1; 2) or (2; 1)) such
that s -(c) is known to him. The sender replies with b1 \Phi s . The receiver can now retrieve the bit bc and knows
nothing about the other bit. The sender, by observing - learns nothing about c (since he does not know from the invocation
of the ROT protocols in which invocation the receiver got the actual bit and in which he got a random bit). Thus, we get a
protocol based on the ROT protocol.
a. The sender, S, and the receiver, R, repeat the following until c (and at most
S chooses a pair (a 1 ; a 2 ) out of the two pairs f(1; 0); (0; 1)g, each with probability 1=2.
R chooses a pair (b out of the three pairs f(1; 0); (0; 1); (1; 1)g, each with probability 1=3.
S and R compute (using the black-box) c
b. If c to R. The receiver R outputs
outputs in addition, R outputs s
c. If in all m times no choices (a 1 ; a 2 ) and (b are such that c the protocol halts and R
outputs fail.
To analyze the protocol we observe the following properties of it:
1. If (b This happens in two of the six choices of (a 1 ; a 2 ) and
(b In each of the other four choices we get c Therefore,
the probability of failure in exponentially small.
2. Conditioned on the case c out of the
four remaining cases) and (b
3. In case that (b 1
In this case R outputs I = 1, as needed.
In case that (b 1 1), each of the two choices of (a 1 ; a 2 ) is equally likely and therefore a 1 and
hence also w and s 0 are random (i.e., each has the value 0 with probability 1=2 and the value 1 with
probability 1=2). In this case R outputs I = 0, as needed.
4. As argued in 3, if the protocol does not fail then R knows the "correct" value of I (since he knows
the values of b The sender, on the other hand, based on (a 1 ; a 2 ) cannot know which
of the two equally-probable events, (b happened and therefore he
sees the same view whether we are in the case I = 1 or in the case I = 0.
Properties 1 and 3 above imply the correctness of the ROT protocol while properties 2 and 4 imply the
sender's privacy and receiver's privacy (respectively). Hence, combining the above construction (including
the transformation of the ROT protocol to a
Lemma 2 An OT-channel between two players is realizable given a black-box g, for any non-2-private
function g.
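For completeness, here is a sketch of the transformation from ROT to 1-out-of-2 OT described in Footnote 8, again simulated in a single process; the exact message format (the sender returning both masked bits) is an assumption.

```python
import random

def ot_1_of_2(rot, b0, b1, c, k):
    """Footnote 8's reduction from ROT to 1-out-of-2 OT.  rot(bit) models one
    ROT invocation and returns (received_bit, I), where I = 1 iff the received
    bit equals the sender's bit."""
    for _ in range(k):                                   # at most O(k) attempts
        s = [random.getrandbits(1), random.getrandbits(1)]   # sender's random pair
        t, ind = zip(*(rot(bit) for bit in s))
        if ind[0] == ind[1]:
            continue                                     # R got both or neither: retry
        known = 0 if ind[0] == 1 else 1                  # the ROT index R actually received
        pi = [None, None]                                # R's permutation: pi[j] masks b_j
        pi[c], pi[1 - c] = known, 1 - known              # ensure s[pi[c]] is known to R
        z = [b0 ^ s[pi[0]], b1 ^ s[pi[1]]]               # sender's reply: both masked bits
        return z[c] ^ t[known]                           # R unmasks the selected bit only
    return 'fail'
```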
4.2 The Multi-Party Case (n ?
We have shown in our main lemma (Lemma 1) that any non n-private function g contains an embedded-
OR. Thus, as explained above, we can use the black-box for g to compute the OR of two bits held by
two players P k and P ' (where the other players assist by specifying the fixed arguments given by
our main lemma). Then, based on the ability to compute OR, we showed in Section 4.1 above how any
two players can implement an OT channel between them in a way that satisfies the properties of OT (in
particular, the privacy of the sender and the receiver with respect to each other). However, there is a
subtle difficulty in implementing a private OT-channel in a multi-player system which we must address:
beside the usual properties of an OT channel (as specified by Definition 4), we should guarantee that the
information transmitted between the two owners of the channel will not be revealed to potential listeners
(i.e., the other players). If the OT channel is implemented "physically" then clearly no information
is revealed to the listeners. However, since we implement OT using a black-box to some function g, which
publicly announces each of its outcomes, we must also prove that this reveals no information to the listeners.
That is, the communication comm should be distributed in the same way, for all values of b
The following lemma shows that the security of the OT protocol with respect to listeners is, in fact,
already guaranteed by the basic properties of the OT protocol; namely, the security of the protocol with
respect to both the receiver and the sender.
Lemma 3 Consider any (two-player) OT protocol. For every possible communication comm, the probability
Pr r S ;r R
is the same for all values b 0 and b 1 for the sender and
c for the receiver. (In other words, a listener sees the same probability distribution of communications no
matter what are the inputs held by the sender and the receiver in the OT protocol.)
Proof: Consider the following 8 probabilities corresponding to all possible values of b
1. Pr r S ;r R
2. Pr r S ;r R
3. Pr r S ;r R
4. Pr r S ;r R
5. Pr r S ;r R
7. Pr r S ;r R
8. Pr r S ;r R
The receiver's privacy property implies that the terms (1) and (2) are equal, (3) and (4) are equal, (5) and
are equal, and (7) and (8) are equal. The sender's privacy property implies that the terms (1) and (3)
are equal, (5) and (7) are equal, (2) and (6) are equal, and (4) and (8) are equal. All together, we get that
all 8 probabilities are equal, as desired. 2
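The equality-chaining step above is purely mechanical. As a quick illustration (our own addition, not part of the paper), the following sketch checks that the receiver-privacy pairs and the sender-privacy pairs indeed merge all eight terms into a single equivalence class; the variable names are ours.

```python
# Sanity check: the two sets of pairings connect all eight probability terms.
pairs = [(1, 2), (3, 4), (5, 6), (7, 8),   # receiver's privacy
         (1, 3), (5, 7), (2, 6), (4, 8)]   # sender's privacy
parent = {i: i for i in range(1, 9)}

def find(i):
    # union-find with path halving
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for a, b in pairs:
    parent[find(a)] = find(b)

assert len({find(i) for i in range(1, 9)}) == 1   # all eight terms are equal
```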
5 A Completeness Theorem for Multi-Party Boolean Black-Box Reduction
In this section we state the main theorem and provide its proof. It is based on a protocol that can tolerate
honest-but-curious players, assuming the existence of an OT-channel between each pair of players.
Such protocols appear in [GHY-87, GV-87, K-88, BG-89, GL-90] (these works deal also with malicious
players). That is, by these works we get the following lemma (for self-containment, both a protocol and
its proof of security appear in the appendix):
Lemma 4 Given OT channels between each pair of players, any n-argument function f can be computed
n-privately (in time polynomial in the size of a boolean circuit for f ).
We are now ready to state our main theorem:
Theorem 1 (MAIN:) Let n ≥ 2 and let g be an n-argument boolean function. The function g is complete
if and only if it is not n-private.
Proof:
First, we show that any complete g cannot be n-private. Towards the contradiction let us assume
that there exists such a function g which is n-private and complete. This implies that all functions are
n-private (as instead of using the black-box g the players can evaluate g by using the n-private protocol for
g). This however contradicts the results of [BGW-88, CKu-89] that show the existence of functions which
are not n-private.
Next (and this is where the bulk of the work is) we show how to compute any function n-privately,
given a black-box for any g which is not n-private. Recall that there exists a protocol that can tolerate
honest-but-curious players, assuming the existence of OT-channels (Lemma 4). Also, we have shown
how a black-box, computing any non-private function, can be used to simulate OT channels (Lemma 2
and 3). Combining all together we get the result. 2
The theorem implies that "most" boolean functions are complete. That is, any boolean function which is
not of the XOR-form of [CKu-89] is complete.
6 Conclusions and Further Extensions
6.1 Non-boolean Functions
We have shown that any non-n-private boolean function g is complete. Namely, a black-box for such a
function g can be used for computing any function f n-privately. Finally, let us briefly turn our attention
to non-boolean functions. First, we emphasize that if a function g contains an embedded-OR then it is
still complete even if it is non-boolean (all the arguments go through as they are; in particular note that
Definitions 5 and 6 of embedded-OR apply for the non-boolean case as well). For the non-boolean case,
we can state the following proposition:
Proposition 2 For every n ≥ 2 there exists a (non-boolean) n-argument function g which is not n-private,
yet such that g is not complete.
Proof: The proof for 2-argument g is as follows: there are non-private two-argument functions which do
not contain an embedded OR. Examples of such functions were shown in [Ku-89] (see Figure 2). We now
show that with no embedded-OR one cannot compute the OR function. Assume, towards a contradiction,
that there is some two-argument function f which does not have an embedded-OR, yet it could be used to
compute the OR function. Since f can be used to compute the OR function, we can use it to implement
OT (Lemma 2). Hence, there exists an implementation of OT based on some f which does not have an
embedded-OR. However, [K-91] has shown that for two-argument functions, only the ones that contain an
embedded-OR can be used to implement OT, deriving a contradiction.
For n-argument functions, notice that if we define a function g (on n arguments) to depend only on its
first two arguments, we are back to the 2-argument case, as the resulting function is not n-private. 2
To conclude, we have shown that for the boolean case the notions of completeness and privacy are exactly
complementary, while for the non-boolean case they are not.
Figure 2: A non-private function which does not contain an embedded-OR
6.2 Additional Remarks
In this section, we briefly discuss some possible extensions and easy generalizations of our results.
The first issue that we address is the need for the protocol to specify the permutation - i that is used in
each round i (for mapping the players to the arguments for the black-box g). Note that in our construction,
we use the black-box only for computing the OR function on two arguments. For this, we need to map
some two players P_k and P_ℓ, holding these two arguments, to the special coordinates i, j, guaranteed by
the definition of embedded-OR. Therefore, without loss of generality, the sequence of permutations can
be made oblivious (i.e., independent of the function f computed) at a price of an O(n^2) multiplicative factor
to the rounds (and time). Moreover, at a price of O(n^4) the sequence of permutations can even be made
independent of the non-n-private function g. Finally, note that if g is a symmetric function (which is often
the "interesting" case), then there is no need to permute the inputs to g.
Next, we recall the assumption that the number of arguments of g is the same as the number of
arguments of f (i.e., n). Again, it follows from our constructions that this is not essential to any of our
results: all that is needed is the ability for the two players that wish to compute the OR function
in a certain step, to do so by providing the two distinguished arguments and all the other (fixed)
arguments can be provided by arbitrary players (e.g., all of them by P 1 ).
In our definitions we require perfect privacy. That is, we require that the two distributions in Definition 1
are identical. One can relax this definition of privacy to require only statistical indistinguishability of
distributions or only computational indistinguishability of distributions. For these definitions we refer the
reader to the papers mentioned in the introduction. Note that if f can be computed "privately", under any
of these notions, using a black-box for g and if g can be computed t-privately, under any of these notions,
then also the function f can be computed t-privately, under the appropriate notion of privacy (i.e., the
weaker among the two).
Finally, we note that the negative result of [CKu-89] allows a probability of error; hence, even a weaker
notion of reduction that allows for errors in computing f does not change the family of complete functions.
This impossibility result (i.e., first direction of the main theorem) still holds even if we allow the players
to communicate not only using the black-box but also using other types of communication such as point-
to-point communication channels.
6.3 Open Questions
The above results can be easily extended to show that any boolean g which is complete can also be used for a
private computation of any multi-output function f (i.e., a function whose output is an n-tuple (y_1, ..., y_n),
where y_i is the output that should be given to P_i). This is so, because Lemma 4 still holds. On the other
hand, it is an interesting question to characterize the multi-output functions g that are complete (even in
the boolean case where each output of g is in {0, 1}).
It is not clear how to extend the model and the results to the case of malicious players in its full
generality. Notice, however, that under the appropriate definition of the model, if we are given as a black-box
the two-argument OR function we can still implement private channels (see [KMO-94] for details),
and hence by [BGW-88, CCD-88] can implement any f, n/3-privately with respect to malicious players.
Suppose that we relax the notion of privacy to computational-privacy (as in [Yao-82, GMW-87]).
In such a case, any computationally n-private implementation of an (information-theoretically) non-n-
private (equivalently, complete) boolean function g implies the existence of a one-way function. This is
so, since we have shown that such an implementation of g implies an implementation of OT, which in
turn implies the existence of a one-way function by [IL-89]. However, the best known implementation of
such protocols, for a function g as above, requires trapdoor one-way permutations [GMW-87]. It is an
important question whether there exists an implementation based on a one-way function (or permutation)
for functions without trap-door. This question has only some partial answers. In particular, when one of
the players has super-polynomial power, this is possible [OVY-91]. However, if we focus on polynomial-time
players and protocols, then the result of our paper together with the work of [IR-89] implies that for
all complete functions, if we use only black-box reductions, this is as difficult as separating P from NP .
Thus, using black-box reductions, complete functions seem to be hard to implement (with computational
privacy) without a trapdoor property. Notice, however, that for non-boolean functions we have shown that
there are functions which are not n-private and not complete. It is not known even if these functions can
be implemented without using trapdoor, although the results of [IR-89] do not apply to this case.
Acknowledgments
We wish to thank Oded Goldreich for helpful discussions and very useful com-
ments. We thank Mihir Bellare for pointing out to us in 1991 that the works of Chor, Kushilevitz and
Kilian are complementary and thus imply a special case of our general result. Finally, we thank Amos
Beimel for helpful comments.
--R
Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation
Applications of Oblivious Transfer
Minimum Disclosure Proofs of Knowledge
Information Theoretic Reductions among Disclosure Problems
Multiparty Computation with Faulty Majority
Adaptively Secure Multi-Party Computation
Multiparty Unconditionally Secure Protocols
Private Computations Over the Integers
On the Structure of the Privacy Hierarchy
Equivalence between Two Flavors of Oblivious Transfer
A Randomized Protocol for Signing Contracts
A minimal model for secure computation
Cryptographic Computation: Secure Fault-Tolerant Protocols and the Public-Key Model
Secure Multi-Party Computation
How to Play any Mental Game
How to Solve any Protocol Problem - An efficiency Improvement
Fair Computation of General Functions in Presence of Immoral Majority
The Knowledge Complexity of Interactive Proof Systems
Complete Characterization of Adversaries Tolerable in Secure Multi-Party Computation
On the Limitations of certain One-Way Permutations
Basing Cryptography on Oblivious Transfer
Completeness Theorem for Two-party Secure Computation
Privacy and Communication Complexity
Characterizing Linear Size Circuits in Terms of Privacy
Amortizing Randomness in Private Multiparty Computations
Reducibility and Completeness In Multi-Party Private Computations
A Randomness-Rounds Tradeoff in Private Computation
Fair Games Against an All-Powerful Adversary
Verifiable Secret Sharing and Multiparty Protocols with Honest Majority
How to Exchange Secrets by Oblivious Transfer
Protocols for Secure Computations
--TR
--CTR
Danny Harnik , Moni Naor , Omer Reingold , Alon Rosen, Completeness in two-party secure computation: a computational view, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA | reducibility;oblivious-transfer;completeness;private computation |
345785 | Application-Controlled Paging for a Shared Cache. | We propose a provably efficient application-controlled global strategy for organizing a cache of size k shared among P application processes. Each application has access to information about its own future page requests, and by using that local information along with randomization in the context of a global caching algorithm, we are able to break through the conventional $H_k \sim \ln k$ lower bound on the competitive ratio for the caching problem. If the P application processes always make good cache replacement decisions, our online application-controlled caching algorithm attains a competitive ratio of $2H_{P-1}+2 \sim 2 \ln P$. Typically, P is much smaller than k, perhaps by several orders of magnitude. Our competitive ratio improves upon the 2P+2 competitive ratio achieved by the deterministic application-controlled strategy of Cao, Felten, and Li. We show that no online application-controlled algorithm can have a competitive ratio better than min{HP-1,Hk}, even if each application process has perfect knowledge of its individual page request sequence. Our results are with respect to a worst-case interleaving of the individual page request sequences of the P application processes.We introduce a notion of fairness in the more realistic situation when application processes do not always make good cache replacement decisions. We show that our algorithm ensures that no application process needs to evict one of its cached pages to service some page fault caused by a mistake of some other application. Our algorithm not only is fair but remains efficient; the global paging performance can be bounded in terms of the number of mistakes that application processes make. | Introduction
Caching is a useful technique for obtaining high performance in these days where the
latency of disk access is relatively high. Today's computers typically have several
application processes running concurrently on them, by means of time sharing and
multiple processors. Some processes have special knowledge of their future access
patterns. Cao et al [CFL94a, CFL94b] exploit this special knowledge to develop
effective file caching strategies.
An application providing specific information about its future needs is equivalent
to the application having its own caching strategy for managing its own pages in cache.
We consider the multi-application caching problem, formally defined in Section 3, in
which P concurrently executing application processes share a common cache of size k.
In Section 4 we propose an online application-controlled caching scheme in which
decisions need to be taken at two levels: when a page needs to be evicted from cache,
the global strategy chooses a victim process, but the process itself decides which of
its pages will be evicted from cache.
Each application process may use any available information about its future page
requests when deciding which of its pages to evict. However, we assume no global
information about the interleaving of the individual page request sequences; all our
bounds are with respect to a worst-case interleaving of the individual request sequences
Competitive ratios smaller than the H_k lower bound for classical caching [FKL+91]
are possible for multi-application caching, because each application may employ future
information about its individual page request sequence. 1 The application-controlled
algorithm proposed by Cao, Felten, and Li [CFL94a] achieves a competitive ratio
of $2P + 2$, which we prove in the Appendix. We show in Sections 5-7 that our
new online application-controlled caching algorithm improves the competitive ratio
to $2H_{P-1} + 2$, which is optimal up to a factor of 2 in the realistic scenario
when $P \le k$. (If we use the algorithm of [FKL+91] for the case $P > k$, the resulting
bound is optimal up to a factor of 2 for all P.) Our results are significant since P is
often much smaller than k, perhaps by several orders of magnitude.
In the scenario where application processes occasionally make bad page replacement
decisions (or "mistakes"), we show in Section 8 that our online algorithm incurs
very few page faults globally as a function of the number of mistakes. Our algorithm
is also fair, in the sense that the mistakes made by one processor in its page
replacement decisions do not worsen the page fault rate of other processors.
Classical Caching and Competitive Analysis
The well-known classical caching (or paging) problem deals with a two-level memory
hierarchy consisting of a fast cache of size k and slow memory of arbitrary size. A
Here $H_n$ represents the $n$th harmonic number $H_n = \sum_{i=1}^{n} 1/i$.
sequence of requests to pages is to be satisfied in their order of occurrence. In order
to satisfy a page request, the page must be in fast memory. When a requested page
is not in fast memory, a page fault occurs, and some page must be evicted from fast
memory to slow memory in order to make room for the new page to be put into fast
memory. The caching (or paging) problem is to decide which page must be evicted
from the cache. The cost to be minimized is the number of page faults incurred over
the course of servicing the page requests.
Belady [Bel66] gives a simple optimum offline algorithm for the caching problem;
the page chosen for eviction is the one in cache whose next request is furthest in
the future. In order to quantify the performance of an online algorithm, Sleator
and Tarjan [ST85] introduce the notion of competitiveness, which in the context of
caching can be defined as follows: For a caching algorithm A, let $F_A(\sigma)$ be the number
of page faults generated by A while processing page request sequence $\sigma$. If A is a
randomized algorithm, we let $F_A(\sigma)$ be the expected number of page faults generated
by A on processing $\sigma$, where the expectation is with respect to the random choices
made by the algorithm. An online algorithm A is called c-competitive if for every
page request sequence $\sigma$, we have $F_A(\sigma) \le c \cdot F_{OPT}(\sigma) + b$ for some fixed
constant $b$. The constant c is called the competitive ratio of A. Under this measure, an
online algorithm's performance needs to be relatively good on worst-case page request
sequences in order for the algorithm to be considered good.
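To make these definitions concrete, here is a short sketch (our own illustration, not code from the paper) of Belady's offline rule and of the fault count $F_A(\sigma)$; pages are plain integers and the routine name opt_faults is ours.

```python
def opt_faults(sigma, k):
    """F_OPT(sigma) for a cache of size k, using Belady's furthest-in-future rule."""
    cache, faults = set(), 0
    for t, p in enumerate(sigma):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(q):
                # position of the next request for page q (infinity if never requested again)
                for s in range(t + 1, len(sigma)):
                    if sigma[s] == q:
                        return s
                return float('inf')
            cache.remove(max(cache, key=next_use))   # evict furthest-in-future page
        cache.add(p)
    return faults
```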
Sleator and Tarjan [ST85] show a lower bound of k on the competitive ratio of
any deterministic caching algorithm. Fiat et al [FKL+91] prove a lower bound of $H_k$ if
randomized algorithms are allowed. They also give a simple and elegant randomized
algorithm for the problem that achieves a competitive ratio of $2H_k$. Sleator
and McGeoch [MS91] give a rather involved randomized algorithm that attains the
theoretically optimal competitive ratio of $H_k$.
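For comparison with the offline sketch above, the randomized marking algorithm of [FKL+91] can be sketched as follows (again our own illustration; the name marking_faults is ours): pages are marked when requested, a fault evicts a uniformly random unmarked cached page, and all marks are erased when a fault arrives while every cached page is marked.

```python
import random

def marking_faults(sigma, k, rng=random.Random(0)):
    """Fault count of the randomized marking algorithm (2*H_k-competitive)."""
    cache, marked, faults = set(), set(), 0
    for p in sigma:
        if p in cache:
            marked.add(p)
            continue
        faults += 1
        if len(cache) == k:
            if not cache - marked:              # every cached page is marked: new phase
                marked.clear()
            cache.remove(rng.choice(list(cache - marked)))   # random unmarked victim
        cache.add(p)
        marked.add(p)
    return faults
```

Running marking_faults and opt_faults on the same request sequence gives an empirical feel for the ratio $F_A(\sigma)/F_{OPT}(\sigma)$.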
3 Multi-application Caching Problem
In this paper we take up the theoretical issue of how best to use application pro-
cesses' knowledge about their individual future page requests so as to optimize caching
performance. For analysis purposes we use an online framework similar to that of
[FKL+91]. As mentioned before, the caching algorithms in [FKL+91]
use absolutely no information about future page requests. Intuitively, knowledge
about future page requests can be exploited to decide which page to evict from the
cache at the time of a page fault. In practice an application often has advance knowledge
of its individual future page requests. Cao, Felten and Li [CFL94a, CFL94b]
introduced strategies that try to combine the advance knowledge of the processors in
order to make intelligent page replacement decisions.
In the multi-application caching problem we consider a cache capable of
storing k pages that is shared by P different application processes, which
we denote $P_1, P_2, \ldots, P_P$. Each page in cache and memory belongs to
exactly one process. The individual request sequences of the processes
may be interleaved in an arbitrary (worst-case) manner.
Worst-case measure is often criticized when used for evaluating caching algorithms
for individual application request sequences [BIRS91, KPR92], but we feel that the
worst-case measure is appropriate for considering a global paging strategy for a cache
shared by concurrent application processes that have knowledge of their individual
page request sequences. The locality of reference within each application's individual
request sequence is accounted for in our model by each application process's knowledge
of its own future requests. The worst-case nature of our model is that it assumes
nothing about the order and durations of time for which application processes are
active. In this model our worst-case measure of competitive performance amounts to
considering a worst-case interleaving of individual sequences.
The approach of Cao et al [CFL94a] is to have the kernel deterministically choose
the process owning the least recently used page at the time of a page fault and ask that
process to evict a page of its choice (which may be different from the least recently
used page). In Appendix A we show under the assumption that processes always
make good page replacement decisions that Cao et al's algorithm has a competitive
ratio of $2P + 2$. The algorithm we present in the next section and
analyze thereafter improves the competitive ratio to $2H_{P-1} + 2$.
4 Online Algorithm for Multi-application Caching
Our algorithm is an online application-controlled caching strategy for an operating
system kernel to manage a shared cache in an efficient and fair manner. We show in the
subsequent sections that the competitive ratio of our algorithm is $2H_{P-1} + 2$,
and that it is optimal to within a factor of about 2 among all online algorithms. (If
$P > k$, we can use the algorithm of [FKL+91].)
On a page fault, we first choose a victim process and then ask it to evict a
suitable page. Our algorithm can detect mistakes made by application processes,
which enables us to reprimand such application processes by having them pay for
their mistakes. In our scheme, we mark pages as well as processes in a systematic
way while processing the requests that constitute a phase.
Definition 1 The global sequence of page requests is partitioned into a consecutive
sequence of phases; each phase is a sequence of page requests. At the beginning of
each phase, all pages and processes are unmarked. A page gets marked during a phase
when it is requested. A process is marked when all of its pages in cache are marked.
A new phase begins when a page is requested that is not in cache and all the pages
in cache are marked. A page accessed during a phase is called clean with respect to
that phase if it was not in the online algorithm's cache at the beginning of a phase.
A request to a clean page is called a clean page request. Each phase always begins
with a clean page request.
Our marking scheme is similar to the one in [FKL + 91] for the classical caching
problem. However, unlike the algorithm in [FKL + 91], the algorithm we develop is a
non-marking algorithm, in the sense that our algorithm may evict marked pages. In
addition, our notion of phase in Definition 1 is different from the notion of phase
in [FKL + 91], which can be looked upon as a special case of our more general notion.
We put the differences into perspective in Section 4.1.
Our algorithm works as follows when a page p belonging to process P r is requested:
1. If p is in cache:
(a) If p is not marked, we mark it.
(b) If process P r has no unmarked pages in cache, we mark P r .
2. If p is not in cache:
(a) If process P r is unmarked and page p is not a clean page with respect
to the ongoing phase (i.e, P r has made a mistake earlier in the phase by
evicting p) then:
i. We ask process P r to make a page replacement decision and evict one
of its pages from cache in order to bring page p into cache. We mark
page p and also mark process P r if it now has no unmarked pages in
cache.
(b) Else (process P r is marked or page p is a clean page, or both):
i. If all pages in cache are marked, we remove marks from all pages
and processes, and we start a new phase, beginning with the current
request for p.
ii. Let S denote the set of unmarked processes having pages in the cache.
We randomly choose a process P_e from S, each process being chosen
with a uniform probability 1/|S|.
iii. We ask process P e to make a page replacement decision and evict one
of its pages from cache in order to bring page p into cache. We mark
page p and also mark process P e if it now has no unmarked page in
cache.
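The steps above can be turned into a small simulator. The sketch below is our own simplified illustration, not code from the paper: every process is assumed to make a good decision by always evicting its farthest unmarked cached page, so the mistake branch of Step 2(a) never fires, and process marks are kept implicit (a process is marked exactly when none of its cached pages is unmarked). The function name app_controlled_faults and the input representation are ours.

```python
import random

def app_controlled_faults(sigma, owner, k, rng=random.Random(0)):
    """Fault count of the online algorithm when every process makes good decisions.

    sigma : interleaved global request sequence of page ids
    owner : dict mapping each page id to the process that owns it
    k     : cache size
    """
    cache, marked, faults = set(), set(), 0

    def unmarked_processes():
        # processes that still own at least one unmarked cached page
        return sorted({owner[q] for q in cache if q not in marked})

    def farthest_unmarked(proc, t):
        # the process consults its own future requests (its private knowledge)
        own = [q for q in cache if owner[q] == proc and q not in marked]
        def next_use(q):
            for s in range(t + 1, len(sigma)):
                if sigma[s] == q:
                    return s
            return float('inf')
        return max(own, key=next_use)

    for t, p in enumerate(sigma):
        if p in cache:                          # Step 1: hit, just mark the page
            marked.add(p)
            continue
        faults += 1
        if len(cache) < k:                      # cache still filling up, no eviction
            cache.add(p); marked.add(p)
            continue
        if all(q in marked for q in cache):     # Step 2(b)i: start a new phase
            marked.clear()
        victim = rng.choice(unmarked_processes())    # Step 2(b)ii: random unmarked process
        cache.remove(farthest_unmarked(victim, t))   # Step 2(b)iii: a good decision
        cache.add(p)
        marked.add(p)
    return faults
```

Comparing its fault count against the opt_faults sketch given earlier, on randomly interleaved per-process sequences, gives an empirical feel for the $2H_{P-1}+2$ bound proved below.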
Note that in Steps 2(a)i and 2(b)iii our algorithm seeks paging decisions from
application processes that are unmarked. Consider an unmarked process P i that has
been asked to evict a page in a phase, and consider P i 's pages in cache at that time.
Let u i denote the farthest unmarked page of process P i ; that is, u i is the unmarked
page of process P i whose next request occurs furthest in the future among all of P i 's
unmarked cached pages. Note that process P i may have marked pages in cache whose
next requests occur after the request for u i .
Definition 2 The good set of an unmarked process P_i at the current point in the
phase is the set consisting of its farthest unmarked page u i in cache and every marked
page of P i in cache whose next request occurs after the next request for page u i . A
page replacement decision made by an unmarked process P i in either Step 2(a)i or
Step 2(b)iii that evicts a page from its good set is regarded as a good decision with
respect to the ongoing phase. Any page from the good set of P i is a good page for
eviction purposes at the time of the decision. Any decision made by an unmarked
process P i that is not a good decision is regarded as a mistake by process P i .
If a process P i makes a mistake by evicting a certain page from cache, we can
detect the mistake made by P i if and when the same page is requested again by P i in
the same phase while P i is still unmarked.
In Sections 6 and 7 we specifically assume that application processes are always
able to make good decisions about page replacement. In Section 8 we consider fairness
properties of our algorithm in the more realistic scenario where processes can make
mistakes.
4.1 Relation to Previous Work on Classical Caching
Our marking scheme approach is inspired by a similar approach for the classical
caching problem in [FKL + 91]. However, the phases defined by our algorithm are
significantly different in nature from those in [FKL + 91]. Our phase ends when there
are k distinct marked pages in cache; more than k distinct pages may be requested in
the phase. The phases depend on the random choices made by the algorithm and are
probabilistic in nature. On the other hand, a phase defined in [FKL+91] ends as soon as
exactly k distinct pages have been accessed, so that given the input request sequence,
the phases can be determined independently of the caching algorithm being used.
The definition in [FKL + 91] is suited to facilitate the analysis of online caching
algorithms that never evict marked pages, called marking algorithms. In the case
of marking algorithms, since marked pages are never evicted, as soon as k distinct
pages are requested, there are k distinct marked pages in cache. This means that the
phases determined by our definition for the special case of marking algorithms are
exactly the same as the phases determined by the definition in [FKL + 91]. Note that
our algorithm is in general not a marking algorithm since it may evict marked pages.
While marking algorithms always evict unmarked pages, our algorithm always calls
on unmarked processes to evict pages; the actual pages evicted may be marked.
5 Lower Bounds for OPT and Competitive Ratio
In this section we prove that the competitive ratio of any online caching algorithm
can be no better than minfH g. Let us denote by OPT the optimal offline
algorithm for caching that works as follows: When a page fault occurs, OPT evicts
the page whose next request is furthest in the future request sequence among all pages
in cache.
As in [FKL + 91], we will compare the number of page faults generated by our
online algorithm during a phase with the number of page faults generated by OPT
during that phase. We express the number of page faults as a function of the number
of clean page requests during the phase. Here we state and prove a lower bound on
the (amortized) number of page faults generated by OPT in a single phase. The
proof is a simple generalization of an analogous proof in [FKL + 91], which deals only
with the deterministic phases of marking algorithms.
Lemma 1 Consider any phase $\sigma_i$ of our online algorithm in which $\ell_i$ clean pages are
requested. Then OPT incurs an amortized cost of at least $\ell_i/2$ on the requests made
in that phase.²
Proof: Let $d_i$ be the number of clean pages in OPT's cache at the beginning of phase
$\sigma_i$; that is, $d_i$ is the number of pages requested in $\sigma_i$ that are in OPT's cache but not
in our algorithm's cache at the beginning of $\sigma_i$. Let $d_{i+1}$ represent the same quantity
for the next phase $\sigma_{i+1}$. Suppose that $d_m$ of the $d_{i+1}$ clean pages in
OPT's cache at the beginning of $\sigma_{i+1}$ are marked during $\sigma_i$ and $d_u$ of them are not
marked during $\sigma_i$. Note that $d_{i+1} = d_m + d_u$.
Of the $\ell_i$ clean pages requested during $\sigma_i$, only $d_i$ are in OPT's cache, so OPT
generates at least $\ell_i - d_i$ page faults during $\sigma_i$. On the other hand, while processing
the requests in $\sigma_i$, OPT cannot use $d_u$ of the cache locations, since at the beginning
of $\sigma_{i+1}$ there are $d_u$ pages in OPT's cache that are not marked during $\sigma_i$. (These $d_u$
pages would have to be in OPT's cache before $\sigma_i$ even began.) There are k marked
pages in our algorithm's cache at the end of $\sigma_i$, and there are $d_m$ other pages marked
during $\sigma_i$ that are out of our algorithm's cache. So the number of distinct pages
requested during $\sigma_i$ is at least $d_m + k$. Hence, OPT serves at least $d_m + k$ requests
corresponding to $\sigma_i$ without using $d_u$ of the cache locations. This means that OPT
generates at least $(d_m + k) - (k - d_u) = d_m + d_u = d_{i+1}$ page faults during $\sigma_i$. Therefore, the number
of faults OPT generates on $\sigma_i$ is at least
$$\max\{\ell_i - d_i,\; d_{i+1}\} \;\ge\; \frac{\ell_i - d_i + d_{i+1}}{2}.$$
Let us consider the first j phases. In the ith phase of the sequence, OPT has
at least $(\ell_i - d_i + d_{i+1})/2$ faults. In the first phase, OPT generates $\ell_1$ faults and
$\ell_1 \ge k$. Thus the sum of OPT's faults over all phases is at least
$$\ell_1 + \sum_{i=2}^{j} \frac{\ell_i - d_i + d_{i+1}}{2} \;\ge\; \frac{1}{2}\sum_{i=1}^{j}\ell_i + \frac{\ell_1 - d_2}{2} \;\ge\; \frac{1}{2}\sum_{i=1}^{j}\ell_i,$$
where we use the fact that $d_2 \le k \le \ell_1$ and $d_{j+1} \ge 0$. Thus by definition, the amortized number of
faults OPT generates over any phase $\sigma_i$ is at least $\ell_i/2$.
By "amortized" in Lemma 1 we mean for each j - 1 that the number of page faults made by
OPT while serving the first j phases is at least
is the number of clean page
requests in the ith phase.
Next we will construct a lower bound for the competitive ratio of any randomized
online algorithm even when application processes have perfect knowledge of their
individual request sequences. The proof is a straightforward adaptation of the proof
of the H k lower bound for classical caching [FKL + 91]. However, in the situation at
hand, the adversary has more restrictions on the request sequence that he can use to
prove the lower bound, thereby resulting in a lowering of the lower bound.
Theorem 1 The competitive ratio of any randomized algorithm for the multi-application
caching problem is at least $\min\{H_{P-1}, H_k\}$, even if application processes
have perfect knowledge of their individual request sequences.
Proof: The $H_k$ lower bound on the classical caching problem from [FKL+91]
is directly applicable by considering the case where each process accesses only one
page each. This gives a lower bound of $H_k$ on the competitive ratio.
In the case when $P \le k$, we construct a multi-application caching problem based
on the nemesis sequence used in [FKL+91] for classical caching. In [FKL+91] a lower
bound of $H_{k'}$ is proved for the special case of a cache of size $k'$ and a total of $k'+1$
pages, which we denote $c_1, c_2, \ldots, c_{k'+1}$. All but one of the pages can fit in cache
at the same time. Our corresponding multi-application caching problem consists
of $P = k'+1$ processes $P_1, P_2, \ldots, P_{k'+1}$, so that there is one process
corresponding to each page of the classical caching lower bound instance for a $k'$-sized
cache. Process $P_i$ owns $r_i$ pages. The total number of
pages among all the processes is $k+1$, where $k$ is the cache size; that is, all but one
of the pages among all the processes can fit in memory simultaneously.
In the instance of the multi-application caching problem we construct, the request
sequence for each process $P_i$ consists of repetitions of the double round-robin sequence
$$p_{i,1}, p_{i,2}, \ldots, p_{i,r_i}, p_{i,1}, p_{i,2}, \ldots, p_{i,r_i} \qquad (1)$$
of length $2r_i$, where $p_{i,1}, \ldots, p_{i,r_i}$ denote the $r_i$ pages of process $P_i$. We refer to the double round-robin sequence (1) as a touch of process $P_i$.
When the adversary generates requests corresponding to a touch of process $P_i$, we
say that it "touches process $P_i$."
Given an arbitrary adversarial sequence for the classical caching problem described
above, we construct an adversarial sequence for the multi-application caching problem
by replacing each request for page c i in the former problem by a touch of process
in the latter problem. We can transform an algorithm for this instance of
multi-application caching into one for the classical caching problem by the following
correspondence: If the multi-application algorithm evicts a page from process P j
while servicing the touch of process P i , the classical caching algorithm evicts page c j
in order to service the request to page c i . In Lemma 2 below, we show that there
is an optimum online algorithm for the above instance of multi-application caching
that never evicts a page belonging to process P i while servicing a fault on a request
for a page from process P i . Thus the transformation is valid, in that page c i is always
resident in cache after the page request to $c_i$ is serviced. This reduction immediately
implies that the competitive ratio for this instance of multi-application caching must
be at least $H_{k'} = H_{P-1}$.
Lemma 2 For the above instance of multi-application caching, any online algorithm
A can be converted into an online algorithm A′ that is at least as good in
an amortized sense and that has the property that all the pages for process P_i are in
cache immediately after a touch of P_i is processed.
Proof: Intuitively, the double round-robin sequences force an optimal online algorithm
to service the touch of a process by evicting a page belonging to another process.
We construct online algorithm A′ from A in an online manner. Suppose that
both A and A′ fault during a touch of process P_i. If algorithm A evicts a page of P_j,
for some j ≠ i, then A′ does the same. If algorithm A evicts a page of P_i during the
first round-robin while servicing a touch of P_i, then there will be a page fault during
the second round-robin. If A then evicts a page of another process during the second
round-robin, then A′ evicts that page during the first round-robin and incurs no fault
during the second round-robin. The first page fault of A was wasted; the other page
could have been evicted instead during the first round-robin. If instead A evicts another
page of P_i during the second round-robin, then A′ evicts an arbitrary page of
another process during the first round-robin, and A′ incurs no page fault during the
second round-robin. Thus, if A evicts a page of P_i, it incurs at least one more page
fault than does A′.
If A faults during a touch of P_i, but A′ doesn't, there is no paging decision for A′
to make. If A does not fault during a touch of P_i, but A′ does fault, then A′ evicts
the page that is not in A's cache. The page fault for A′ is charged to the extra page
fault that A incurred earlier when A′ evicted one of P_i's pages.
Thus the number of page faults that A′ incurs is no more than the number of page
faults that A incurs. By construction, all pages of process P_i are in algorithm A′'s
cache immediately after a touch of process P_i.
The double round-robin sequences in the above reduction can be replaced by single
round-robin sequences by redoing the explicit lower bound argument of [FKL+91].
6 Holes
In this section, we introduce the notion of holes, which plays a key role in the analysis
of our online caching algorithm. In Section 6.2, we mention some crucial properties of
holes of our algorithm under the assumption that applications always make good page
replacement decisions. These properties are also useful in bounding the page faults
that can occur in a phase when applications make mistakes in their page replacement
decisions.
Definition 3 The eviction of a cached page at the time of a page fault on a clean
page request is said to create a hole at the evicted page. Intuitively, a hole is the lack
of space for some page, so that that page's place in cache contains a hole and not the
page. If page $p_1$ is evicted for servicing the clean page request, page $p_1$ is said to be
associated with the hole. If page $p_1$ is subsequently requested and another page $p_2$
is evicted to service the request, the hole is said to move to $p_2$, and now $p_2$ is said
to be associated with the hole. And so on, until the end of the phase. We say that
hole h moves to process P_i to mean that the hole h moves to some page p belonging
to process P_i.
6.1 General observations about holes
All requests to clean pages during a phase are page faults and create holes. The
number of holes created during a particular phase equals the number of clean pages
requested during that phase. Apart from clean page requests, requests to holes also
cause page faults to occur. By a request to a hole we mean a request for the page
associated with that hole. As we proceed down the request sequence during a phase,
the page associated with a particular hole varies with time. Consider a hole h that
is created at a page $p_1$ that is evicted to serve a request for clean page $p_c$. When a
request is made for page $p_1$, some page $p_2$ is evicted, and h moves to $p_2$. Similarly
when page $p_2$ is requested, h moves to some $p_3$, and so on. Let $p_1, p_2, \ldots, p_m$ be
the temporal sequence of pages all associated with hole h in a particular phase, such
that page $p_1$ is evicted when clean page $p_c$ is requested, page $p_i$ is
evicted when $p_{i-1}$ is requested, and the request for $p_m$ falls in the next phase. Then
the number of faults incurred in the particular phase being considered due to requests
to h is $m - 1$.
6.2 Useful properties of holes
In this section we make the following observations about holes under the assumption
that application processes make only good decisions.
Lemma 3 Let u i be the farthest unmarked page in cache of process P i at some point
in a phase. Then process P i is a marked process by the time the request for page u i
is served.
Proof: This follows from the definition of farthest unmarked page and the nature of
the marking scheme employed in our algorithm.
Lemma 4 Suppose that there is a request for page p i , which is associated with hole h.
Suppose that process P i owns page p i . Then process P i is already marked at the time
of the present request for page p i .
Proof: Page $p_i$ became associated with hole h because process P_i evicted page $p_i$ when
asked to make a page replacement decision, in order to serve either a clean request or
a page fault at the previous page associated with h. In either case, page p i was a good
page at the time process P i made the particular paging decision. Since process P i was
unmarked at the time the decision was made, p i was either the farthest unmarked
page of process P i then or some marked page of process P i whose next request is after
the request for P i 's farthest unmarked page. By Lemma 3, process P i is a marked
process at the time of the request for page p i .
Lemma 5 Suppose that page p i is associated with hole h. Let P i denote the process
owning page p i . Suppose page p i is requested at some time during the phase. Then
hole h does not move to process P i subsequently during the current phase.
Proof: The hole h belongs to process P_i. By Lemma 4, when a request is made to h,
process P_i is already marked and will remain marked until the end of the phase. Since only
unmarked processes are chosen to evict pages, a request for h thereafter cannot
result in the eviction of any page belonging to P_i, so a hole can never move to a process
more than once.
Let there be R unmarked processes at the time of a request to a hole h. For
any unmarked process $P_j$, let $u_j$ denote the farthest unmarked page of
process $P_j$ at the time of the request to hole h. Without loss of generality, let us
relabel the processes so that
$$u_1, u_2, \ldots, u_R \qquad (2)$$
is the temporal order of the first subsequent appearance of the pages $u_j$ in the global
page request sequence.
Lemma 6 In the situation described in (2) above, suppose during the page request
for hole h that the hole moves to a good page $p_i$ of unmarked process $P_i$ to serve the
current request for h. Then h can never move to any of the processes $P_1, P_2, \ldots, P_i$
during the current phase.
Proof: The first subsequent request for the good page $p_i$ that $P_i$ evicts, by definition,
must be the same as or must be after the first subsequent request for the farthest unmarked
page $u_i$. So process $P_i$ will be marked by the next time hole h is requested, by
Lemma 4. On the other hand, the first subsequent requests of the respective farthest
unmarked pages $u_1, \ldots, u_{i-1}$ appear before that of page $u_i$. Thus, by Lemma 3, the
processes $P_1, \ldots, P_{i-1}$ are already marked before the next time hole h (page $p_i$)
gets requested and will remain marked for the remainder of the phase. Hence, by the
fact that only unmarked processes get chosen, hole h can never move to any of the
processes $P_1, P_2, \ldots, P_i$ during the current phase.
7 Competitive Analysis of our Online Algorithm
Our main result is Theorem 2, which states that our online algorithm for the multi-application
caching problem is roughly 2 ln P-competitive, assuming application processes
always make good decisions (e.g., if each process knows its own future page
requests). By the lower bound of Theorem 1, it follows that our algorithm is optimal
in terms of competitive ratio up to a factor of 2.
Theorem 2 The competitive ratio of our online algorithm in Section 4 for the multi-application
caching problem, assuming that good evictions are always made, is at most
$2H_{P-1} + 2$. Our competitive ratio is within a factor of about 2 of the best possible
competitive ratio for this problem.
The rest of this section is devoted to proving Theorem 2. To count the number of
faults generated by our algorithm in a phase, we make use of the properties of holes
from the previous section. If $\ell$ requests are made to clean pages during a phase, there
are $\ell$ holes that move about during the phase. We can count the number of faults
generated by our algorithm during the phase as
$$\ell + \sum_{i=1}^{\ell} N_i, \qquad (3)$$
where $N_i$ is the number of times hole $h_i$ is requested during the phase. Assuming
good decisions are always made, we will now prove for each phase and for any hole $h_i$
that the expected value of $N_i$ is bounded by $H_{P-1}$.
Consider the first request to a hole h during the phase. Let $R_h$ be the number of
unmarked processes at that point of time. Let $C_{R_h}$ be the random variable associated
with the number of page faults due to requests to hole h during the phase.
Lemma 7 The expected number $E(C_{R_h})$ of page faults due to requests to hole h is at
most $H_{R_h}$.
Proof: We prove this by induction over $R_h$. We have $E(C_0) = 0$ and $E(C_1) \le 1 = H_1$.
Suppose the claim holds for all values smaller than $R_h$. Using the same terminology and
notation as in Lemma 6, let the farthest unmarked pages of the $R_h$ unmarked processes
at the time of the request for h appear in the temporal order $u_1, u_2, \ldots, u_{R_h}$
in the global request sequence. We renumber the $R_h$ unmarked processes for convenience
so that page $u_i$ is the farthest unmarked page of unmarked process $P_i$.
When the hole h is requested, our algorithm randomly chooses one of the $R_h$
unmarked processes, say, process $P_i$, and asks process $P_i$ to evict a suitable page.
Under our assumption, the hole h moves to some good page $p_i$ of process $P_i$. From
Lemmas 5 and 6, if our algorithm chooses unmarked process $P_i$ so that its good
page $p_i$ is evicted, then at most $R_h - i$ processes remain unmarked the next time h is
requested. Since each of the $R_h$ unmarked processes is chosen with a probability of
$1/R_h$, we have
$$E(C_{R_h}) \;\le\; 1 + \frac{1}{R_h}\sum_{i=1}^{R_h} E(C_{R_h - i}) \;=\; H_{R_h}.$$
The last equality follows easily by induction and algebraic manipulations.
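The recurrence in the display can also be checked mechanically. The snippet below (our own addition, not part of the paper) solves it with exact rational arithmetic and confirms that it reproduces the harmonic numbers.

```python
from fractions import Fraction

def harmonic(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

# f(R) = 1 + (1/R) * sum_{i=1..R} f(R - i), with f(0) = 0, equals H_R exactly.
f = {0: Fraction(0)}
for R in range(1, 26):
    f[R] = 1 + Fraction(1, R) * sum(f[R - i] for i in range(1, R + 1))
    assert f[R] == harmonic(R)
print(f[5])   # 137/60, which is H_5
```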
Now let us complete the proof of Theorem 2. By Lemma 4, the maximum possible
number of unmarked processes at the time a hole h is first requested is $P - 1$. This
implies that the average number of times any hole can be requested during a phase
is bounded by $H_{P-1}$. By (3), the total number of page faults during the phase is
at most $\ell(1 + H_{P-1})$ on average. We have already shown in Lemma 1 that the OPT algorithm
incurs an amortized cost of at least $\ell/2$ for the requests made in the phase. Therefore,
the competitive ratio of our algorithm is bounded by $\ell(1 + H_{P-1})/(\ell/2) = 2H_{P-1} + 2$.
Applying the lower bound of Theorem 1 completes the proof.
8 Application-Controlled Caching with Fairness
In this section we analyze our algorithm's performance in the realistic scenario where
application processes can make mistakes, as defined in Definition 2. We bound the
number of page faults it incurs in a phase in terms of page faults caused by mistakes
made by application processes during that phase. The main idea here is that if an
application process P i commits a mistake by evicting a certain page p and then during
the same phase requests page p while process P i is still unmarked, our algorithm makes
process pay for the mistake in Step 2(a)i.
On the other hand, if page p's eviction from process P i was a mistake, but process
marked when page p is later requested in the same phase, say, at time t,
then process P i 's mistake is ``not worth detecting'' for the following reason: Since
evicting page p was a mistake, it must mean that at the time t 1 of p's eviction, there
existed a set U of one or more unmarked pages of process P i in cache whose subsequent
requests appear after the next request for page p. Process P i is marked at the
time of the next request for p, implying that all pages in U were evicted by P i at
some times t 2 , t 3 , . t jU j+1 after the mistake of evicting p. If instead at time t 1 , t 2 ,
. t jU j+1 process P i makes the specific good paging decisions of evicting the farthest
unmarked pages, the same set fpg [ U of pages will be out cache at time t. In our
notion of fairness we choose to ignore all such mistakes and consider them "not worth
detecting."
Definition 4 During an ongoing phase, any page fault corresponding to a request
for a page p of an unmarked process P i is called an unfair fault if the request for
page p is not a clean page request. All faults during the phase that are not unfair are
called fair faults.
The unfair faults are precisely those page faults which are caused by mistakes
considered "worth detecting." We state the following two lemmas that follow trivially
from the definitions of mistakes, good decisions, unfair faults, and fair faults.
APPLICATION-CONTROLLED CACHING WITH FAIRNESS 13
Lemma 8 During a phase, all page requests that get processed in Step 2(a)i of our
algorithm are precisely the unfair faults of that phase. That is, unfair faults correspond
to mistakes that get caught in Step 2(a)i of our algorithm.
Lemma 9 All fair faults are precisely those requests that get processed in Step 2(b)iii.
We now consider the behavior of holes in the current mistake-prone scenario.
Lemma 10 The number of holes in a phase equals the number of clean pages requested
in the phase.
Lemma 11 Consider a hole h associated with a page p of a process P i . If a request
for h is an unfair fault, process P i is still unmarked and the hole h moves to some
other page belonging to process P i . If a request for hole h is a fair fault, then process P i
is already marked and the hole h can never move to process P i subsequently during
the phase.
Proof: If the request for hole h is an unfair fault, then by definition process P_i is
unmarked and by Lemma 8, h moves to some other page p 0 of process P i . If the
request for h is a fair fault, then by definition and the fact that the request for h is
not a clean page request, process P i is marked. Since our algorithm never chooses a
marked process for eviction, it follows that h can never visit process P i subsequently
during the phase.
During a phase, a hole h is created in some process, say $P_1$, by some clean page
request. It then moves around zero or more times within process $P_1$ on account of
mistakes, until a request for hole h is a fair fault, upon which it moves to some
other process $P_2$, never to come back to process $P_1$ during the phase. It behaves
similarly in process $P_2$, and so on up to the end of the phase. Let $T_h$ denote the total
number of faults attributed to requests to hole h during a phase, of which $F_h$ faults
are fair faults and $U_h$ faults are unfair faults. We have $T_h = F_h + U_h$.
By Lemma 11 and the same proof techniques as those in the proofs of Lemma 7
and Theorem 2, we can prove the following key lemma:
Lemma 12 The expected number $E(F_h)$ of page requests to hole h during a phase
that result in fair faults is at most $H_{P-1}$.
By Lemma 10, our algorithm incurs at most $\ell + \sum_h T_h = \ell + \sum_h F_h + \sum_h U_h$
page faults in a phase with $\ell$ clean page requests. The expected value of this quantity
is at most $\ell(H_{P-1} + 1) + \sum_h U_h$, by Lemma 12.
The expression $\sum_h U_h$ is the number of unfair faults, that is, the number of
mistakes considered "worth detecting." Our algorithm is very efficient in that the
number of unfair faults is an additive term. For any phase $\phi$ with $\ell$ clean requests,
we denote $\sum_h U_h$ as $M_\phi$.
Theorem 3 The number of faults in a phase $\phi$ with $\ell$ clean page requests and $M_\phi$
unfair faults is bounded by $\ell(1 + H_{P-1}) + M_\phi$ in expectation. At the time of each of the $M_\phi$ unfair
faults, the application process that makes the mistake that causes the fault must evict
a page from its own cache. No application process is ever asked to evict a page to
service an unfair fault caused by some other application process.
9 Conclusions
Cache management strategies are of prime importance for high performance computing.
We consider the case where there are P independent processes running on the
same computer system and sharing a common cache of size k. Applications often have
advance knowledge of their page request sequences. In this paper we have addressed the
issue of exploiting this advance knowledge to devise intelligent strategies to manage
the shared cache, in a theoretical setting. We have presented a simple and elegant
application-controlled caching algorithm for the multi-application caching problem
that achieves a competitive ratio of $2H_{P-1} + 2$. Our result is a significant improvement
over the competitive ratios of $2P + 2$ for multi-application caching
and $\Theta(H_k)$ for classical caching, since the cache size k is often orders of magnitude
greater than P. We have proven that no online algorithm for this problem can have a
competitive ratio smaller than $\min\{H_{P-1}, H_k\}$, even if application processes have perfect
knowledge of individual request sequences. We conjecture that an upper bound
of $H_{P-1}$ can be proven, up to second order terms, perhaps using techniques from
[MS91], although the resulting algorithm is not likely to be practical.
Using our notion of mistakes we are able to consider a more realistic setting when
application processes make bad paging decisions and show that our algorithm is a fair
and efficient algorithm in such a situation. No application needs to pay for some other
application process's mistake, and we can bound the global caching performance of our
algorithm in terms of the number of mistakes. Our notions of good page replacement
decisions, mistakes, and fairness in this context are new.
One related area of possible future work is to consider alternative models to our
model of worst-case interleaving. Another interesting area would be consider caching
in a situation where some applications have good knowledge of future page requests
while other applications have no knowledge of future requests. We could also consider
pages shared among application processes.
--R
A study of replacement algorithms for virtual storage com- puters
Competitive paging algorithms
Implementation and performance of application-controlled file caching
On competitive algorithms for paging problems.
Markov paging.
A strongly competitive randomized paging algorithm.
Amortized efficiency of list update and paging rules.
--TR
--CTR
Guy E. Blelloch , Phillip B. Gibbons, Effectively sharing a cache among threads, Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures, June 27-30, 2004, Barcelona, Spain | application-controlled;competitive;online;caching;randomized |
345884 | Minimizing Expected Loss of Hedging in Incomplete and Constrained Markets. | We study the problem of minimizing the expected discounted loss $$ E\left[e^{-\int_0^Tr(u)du}( C- X^{x,\pi}(T))^+\right] $$ when hedging a liability C at time t=T, using an admissible portfolio strategy $\pi(\cdot)$ and starting with initial wealth x. The existence of an optimal solution is established in the context of continuous-time Ito process incomplete market models, by studying an appropriate dual problem. It is shown that the optimal strategy is of the form of a knock-out option with payoff C, where the "domain of the knock-out" depends on the value of the optimal dual variable. We also discuss a dynamic measure for the risk associated with the liability C, defined as the supremum over different scenarios of the minimal expected loss of hedging C. | Introduction
In a complete financial market which is free of arbitrage opportunities, any sufficiently integrable
random payoff (contingent claim) C, whose value has to be delivered and is known
at time $t = T$, can be hedged perfectly: starting with a large enough initial capital x, an
agent can find a trading strategy $\pi(\cdot)$ that will allow his wealth $X^{x,\pi}(\cdot)$ to hedge the liability
C without risk at time $t = T$, that is,
$$X^{x,\pi}(T) \ge C \quad \text{almost surely},$$
while maintaining "solvency" throughout $[0, T]$. (For an overview of standard results in
complete and some incomplete markets in continuous-time, Ito processes models, see, for
example, Cvitani'c 1997). This is either no longer possible or too expensive to accomplish in
a market which is incomplete due to various "market frictions", such as: insufficient number
of assets available for investment, transaction costs, portfolio constraints, problems with
liquidity, presence of a "large investor", and so on. In this paper we concentrate on the
case in which incompleteness arises due to some assets not being available for investment,
and the more general case of portfolio constraints. Popular approaches to the problem of
hedging a claim C in such contexts have been to either maximize the expected utility of the
difference $-D := X^{x,\pi}(T) - C$, or to minimize the risk of D. In particular, one of the most
studied approaches is to minimize $E[D^2]$, the so-called quadratic hedging of Föllmer-Schweizer
type (for recent results and references see Pham, Rheinlaender and Schweizer 1996,
for example). An obvious disadvantage of this approach is that one is penalized for high
profits, and not just high losses. On the other hand, Artzner, Delbaen, Eber and Heath
have shown in a static hedging setting that the only measure of risk that satisfies
certain natural "coherence" properties is of the type $E[\bar D^+]$ (or a supremum of these over
a set of probability measures), where $\bar D^+$ is the discounted value of the positive part of
D. Motivated by this work, Cvitanić and Karatzas (1998) solve the problem of minimizing
$E[\bar D^+]$ in a context of a complete continuous-time Ito process model for the financial market.
We solve in this paper the same problem in a more difficult context of incomplete or constrained
markets. Recently, Pham (1998) has solved the problem of minimizing $E[(D^+)^p]$ in
discrete-time models, and under cone constraints. Moreover, independently
from Pham and the present paper, Föllmer and Leukert (1998b) analyze the problem of
minimizing $E[l(D^+)]$ for a general loss function $l$, in general incomplete semimartingale
models, emphasizing the Neyman-Pearson lemma approach, as opposed to the duality approach.
The former approach was used by the same authors in Föllmer and Leukert (1998a)
to solve the problem of maximizing the probability of a perfect hedge, $P[D \le 0]$.
work on problems like these is presented in Dembo (1997), in a one-period setting. A very
general study of the the duality approach and its use in the utility meximization context
can be found in Kramkov and Schachermayer (1997).
Suppose now that, in addition to the genuine risk that the liability C represents, the
agent also faces some uncertainty regarding the model for the financial market itself. Following
Cvitanić and Karatzas (1998), we capture such uncertainty by allowing a family
$\mathcal{P}$ of possible "real world probability measures", instead of just one measure. Thus, the
"max-min" quantity
$$\sup_{P\in\mathcal{P}}\ \inf_{\pi}\, E^P\!\left[e^{-\int_0^T r(u)du}\,(C - X^{x,\pi}(T))^+\right] \qquad (1.2)$$
represents the maximal risk that the agent can encounter, when faced with the "worst
possible scenario" $P \in \mathcal{P}$. In the special case of incomplete markets and under the condition
that all equivalent martingale measures are included in the set of possible real-world measures
$\mathcal{P}$, we show that
$$\sup_{P\in\mathcal{P}}\ \inf_{\pi}\, E^P\!\left[e^{-\int_0^T r(u)du}\,(C - X^{x,\pi}(T))^+\right]
\;=\; \inf_{\pi}\ \sup_{P\in\mathcal{P}}\, E^P\!\left[e^{-\int_0^T r(u)du}\,(C - X^{x,\pi}(T))^+\right]. \qquad (1.3)$$
In other words, the corresponding fictitious "stochastic game" between the market and the
agent has a value. The trading strategy attaining this value is shown to be the one that
corresponds to borrowing just enough money from the bank at time $t = 0$ so as to be able to
have at least the amount C at time $t = T$.
We describe the market model in Section 2, and introduce the optimization problem
in Section 3. As is by now standard in financial mathematics, we define a dual problem,
whose optimal solution determines the optimal terminal wealth $X^{x,\pi}(T)$. It turns out that
this terminal wealth is of the "knock-out" option type - namely, it is either equal to C or
to 0 or to a certain (random) value depending on whether the optimal dual
variable is less than, larger than, or equal to one, respectively. What makes the dual problem
more difficult than in the usual utility optimization problems (as in Cvitani'c and Karatzas
1992) is that the objective function fails to be everywhere differentiable, and the optimal
dual variable (related to the Radon-Nikodym derivative of an "optimal change of measure")
can be zero with positive probability. Nevertheless, we are able to solve the problem using
nonsmooth optimization techniques for infinite dimensional problems, which can be found
in Aubin and Ekeland (1984). We discuss in Section 4 the stochastic game associated with
(1.2) and (1.3).
2 The Market Model
We recall here the standard Itô-process model for a financial market M. It consists of one
bank account and d stocks. The price processes S_0(·) and S_1(·), . . . , S_d(·) of these instruments
are modeled by the equations
d
Here W(·) = (W_1(·), . . . , W_d(·))' is a standard d-dimensional Brownian motion on a complete
probability space (Ω, F, P), endowed with a filtration F = {F(t)}_{0≤t≤T}, the P-augmentation
of F^W(t) := σ(W(s); 0 ≤ s ≤ t), the filtration generated
by the Brownian motion W(·). The coefficients r(·) (interest rate), b(·) = (b_1(·), . . . , b_d(·))'
(vector of stock return rates) and σ(·) = {σ_{ij}(·)}_{1≤i,j≤d} (matrix of stock volatilities) of the
model M are all assumed to be progressively measurable with respect to F. Furthermore,
the matrix σ(·) is assumed to be invertible, and all processes r(·), b(·), σ(·), σ^{-1}(·) are
assumed to be bounded, uniformly in (t, ω) ∈ [0, T] × Ω.
The "risk premium" process θ(·) := σ^{-1}(·)[b(·) − r(·)1] is then
bounded and F-progressively measurable. Therefore,
the associated exponential local martingale Z_0(·)
is a P-martingale, and the measure P_0 defined from Z_0(T)
is a probability measure equivalent to P on F(T). Under this risk-neutral equivalent martingale
measure P_0, the discounted stock prices S_1(·)/S_0(·), . . . , S_d(·)/S_0(·)
become martingales, and the
process W_0(·) := W(·) + ∫_0^· θ(s) ds
becomes a Brownian motion, by the Girsanov theorem.
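For reference, a standard way to write the model and the risk-neutral quantities just described is the following; this is a reconstruction under the usual conventions, and the notation is an assumption consistent with the surrounding text:
\[
\begin{aligned}
 dS_0(t) &= S_0(t)\,r(t)\,dt, \qquad S_0(0)=1,\\
 dS_i(t) &= S_i(t)\Big[b_i(t)\,dt+\sum_{j=1}^d \sigma_{ij}(t)\,dW_j(t)\Big],\quad i=1,\dots,d,\\
 \theta(t) &:= \sigma^{-1}(t)\,[\,b(t)-r(t)\mathbf 1\,],\qquad
 Z_0(t) := \exp\Big\{-\int_0^t \theta'(s)\,dW(s)-\tfrac12\int_0^t\|\theta(s)\|^2\,ds\Big\},\\
 P_0(A) &:= E\big[Z_0(T)\,\mathbf 1_A\big],\ \ A\in\mathcal F(T),\qquad
 W_0(t) := W(t)+\int_0^t\theta(s)\,ds.
\end{aligned}
\]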
Consider now an agent who starts out with initial capital x and can decide, at each time
proportion - i (t) of his (nonnegative) wealth to invest in each of the stocks
d. However, the portfolio process (- 1 has to take values in a given
closed convex set K ae R d of constraints, for a.e. t 2 [0; T ], almost surely. We will also
assume that K contains the origin. For example, if the agent can hold neither short
nor long positions in the last d − m stocks (for some 1 ≤ m < d), we get a typical example of
an incomplete market, in the sense that not all square-integrable payoffs can be exactly
replicated. (One of the best known examples of incomplete markets, the case of stochastic
volatility, is included in this framework). Another typical example is the case of an agent
who has limits on how much he can borrow from the bank, or how much he can go short or
long in a particular stock.
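Typical choices of the constraint set K in this setting (standard examples, not taken verbatim from the text) include
\[
K=\{\pi\in\mathbb R^d:\ \pi_{m+1}=\dots=\pi_d=0\}\quad(\text{incompleteness: only the first } m<d \text{ stocks are available}),
\]
\[
K=[0,\infty)^d\quad(\text{no short-selling}),\qquad
K=\Big\{\pi\in\mathbb R^d:\ \textstyle\sum_{i=1}^d\pi_i\le c\Big\}\quad(\text{a limit on borrowing}).
\]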
Once the proportions π_1(t), . . . , π_d(t) are chosen, the agent invests the remaining amount X(t)[1 − Σ_{i=1}^d π_i(t)]
in the bank account at time t, where we have denoted by X(·) ≡ X^{x,π}(·) his
wealth process. Moreover, for reasons of mathematical convenience, we allow the agent to
spend money outside of the market, and -(\Delta) - 0 denotes the corresponding cumulative
consumption process. The resulting wealth process satisfies the equation
d
d
d
Denoting by
\tilde X(t) := e^{-∫_0^t r(u) du} X(t)     (2.6)
the discounted version of a process X(·), we get the equivalent equation
It follows that -
X(\Delta) is a nonnegative local P 0 \Gammasupermartingale, hence also a P 0 \Gammasupermartingale,
by Fatou's lemma. Therefore, if τ_0 is defined to be the first time the wealth hits zero, we have
X(t) = 0 for all t ∈ [τ_0, T], so that the portfolio values π(t) are irrelevant after that happens. Accordingly,
we can and do set π(t) ≡ 0 on [τ_0, T].
More formally, we have
Definition 2.1 (i) A portfolio process π(·) = (π_1(·), . . . , π_d(·))' is F-progressively measurable
and satisfies ∫_0^T |π(t)|^2 dt < ∞
as well as π(t) ∈ K for a.e. t ∈ [0, T],
almost surely. A consumption process κ(·) is a nonnegative, nondecreasing, progressively
measurable process with RCLL paths, with κ(0) = 0 and κ(T) < ∞ almost surely.
(ii) For given portfolio and consumption processes π(·), κ(·), the process X(·) ≡ X^{x,π}(·)
defined by (2.7) is called the wealth process corresponding to the strategy (π, κ) and
initial capital x.
(iii) A portfolio-consumption process pair (π(·), κ(·)) is called admissible for the initial
capital x, and we write (π, κ) ∈ A(x), if
X^{x,π}(t) ≥ 0 for all t ∈ [0, T]     (2.9)
holds almost surely.
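For reference, under the conventions above the wealth equations (2.5)–(2.7) referred to in Definition 2.1 presumably take the standard form (a reconstruction; the discounting convention is assumed):
\[
dX(t)=X(t)\Big[r(t)+\pi'(t)\,(b(t)-r(t)\mathbf 1)\Big]dt+X(t)\,\pi'(t)\,\sigma(t)\,dW(t)-d\kappa(t),\qquad X(0)=x,
\]
\[
\tilde X(t)=x+\int_0^t \tilde X(s)\,\pi'(s)\,\sigma(s)\,dW_0(s)-\int_0^t e^{-\int_0^s r(u)\,du}\,d\kappa(s).
\]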
We refer to the lower bound of (2.9) as a margin requirement. The no-arbitrage price
of a contingent claim C in a complete market is unique, and is obtained by multiplying
("discounting") the claim by the corresponding state-price density and taking expectations. Since the
market here is incomplete, there are more relevant stochastic discount factors than just
this one; they are constructed along the lines of Cvitanić and Karatzas (1993), hereafter [CK93],
and Karatzas and Kou (1996), hereafter [KK96], as follows: introduce the support function
δ(·) of the set −K, as well as its barrier cone
K̃, defined in (2.10) and (2.11) respectively.
For the rest of the paper we assume the following mild conditions.
Assumption 2.1 The closed convex set K ⊂ R^d contains the origin; in other words, the
agent is allowed not to invest in stocks at all. In particular, δ(·) ≥ 0 on K̃. Moreover, the
set K is such that δ(·) is continuous on the barrier cone
K̃ of (2.11).
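The standard definitions from [CK93] to which (2.10)–(2.11) refer are, in the usual notation,
\[
\delta(x)\;\equiv\;\delta(x\,|\,K):=\sup_{\pi\in K}\big(-\pi' x\big),\quad x\in\mathbb R^d,
\qquad
\tilde K:=\{x\in\mathbb R^d:\ \delta(x\,|\,K)<\infty\}.
\]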
Denote by D the set of all bounded progressively measurable processes ν(·) taking values in K̃
a.e. on Ω × [0, T]. In analogy with (2.2)-(2.5), introduce, for each ν ∈ D, the exponential
martingale Z_ν(·), the measure P_ν, the process W_ν(·) (a P_ν-Brownian motion), and the
corresponding stochastic discount factor H_ν(·).
From this and (2.7) we get, by Itô's rule, that H_ν(·)\tilde X(·)
is a P-local supermartingale (note that δ(ν(·)) ≥ 0, since 0 ∈
K), and from (2.9) thus also a P-supermartingale, by Fatou's lemma.
Consequently,
3 The minimization problem and its dual
Suppose now that, at time the agent has to deliver a payoff given by a contingent
claim C, a random variable in L
Introduce a (possibly infinite) process
ess sup
almost surely, the discounted version of the process
C(\Delta)
We have denoted
the discounted value of the F(T )\Gamma measurable random variable C. We impose the following
assumption, throughout the rest of the paper (see Remark 3.3 for a discussion on the
relevance of this assumption).
Assumption 3.1 We assume
The following theorem is taken from the literature on constrained financial markets (see, for
example, [CK93], [KK96], or Cvitanić (1997)).
Theorem 3.1 (Cvitanić and Karatzas 1993). Let C ≥ 0 be a given contingent claim. Under
Assumption 3.1, the process C(\Delta) of (3.3) is finite, and it is equal to the minimal admissible
wealth process hedging the claim C. More precisely, there exists a pair (- C
such that
and, if for some x - 0 and some pair (-) 2 A(x) we have
then
Consequently, if x - C(0) there exists then an admissible pair (-) 2 A(x) such that
Achieving a "hedge without risk" is not possible for x < C(0). Motivated
by the results of Artzner et al. (1996) (and similarly to the complete-market setting of Cvitanić
and Karatzas 1998) we choose the following risk function to be minimized:
In other words, we are minimizing the expected discounted net loss, over all admissible
trading strategies.
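In symbols (up to the exact notation of (3.8)), the quantity being minimized is the expected discounted shortfall
\[
V(x):=\inf_{(\pi,\kappa)\in\mathcal A(x)} E\Big[\big(\tilde C-\tilde X^{x,\pi}(T)\big)^{+}\Big].
\]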
If x ≥ C(0) then, by Theorem 3.1 above, we can find a wealth
process that hedges C. Moreover, the margin requirement (2.9) implies that x ≥ 0, so we
assume from now on that 0 < x < C(0).
Note that we can (and do) assume X^{x,π}(T) ≤ C in our optimization problem
(3.8), since the agent can always consume down to the value of C in case he has more than
C at time T. In particular, on the set {C = 0} we can (and do) assume X^{x,π}(T) = 0. This
means that the set {C = 0} is not relevant for the problem (3.8), which motivates
us to define a new probability measure P^C concentrated on the set {C > 0}
(see also Remark 3.3 (ii)). Denote by E^C the associated expectation operator.
The problem (3.8) has then an equivalent formulation
We approach the problem (3.11) by recalling familiar tools of convex duality: starting
with the convex loss function its Legendre-Fenchel transform
~
(where z The minimum in (3.12) is attained by any number I(z; b) of the
Consequently, denoting
we conclude from (3.12) that for any initial capital x 2 (0; C(0)) and any (-) 2 A(x),
Thus, multiplying by E[ -
C], taking expectations and in conjunction with (2.20), we obtain
This is the type of duality relationship that has proved to be very useful in the constrained
portfolio optimization studied in Cvitanić and Karatzas (1993). The difference here is that
we have to extend it to the random variables in the set
It is clear that H is a convex set. It is also closed in L
in L 1
exists a (relabeled) subsequence fH n g n2N converging to H
C]E C [HY x;- x, for all
By Theorem 3.1 we have Consequently,
we have Y C(0);- C ;-
where we extend a random variable H to the probability
on 0g. Similarly, since 0 2 K, taking -
in the definition (3.17) of H, we
see that
Moreover, since E[ -
D, and by (2.20), we get
Remark 3.1 The idea of introducing the set H is similar to and inspired by the approach
of Kramkov and Schachermayer (1998), who work with the set of all nonnegative processes
G(\Delta) such that G(\Delta) -
X(\Delta) is a P \Gammasupermartingale for all admissible wealth processes X(\Delta).
Next, arguing as above (when deducing (3.16)), we obtain
~
where we have denoted
~
It is easily seen that \Gamma ~
R is a convex, lower-semicontinuous
and proper functional, in the terminology of convex analysis; see, for example, Aubin and
Ekeland (1984), henceforth [AE84].
Remark 3.2 It is straightforward to see that the inequality of (3.21) holds as equality for
some (-
z - 0, -
only if we have
and
for some F(T )\Gammameasurable random variable -
B that satisfies 0 -
a.s. We also
set
If (3.23) and (3.24) are satisfied, then (-
-) is optimal for the problem (3.11), under the
"change of variables" (3.14), since the lower bound of (3.21) is attained. Moreover, -
is optimal for the auxiliary dual problem
~
~
If we let
the conditions (3.23) and (3.24) become
and
H=1g
for some F(T )\Gammameasurable random variable -
B that satisfies 0 -
is the terminal wealth of the strategy (π, κ) which is optimal for the problem (3.8). In light of the preceding remark, our approach will be the following: we will try to find
a solution -
H to the auxiliary dual problem (3.25), a number -
z ? 0, a random variable -
as above, and a pair (-) 2 A(x) such that (3.23) and (3.24) (or, equivalently, (3.27) and
are satisfied.
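Summarizing the preceding remark in the notation ẑ, Ĥ, B̂ used above, the optimal terminal wealth described in the Introduction has the "knock-out" form (the display below is a reconstruction, including the normalization 0 ≤ B̂ ≤ 1):
\[
\hat X(T)\;=\;C\,\mathbf 1_{\{\hat z\hat H<1\}}\;+\;\hat B\,C\,\mathbf 1_{\{\hat z\hat H=1\}},
\qquad 0\le\hat B\le 1\ \text{a.s.},
\]
so that the claim is fully hedged where ẑĤ < 1, abandoned where ẑĤ > 1, and partially hedged on the set {ẑĤ = 1}.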
Theorem 3.2 For any given z ? 0, there exists an optimal solution -
for the
auxiliary dual problem (3.25).
Proof: Let H n 2 H be a sequence that attains the supremum in (3.25), so that
~
Note that, by (3.18), H is a bounded set in L
so that by Komlós' theorem
(see Schwartz 1986, for example) there exists a random variable -
a (relabeled) subsequence fH i g i2N such that
Fatou's lemma then implies -
by the Dominated Convergence
Theorem and concavity of ~
J(\Delta; z) we get
~
~
J(n
~
Thus, -
Lemma 3.1 The function ~
V (z) is continuous on [0; 1).
Proof: Let H 2 H and assume first z 1 ; z 2 ? 0. We have
~
Taking the supremum over H 2 H we get ~
do the same while interchanging the roles of z 1 and z 2 , we have shown continuity on (0; 1).
To prove continuity at z note that, by duality and (3.19), we have
~
for all z 1 ? 0; y ? 0. Choosing first y large enough and then z 1 small enough, we can make
the two terms on the right-hand side arbitrarily close to zero, uniformly in H 2 H.Proposition 3.1 For every
that attains the
supremum sup z-0 [ ~
Proof: Denote
Note that first show that
lim sup
so that the supremum of ff(z) over [0; 1) cannot be attained at z = 1. Suppose, on the
contrary, that there exists a sequence z n !1 such that lim n ff(z n
the optimal dual variable of Theorem 3.2 corresponding to z = z n . We have then
z n
by Dominated Convergence Theorem, a contradiction.
Consequently, being continuous by Lemma 3.1, function ff(z) either attains its supremum
at some -
z ? 0, or else ff(z) - Suppose that the latter is true. We
have then
~
z
z
for all z ? 0 and H 2 H. In particular, we can use the Dominated Convergence Theorem
while letting z ! 0 to get
for all H 2 HD . Taking the supremum over H 2 HD we obtain x - C(0), a contradiction
again.Denote -
z the optimal dual variable for problem (3.25), corresponding to
z of
Proposition 3.1. We want to show that there exists an F(T )\Gammameasurable random variable
such that the optimal wealth for the primal problem is given by CI(-z -
B),
where I(z; b) is given in (3.13). In order to do that, we recall some notions and results from
convex analysis, as presented, for example, in [AE84].
First, introduce the space
with the norm
and its subset
It is easily seen that G is convex, by the convexity of H. It is also closed in L. Indeed, if
we are given subsequences z n - 0 and H n 2 H such that (z n H in L, then
also have, from (3.18),
so that zH n ! Z in L
and we are done. If
z ? 0, we get H n ! Z=z in L
closed in
and we are done again. The closedness of G has been confirmed.
We now define a functional ~
~
It is easy to check that ~
U is convex, lower-semicontinuous and proper on L. Moreover, since
we have
~
from Proposition 3.1, and in the notation of Theorem 3.2, it follows that the pair -
G :=
optimal for the dual problem
~
Let L := L
R be the dual space to L and let N(-z -
z) be the normal cone
to the set G at the point (-z -
z), given by
by Proposition 4.1.4 in [AE84]. Let @ ~
z) denote the subdifferential of ~
U at (-z -
z),
which, by Proposition 4.3.3 in [AE84], is given by
@ ~
Then, by Corollary 4.6.3 in [AE84], since (-z -
z) is optimal for the problem (3.35), we obtain
Proposition 3.2 The pair (-z -
G is a solution to
In other words, there exists a pair ( -
which belongs to the normal cone N(-z -
and such that \Gamma( -
belongs to the subdifferential @ ~
z).
From (3.36) and (3.37), this is equivalent to
and
It is clear from (3.40) (by letting z ! \Sigma1 while keeping Z fixed) that necessarily
On the other hand, if we let - z = z in (3.39), we get
Moreover, letting
H in (3.39), and recalling -
we obtain
H]:
Similarly, we get the reverse inequality by letting -
H in (3.39) (recall
that -
z ? 0 by Proposition 3.1), to obtain finally
This last equality will correspond to (3.23) with -
if we can show the following
result and recall (3.14).
Proposition 3.3 There exists an admissible pair (-
and such that (3.27) is satisfied.
(Here we set -
Proof: This follows immediately from (3.41) and (3.42), which can be written as
Y H]
(with 0g). Indeed, Theorem 3.1 tells us that the right-hand side is no
smaller than the minimal amount of initial capital needed to hedge C -
there exists a
that does the hedge.In order to "close the loop", it only remains to show (3.24).
Proposition 3.4 Let \Gamma (Y; y) 2 @ ~
Y is of the form
for some F(T )\Gammameasurable random variable B that satisfies a.s.
Proof: We have already seen that y = \Gammax. Define a random variable A by
From (3.40) with -
be such that
Then,
by (3.45). This implies
A - 0 on f-z -
for otherwise we could make Z arbitrarily small (respectively, large) on f-z -
(respectively, on f-z -
to get a contradiction in (3.46).
Suppose now that P C [A ! 0; - z -
There exists then
because of (3.47). For a given ε > 0, let
on f-z -
in (3.46). This gives
The left-hand side is greater than
H!1g
contradiction to (3.48). Thus, we have shown
Going back to (3.46), this implies
for all Z 2 L
If we set now
we get from (3.50) and (3.47)
Using (3.49) and (3.51) in (3.45), we obtain
Suppose now that P C [A ?
z -
There exists then
(for a given " ? 0), (3.52) implies
The left-hand side is greater than ffi +P C [-z -
so that from (3.53) we conclude
contradiction. Therefore,
Together with (3.44), (3.47), (3.49) and (3.51), this completes the proof. We now state the main result of the paper.
Theorem 3.3 For any initial wealth x with there exists an optimal pair
for the problem (3.8) of minimizing the expected loss of hedging the claim C.
It can be taken as that strategy for which the terminal wealth X x;- (T ) is given by (3.28),
i.e.,
H=1g
Here (-z; -
H) is an optimal solution for the dual problem (3.35), and -
B can be taken as the
random variable B in Proposition 3.4, with (Y; y) replaced by some ( -
z)g, which exists by Proposition 3.2.
Proof: It follows from Remark 3.2. Indeed, it was observed in that remark that a pair
is optimal for the problem (3.8) if it satisfies (3.27) and (3.55) for some
F(T )\Gammameasurable random variable -
z - 0, -
The existence of
such a pair (-
established in Proposition 3.3 in conjunction with Proposition
3.4, with -
B, -
z and -
H as in the statement of the theorem. 2
The following simple example is mathematically interesting from several points of view.
It shows that the optimal dual variable Ĥ
can be equal to zero with positive probabil-
ity, unlike the case of classical utility maximization under constraints (as in Cvitanić and
Karatzas 1992). Moreover, ẑĤ
can be equal to one with positive probability, so that the
use of nonsmooth optimization techniques and subdifferentials for the dual problem is really
necessary. It also shows why it can be mathematically convenient to allow nonzero consump-
tion. Finally, it confirms that condition (3.5) is not always necessary for the dual approach
to work.
Example 3.1 Suppose r(·) ≡ 0 for simplicity, and let C ≥ 0 be any contingent claim such
that P[C ≥ x] > 0. We consider the trivial primal problem for which K = {0}, so that
there is only one possible admissible portfolio strategy, π(·) ≡ 0 (in other words, the agent
can invest only in the riskless asset). We do not assume condition (3.5), which, for these
constraints, is equivalent to C being bounded. It is clear that the value V(x) of the primal
problem is duality implies
for all z - 0, H 2 H (see (3.21)). Here we can take H to be the set of all nonnegative
random variables such that E[H] - 1. Let - z := P [C - x] ? 0 and -
z -
. It is then
easily checked that -
and that the pair ( -
z) attains equality in (3.56), so that the
optimal for the dual problem (3.35). One possible choice for the optimal
terminal wealth is
According to (3.55), this corresponds to -
while -(T
Remark 3.3 (i) Assumption 3.1 is satisfied, for example, if C is bounded. We need it
in order to get existence for the dual problem (3.35), due to our use of Komlós' theorem.
Example 3.1 shows that this assumption is not always necessary: in this example the dual
problem has a solution and there is no gap between the primal and the dual problem, even
when (3.5) is not satisfied.
(ii) If we, in fact, assumed that C is bounded, the switch to the equivalent formulation
(3.11) from (3.8) would not be necessary. (The reason for this is, the dual spaces of
are then the same, up to the equivalence class determined
by the set
Remark 3.4 Numerical approximations. Suppose that we have a Markovian model in
which r(t, S(t)), b(t, S(t)) and σ(t, S(t)) are deterministic and "nice" functions of time and
current stock prices, and so is the claim C. One could then imagine carrying out
the following three-step approximation procedure to solve first the dual and then the primal
problem. First, in order to have differentiability rather than having to deal with subdifferen-
tials, one could replace the loss function with a smooth function R_p,
for some p > 1, as in Pham (1998). Second, in order to be able to use standard dynamic
programming and Hamilton-Jacobi-Bellman partial differential equations (HJB PDEs), one
could replace the auxiliary dual problem (3.25) by the approximating problem in which D is
replaced by D_{n_0},
for some large n_0, where D_n consists of those elements of D which are bounded by n almost
surely, and where J̃_p corresponds to the dual problem associated with the loss function
R_p(y). After the approximate optimal dual variable Ĥ^{n_0} and the corresponding
ẑ^{n_0} (the one maximizing J̃_p) are found, one has to hedge, under the portfolio constraints
given by the set K, the resulting claim X^{n_0}(T),
where I_p(·) corresponds to the function I(·; b) of (3.13) in the case of the loss function being
R_p(·). If we are in a Black-Scholes model with r, b and σ constant, the (constrained) strategy
for hedging X^{n_0}(T) can be found quite easily, using the results of Broadie, Cvitanić and Soner
(1998). Otherwise, one again has to use approximating HJB PDEs to calculate the values
of the approximate discounted wealth process,
defined analogously to (3.2), with C
replaced by X^{n_0}(T) and D replaced by D_m, for some large m (see Section 8 of [CK93]).
We plan to investigate the properties of the numerical approximations described above elsewhere.
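As a purely illustrative companion to the procedure sketched above — not an implementation of it — the following Python fragment estimates the expected discounted shortfall E[(\tilde C − \tilde X(T))^+] of a fixed constant-proportion strategy in a Black–Scholes market with a call-type claim; every parameter value and identifier below is hypothetical.

import numpy as np

rng = np.random.default_rng(0)
r, b, sigma, T = 0.03, 0.08, 0.2, 1.0
n_steps, n_paths = 250, 100_000
S0, strike, x0, pi = 100.0, 100.0, 5.0, 0.5   # initial stock price, strike, initial capital, constant proportion
dt = T / n_steps

S = np.full(n_paths, S0)
X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    # wealth dynamics dX = X*(r dt + pi*((b - r) dt + sigma dW)), with no consumption
    X *= 1.0 + r * dt + pi * ((b - r) * dt + sigma * dW)
    X = np.maximum(X, 0.0)                    # wealth absorbed at zero (margin requirement)
    S *= np.exp((b - 0.5 * sigma**2) * dt + sigma * dW)

C = np.maximum(S - strike, 0.0)               # call-type claim C = (S(T) - strike)^+
shortfall = np.exp(-r * T) * np.maximum(C - X, 0.0)
print("estimated expected discounted shortfall:", shortfall.mean())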
4 Dynamic measures of risk
Suppose now that we are not quite sure whether our subjective probability measure P is
equal to the real world measure. We would like to measure the risk of hedging the claim
C under constraints given by set K, and under uncertainty about the real world measure.
According to Artzner et al. (1996) and Cvitanić and Karatzas (1998), it makes sense to
consider the following quantities as the lower and upper bounds for the measure of such a
risk, where we denote by P a set of possible real world measures:
the maximal risk that can be incurred, over all possible real world measures, dominated by
its "min-max" counterpart
sup
the upper-value of a fictitious stochastic game between an agent (who tries to choose (-) 2
A(x) so as to minimize his risk) and "the market" (whose "goal" is to choose the real world
measure that is least favorable for the agent). Here, E Q is expectation under measure Q. A
question is whether the "upper-value" (4.2) and the "lower-value" (4.1) of this game coincide
and, if they do, to compute this common value. We shall answer this question only in a very
specific setting as follows. Let P be the "reference" probability measure, as in the previous
sections. We first change the margin requirement (2.9) to a more flexible requirement
where k is a constant such that 1 > k ≥ 0. Moreover,
we look at the special case of the constraints given by
K = {π ∈ R^d : π_{m+1} = · · · = π_d = 0} for some 1 ≤ m < d.
In other words, we only consider the case of a market which is incomplete
due to the insufficient number of assets available for investment. In this case
K̃ = {x ∈ R^d : x_1 = · · · = x_m = 0}, with δ(·) ≡ 0 on K̃,
and
D = {bounded progressively measurable processes ν(·) with values in K̃}.
We define the set P of possible real world probability measures as follows. Let E be a set of
progressively measurable and bounded processes -(\Delta) and such that
We set
in the notation of (2.14) (note that the reference measure P is not necessarily in P). In other
words, our set of all possible real world probability measures includes all the "equivalent
martingale measures" for our market, corresponding to bounded "kernels" -(\Delta). This way,
under a possible real world probability measure P - 2 P, the model M of (2.1) becomes
d
in the notation of (2.15). The resulting modified model M - is similar to that of (2.1); now
the role of the driving Brownian motion (under P - ), but the stock return rates
are different for different "model measures" P - .
The following theorem shows that, if the uncertainty about the real world probability
measure is large enough (in the sense that all equivalent martingale measures corresponding
to bounded kernels are possible candidates for the real world measure), then the optimal
thing to do in order to minimize the expected risk of hedging a claim C in the market, is
the following: borrow exactly as much money from the bank as is needed to hedge C.
Theorem 4.1 Under the above assumptions we have
In other words, the stochastic game defined by (4.1) and (4.2) has a value that is equal to
the expected loss of the strategy which borrows C(0) \Gamma x from the bank, and then invests
according to the least expensive strategy for hedging the claim C.
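In symbols (with A_k(x) denoting the admissible class under the relaxed margin requirement (4.3); the notation is assumed), the assertion of Theorem 4.1 amounts to the min–max equality
\[
\inf_{(\pi,\kappa)\in\mathcal A_k(x)}\ \sup_{Q\in\mathcal P}\ E^{Q}\big[(\tilde C-\tilde X^{x,\pi}(T))^{+}\big]
\;=\;
\sup_{Q\in\mathcal P}\ \inf_{(\pi,\kappa)\in\mathcal A_k(x)}\ E^{Q}\big[(\tilde C-\tilde X^{x,\pi}(T))^{+}\big],
\]
with the common value attained by the strategy that hedges C starting from the capital C(0), financed by borrowing C(0) − x from the bank.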
Proof: Let (- ; - ) be the strategy from the statement of the theorem, namely the one for
which we have
in the notation of (3.2). Such a strategy exists by Theorem 3.1. It is clear that (4.3) is then
satisfied, so that (-
for all Q 2 P, it also follows that
On the other hand, we have here
K, so that H -
Itô's rule gives, in analogy to (2.7) and in the notation of (2.15),
R tr(u)du d- (t)
for all - 2 D, since - 0 (\Delta)- (\Delta) j 0. Therefore, -
X (\Delta) is a P - \Gammalocal supermartingale bounded
from below, thus also a P - \Gammasupermartingale, by Fatou's lemma. Consequently,
is the expectation under P - measure. Since P - 2 P for all - 2 D, (4.11) and
Jensen's inequality imply
is a consequence of (4.9) and (4.12). 2
Acknowledgements. I wish to thank Ioannis Karatzas for suggesting the use of Komlós'
theorem and for providing me with the reference Schwartz (1986), as well as for thorough
readings of and helpful comments on the paper.
--R
A characterization of measures of risk.
Applied Nonlinear Analysis.
Optimal portfolio replication.
On the Pricing of Contingent Claims under Constraints.
Annals of Applied Probability
The asymptotic elasticity of utility functions and optimal investment in incomplete markets.
Dynamic L p
New proofs of a theorem of Komlós.
--TR | hedging;incomplete markets;portfolio constraints;expected loss;dynamic measures of risk |
345891 | On the Minimizing Property of a Second Order Dissipative System in Hilbert Spaces. | We study the asymptotic behavior at infinity of solutions of a second order evolution equation with linear damping and convex potential. The differential system is defined in a real Hilbert space. It is proved that if the potential is bounded from below, then the solution trajectories are minimizing for it and converge weakly towards a minimizer of $\Phi$ if one exists; this convergence is strong when $\Phi$ is even or when the optimal set has a nonempty interior. We introduce a second order proximal-like iterative algorithm for the minimization of a convex function. It is defined by an implicit discretization of the continuous evolution problem and is valid for any closed proper convex function. We find conditions on some parameters of the algorithm in order to have a convergence result similar to the continuous case. | Introduction
. Consider the following differential system, defined in a real
Hilbert space H,
where the damping coefficient is a positive constant and the potential Φ : H → IR
is differentiable. It is customary to call this equation a non-linear
oscillator with damping. Here, the damping or friction has a linear dependence
on the velocity. This is a particular case of the so-called dissipative systems. In fact,
given a solution u of (1.1), it
is direct to check that the associated energy is non-increasing along the trajectory.
Thus, the energy of the system is dissipated as t increases. Although (1.1)
appears in various contexts with different physical interpretations, the motivation for
this work comes from the dynamical approach to optimization problems.
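Writing the equation as u''(t) + λu'(t) + ∇Φ(u(t)) = 0 with a damping coefficient λ > 0 (the symbol λ is an assumption; only its positivity matters here), the energy alluded to above and its dissipation identity read
\[
E(t):=\tfrac12\,|u'(t)|^{2}+\Phi(u(t)),
\qquad
\frac{d}{dt}E(t)=\langle u''(t)+\nabla\Phi(u(t)),\,u'(t)\rangle=-\lambda\,|u'(t)|^{2}\le 0 .
\]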
Roughly speaking, any iterative algorithm generating a sequence {x_k}_{k∈IN} may
be considered as a discrete dynamical system. If it is possible to find a continuous
version of the discrete procedure, one expects that the properties of the corresponding
continuous dynamical system are close to those of the discrete one. This occurs,
for instance, for the now classical Proximal method for convex minimization: given
x_0, solve the iterative scheme (Prox),
where f is a closed proper convex function and ∂f denotes
the usual subdifferential in convex analysis. Prox is an implicit discretization of the
Steepest Descent method, which consists in solving the differential inclusion (SD).
Under suitable conditions, both the trajectory defined by (SD) and
the sequence {x_k} generated by (Prox) converge towards a particular minimizer of
Research partially supported by FONDECYT 1961131 and FONDECYT 1990884.
y Departamento de Ingeniera Matematica, Universidad de Chile, Casilla 170/3 Correo 3, Santiago,
Chile. Email: falvarez@dim.uchile.cl. Fax: (56)(2)6883821.
f (see [5, 6, 7] for (SD) and [18] for (P rox), see also [12] for a survey on these
and new results). The dynamical approach to iterative methods in optimization has
many advantages. It provides a deep insight on the expected behavior of the method,
and sometimes the techniques used in the continuous case can be adapted to obtain
results for the discrete algorithm. On the other hand, a continuous dynamical system
satisfying nice properties may suggest new iterative methods.
This viewpoint has motivated an increasing attention in recent years, see for
instance [1, 2, 3, 4, 8, 13, 14]. In [3], Attouch et al. deal with non convex functions
that have a priori many local minima. The idea is to exploit the dynamics dened
by (1:1) to explore critical points of (i.e. solutions of coercive
(bounded level sets) and of class C 1 with gradient locally Lipschitz, then it is possible
to prove that for any u solution of (1:1) we have 1. The
convergence of the trajectory 1g is a more delicate problem. When is
coercive, an obvious su-cient condition for the convergence of the trajectory is that
the critical points, also known as equilibrium points, are isolated. Certainly, this is not
necessary. In dimension one additional conditions, the solution
always converges towards an equilibrium (see for instance [10]). The proof relies on
topological arguments that are not generalizable to higher dimensions. Indeed, this is
no longer true even in dimension two: it is possible to construct a coercive C 1 function
dened on IR 2 whose gradient is locally Lipschitz and for which at least one solution
of (1:1) does not converge as t ! 1 (see [3]). Thus, a natural question is to nd
general conditions under which the trajectory converge in the degenerate case, that is
when the set of equilibrium points of contains a non-trivial connected component.
A positive result in this direction has been recently given by Haraux and Jendoubi in
[11], where convergence to an equilibrium is established when is analytic. However,
this assumption is very restrictive from the optimization point of view.
Motivated by the previous considerations, in this work we focus our attention on
the asymptotic behavior as t !1 of the solutions of (1:1) when is assumed to be
convex. The paper is organized as follows. In x2 we prove that if is convex and
bounded from below then the trajectory minimizing for . If the
inmum of on H is attained then u(t) converges weakly towards a minimizer of .
The convergence is strong when is even or when the optimal set has a nonempty
interior. In x2.2 we give a localization result for the limit point, analogous to the
corresponding result for the steepest descent method [13]. In x2.4 we generalize the
convergence result to cover the equation
is a bounded self-adjoint linear operator, which we assume to be elliptic: there is
> 0 such that for any x 2 H , h x; xi
We refer to this equation as non-linear
oscillator with anisotropic damping. This equation appears to be useful to
diminish oscillations or even eliminate them, and also to accelerate the convergence
of the trajectory. In x2.3 we give an heuristic motivation of the above mentioned
facts, which is based on an analysis of a quadratic function. Still under the convexity
condition on , x3 deals with the discretization of (1:1). Here, we consider the implicit
scheme
where h > 0. Since is convex, the latter is equivalent to the following variational
problem
SECOND-ORDER DISSIPATIVE SYSTEM AND MINIMIZATION 3
where z
procedure does not require to be dier-
entiable and allows us to introduce the following more general iterative-variational
algorithm k
closed proper convex function and
@ f is the -approximate subdierential in convex analysis. We call (1:2) Proximal
with impulsion method. We nd conditions on the parameters k ; k and k in order
to have a convergence result similar to the continuous case. Finally, in the Appendix
we illustrate with an example the behavior of the trajectories dened by (1:1), and
we also state some of the questions opened by this work. Let us mention that the
rst to consider equation (1:1) for nite dimensional optimization problems was B.
T. Polyack in [16]. He studied a two-step discrete algorithm called \heavy-ball with
friction" method, which may be interpreted as an explicit discretization of (1.1). Both
approaches are complementary: the analysis and the type of results in the implicit
and explicit cases are dierent.
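For comparison with the implicit scheme studied below, here is a minimal sketch of Polyak's explicit "heavy-ball with friction" iteration on a smooth convex quadratic; the step size and momentum values are hypothetical.

import numpy as np

# Heavy-ball iteration: x_{k+1} = x_k - h * grad f(x_k) + beta * (x_k - x_{k-1}),
# an explicit discretization of the damped second-order dynamics.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
h, beta = 0.1, 0.5                      # step size and momentum (hypothetical)
x_prev = x = np.zeros(2)
for _ in range(300):
    grad = A @ x - b                    # gradient of f(x) = 0.5 x'Ax - b'x
    x_prev, x = x, x - h * grad + beta * (x - x_prev)
print("heavy-ball iterate:", x, "   minimizer A^{-1}b:", np.linalg.solve(A, b))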
2. Dissipative dierential system. Throughout the paper, H is a real Hilbert
space, h; i denotes the associated inner product and j j stands for the corresponding
norm. We are interested in the behavior at innity of solution of the
following abstract evolution equation
where
are given. Note that if we assume that the
gradient r is locally Lipschitz then the existence and uniqueness of a local solution
for
standard results of dierential equations theory. In that
case, to prove that u is innite extendible to the right, it su-ces to show that its
derivative u 0 is bounded. Set
, the function E is non-increasing. If we suppose that is
bounded from below then u 0 is bounded.
2.1. Asymptotic convergence. In the sequel, we suppose the existence of a
global solution of
for the inmum value of on H ; thus,
mean that is bounded from below. We denote by Argmin the
set fx g. On the nonlinearity we shall assume
Theorem 2.1. Suppose that (h ) holds. If u 2 C 2 ([0; 1[; H) is a solution of
lim
Furthermore, if Argmin 6= ; then there exists b
such that u(t) * b
weakly in H as t !1.
4 F. ALVAREZ
Proof. We begin by noticing that u 0 is bounded (see the above argument). In
order to prove the minimizing property (2:1), it su-ces to prove that
lim sup
for any x 2 H . Fix x 2 H and dene the auxiliary function '(t) := 1
u is solution of
it follows that
which together with the convexity inequality (u)
We do not have information on the behavior of (u(t)) but we know that E(t) is
non-increasing. Thus, we rewrite (2:2) as
Given t > 0, for all 2 [0; t] we have
After multiplication by e
and integration we obtain
We write this equation with t replaced by , use the fact that E(t) decreases and
integrate once more to obtain
where
h(t) := 3Z tZ e
Since E(t) (u(t)), (2:3) gives2
Dividing this inequality by 1(
letting t !1 we get
lim sup
It su-ces to show that h(t) remains bounded as t !1. By Fubini's theorem
e
SECOND-ORDER DISSIPATIVE SYSTEM AND MINIMIZATION 5
Note that from the equality
and in particular
Then
h(t) 3Z tju 0 ()j 2 d 3Z 1ju 0 ()j 2 d < 1
as was to be proved.
On the other hand, since E() is non-increasing and bounded from below by inf ,
it converges as t ! 1. If lim
E(t) > inf , then lim
because of (2:1).
This contradicts the fact that u 0 2 L 2 . Therefore, lim
as t !1.
The task is now to establish the weak convergence of u(t) when Argmin 6= ;.
For this purpose, we shall apply the Opial lemma [15], whose interest is that it allows
one to prove convergence without knowing the limit point. We state it as follows.
Lemma [Opial]. Let H be a Hilbert space , H be a trajectory
and denote by W the set of its weak limit points
If there exists ; 6= S H such that
then W 6= ;. Moreover, if W S then u(t) converges weakly towards b u 2 S as
In order to apply the above result, we must nd an adequate set S. Suppose that
there exists b
H such that u(t k ) * b u for a suitable sequence t k !1. The function
is weak lower-semicontinuous, because is convex and continuous, hence
(bu) lim inf
and therefore b u 2 Argmin . According with the Opial lemma, we are reduced to
prove that
exists:
For this, x z 2 Argmin and dene '(t) := 1
provides a su-cient condition on [' 0 , the positive part of the derivative, in order to
ensure convergence for '.
Lemma 2.2. Let 2 C 1 ([0; 1[; IR) be bounded from below. If [
then (t) converges as t !1.
6 F. ALVAREZ
Proof. Set
w(t) := (t)
Since w(t) is bounded from below and w 0 (t) 0, then w(t) converges as t !1, and
consequently (t) converges as t !1.
On account of this result, it su-ces to prove that [' 0 belongs to L 1 (0; 1). Of
course, to obtain information on ' 0 we shall use the fact that u(t) is solution of
Due to the optimality of z, it follows from (2:2) that
Lemma 2.3. If the dierential inequality
with
Proof. We can certainly assume that g 0, for if not, we replace g by jgj.
Multiplying (2:6) by e
t and integrating we get
Z te
Thus
Z te
and Fubini's theorem gives
Z 1Z te
Recalling that ju 0 the proof of the theorem is completed by
applying lemma 2.3 to equation (2:5).
We say that r is strongly monotone if there exists > 0 such that for any
A weaker condition is the strong monotonicity over bounded sets, that is to say, for
all K > 0 there exists K > 0 such that for any x; y 2 B[0; K] we have
If the latter property holds, then we have strong convergence for u(t) when the inmum
of is attained. The argument is standard: let b
u be the (unique) minimum
point for and set K := maxfsup t0 ju(t)j; jbujg, then from (2:7) we deduce
Since we have proven that lim
b u strongly in H . Note that we do not need to apply the Opial lemma.
SECOND-ORDER DISSIPATIVE SYSTEM AND MINIMIZATION 7
The latter is the case of a non-degenerate minimum point. When admits
multiple minima, it is not possible to obtain strong convergence without additional
assumptions on or the space H . For instance, we have the following
Theorem 2.4. Under the hypotheses of Theorem 2.1, if either
(i) Argmin 6= ; and is even
or
then
u strongly in H as t !1;
Proof. The proof is adapted from the corresponding results for the steepest descent
method; see [7] for the analogous of (i) and [6] for (ii).
quently
decreasing and is even, we deduce that
for all t 2 [0; t 0 ]: By the convexity of we conclude
hence
Thus
The standard integration procedure yields
Therefore, for all t 2 [0;
where
8 F. ALVAREZ
On the other hand, in the proof of Theorem 2.1 we have shown that h(t) is convergent
as t ! 1. We also proved that for all z 2 Argmin the lim
ju(t) zj exists. Since
is convex and even, we have 0 2 Argmin whenever the inmum is realized. In
that case, ju(t)j is convergent as t ! 1 and we infer from (2.9) that
is a Cauchy net. Hence u(t) converges strongly as t ! 1 and, by Theorem 2.1, the
limit belongs to Argmin .
There exists > 0 such that for every z 2 H with
In particular, if jz z 0 j then
Consequently,
for every x 2 H and z with jz z 0 j . Hence,
for every x 2 H . Applying this inequality to x = u(t) we deduce that
We thus obtain
Integrating this inequality yields
But we have already proved that the lim
'(t) exists and lim
As a conclusion, u
We deduce that the
lim
u(t) exists, which nishes the proof because u
2.2. Localization of the limit point. In the proof of Theorem 2.1 we have
used the dierential inequality (2:2), which in some sense measures the evolution of
the system. A simpler but analogous inequality appears in the asymptotic analysis
for the steepest descent inclusion (SD). This was used by B. Lemaire in [13] to locate
the limit point of the trajectories of (SD). Following this approach, in this section we
give a localization result of the limit point of the solutions of (E
). For simplicity of
notation, set S := Argmin and we denote by proj the projection operator
onto the closed convex set S.
Proposition 2.5. Let u be solution of
be such that
Consequently
where is the distance between u 0 and the set S.
SECOND-ORDER DISSIPATIVE SYSTEM AND MINIMIZATION 9
(ii) if S is an a-ne subspace of H then
If moreover is a quadratic form then
strongly in H as t !1:
Proof. Let x 2 S and set '(t) := 1
. The inequality (2:2) and the
optimality of x give
Hence
Z tZ e
Due to the weak lower semi-continuity of the norm and Fubini's theorem, we can let
t !1 to obtain2 jbu xj 2 1
On the other hand, from the energy equation2 ju
it follows Z 1ju 0 ()j 2 d 1
Replacing the last estimate in (2.11), it easy to show that (2.10) holds.
For (i), it su-ces to take
For (ii), let e := b u proj S
which belongs to S. An easy computation shows that
which together with (2.10) yields
Letting r !1 we get the result.
Finally, suppose that
positive and self-adjoint
bounded linear operator. Then the null space of A.
Let z 2 S; for all t 0 we have that
strongly ( is even) as t !1, we can deduce that
for all z 2 S, which completes the proof.
2.3. Linear system: heuristic comparison. Before proceeding further it is
interesting for the optimization viewpoint to compare the behavior of the trajectories
dened by
with the steepest descent equation
and with the continuous Newton's method
For simplicity, in this section we restrict ourselves to the associated linearized systems
in a nite dimensional space. We shall consider and assume that 2
IR). Related to (SD) we have the linearized system around some x 0 2 IR N ,
which is dened by
We assume that the Hessian matrix r 2 positive denite. An explicit computation
shows that In fact, the solutions
of (LSD) are of the form
solves the homogeneous equation
0: Take a matrix P such that
where i > 0, and set P We obtain the system 0
solutions
are i Generally speaking, if there is a i << 1, we will have a relative
slow convergence towards the solution; on the other hand, when dealing with large 0
the numerical integration by an approximate method will present stability problems.
Thus we see that the numerical performance of (SD) is strongly determined by the
local geometry of the function .
We turn now to the linearized version of (N ), given by
The solutions are of the form
which are much better than the
previous ones. The major properties are : 1) the straight-line geometry of the tra-
jectories; 2) the rate of convergence is independent of the quadratic function to be
minimized. Certainly, this is just a local approximation of the original function and
the global behavior of the trajectory may be complicated. Nevertheless, this outstanding
normalization property of Newton's system makes it eective in practice, due to
the fact that the associated trajectories are easy to follow by a discretization method.
Of course, an important disadvantage of (N) is the computation of the inverse of the
Hessian matrix, which may be involved for a numerical algorithm.
Finally, we consider
z
SECOND-ORDER DISSIPATIVE SYSTEM AND MINIMIZATION 11
For this equation we have solves the homogeneous problem
It is a simple matter to show that
)t with i :]0; 1[!]0; 1[
continuous and C i a constant independent of
. In fact, i (
is non-increasing on ]2 p
then the corresponding
does not present oscillations. Thus the choice
greatest rate that can be obtained. But we can get any value in the interval ]0; p
for instance, when i > 1 we obtain
1. The
last choice has the advantage that the associated trajectory is not oscillatory, which
is interesting by numerical reasons. Note that we should take a dierent parameter
according to the corresponding eigenvalue i . See the Appendix for an illustration of
this simple analysis.
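A tiny numerical illustration of the preceding discussion (all parameter values are hypothetical): on the ill-conditioned quadratic Φ(x) = ½⟨Ax, x⟩ with A = diag(0.05, 4), the damped second-order flow with damping tuned to the slowest eigenvalue reaches the minimizer much faster than the steepest descent flow.

import numpy as np

# Compare x' = -A x (steepest descent flow) with x'' + g x' + A x = 0 (damped
# second-order flow) on Phi(x) = 0.5 <Ax, x>, A = diag(0.05, 4.0).
lams = np.array([0.05, 4.0])              # eigenvalues of A
g = 2.0 * np.sqrt(lams.min())             # damping tuned to the slow eigenvalue
T, dt = 40.0, 1e-3
x_sd = np.array([1.0, 1.0])               # steepest descent state
x, v = np.array([1.0, 1.0]), np.zeros(2)  # damped system state and velocity
for _ in range(int(T / dt)):
    x_sd = x_sd - dt * lams * x_sd                   # explicit Euler for x' = -A x
    x, v = x + dt * v, v - dt * (g * v + lams * x)   # explicit Euler for the damped system
print("|x(T)|, steepest descent flow :", np.linalg.norm(x_sd))
print("|x(T)|, damped 2nd-order flow :", np.linalg.norm(x))

With these numbers the slow mode decays like e^{-0.05 t} for the gradient flow but at the critically damped rate e^{-sqrt(0.05)·t} for the second-order flow, which is the acceleration effect discussed above.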
Therefore, the presence of the damping parameter
gives us a control on the
behavior of the solutions of (E
and, in particular, on some qualitative properties
of the associated trajectories. For a general we must take on account: a) a careful
selection of the damping parameter
should depend on the local geometry of the
function , leading to a nonautonomous damping; b) this selection could give a different
value of
for some particular directions, leading to an anisotropic damping.
No attempt has been made here to develop a theory in order to guide these choices.
2.4. Linear and anisotropic damping. In the preceding section we have seen
that it may be of interest to consider an anisotropic damping. With the aim of
contributing to this issue, in this section we establish the asymptotic convergence for
the solutions of the following system
H is a bounded self-adjoint linear operator, which we assume to be
elliptic
> 0 such that for any x 2 H; h x; xi
Theorem 2.6. Suppose (h ) and (h ) hold. If u 2 C 2 ([0; 1[; H) is a solution
of
lim
Furthermore, if Argmin 6= ; then there exists b
such that u(t) * b
weakly in H as t !1.
Proof. We only need to adapt the proof of Theorem 2:1. First, note that the
properties of existence, uniqueness and innite extendibility to the right of the solution
follow by similar arguments. Likewise, the energy E(t) := 1
and we can deduce that u
Next, dene the operator
x, with
such a way that
As in the proof of Theorem 2:1, equation (2:13) gives
Z te
(2.
with
the only dierence being the term
An integration by parts yields
Z te
Z te
Setting f(t) :=
Z te
Thus, we can rewrite (2:14) as
We leave it to the reader to verify that the minimizing property (2:12) can now be
established as in Theorem 2:1. Analogously for the proof of u
When Argmin 6= ;, we x z 2 Argmin and consider the corresponding functions
' and as above (with x replaced by z). Using the optimality of z, it follows
Z te
with f associated with as above. Integrating this inequality we conclude that '(t)
stays bounded as t ! 1, but we cannot deduce its convergence. Then, we rewrite
(2:15) in the form
Z te
Z te
and we conclude that [' 0 (t)
We note that
Z te
where
and
Z te
In virtue of Lemma 2.2, if we show that (t) is bounded from below then (t)
converges as t ! 1. Since 0 there exists a constant M > 0
independent of t such that j 0 (t)j M ju 0 (t)j
'(t) for any t > 0. We conclude that
SECOND-ORDER DISSIPATIVE SYSTEM AND MINIMIZATION 13
1. From this fact it follows easily that (t) ! 0 as t ! 1.
Therefore, converges as t !1, hence (t) converges as well.
The proof is completed by applying the Opial lemma to the trajectory
1g, where the Hilbert space H is endowed with the inner product hh; ii
dened by hhx; yii := 1
h x; yi and its associated norm.
Remark 1. In Theorems 2.1 - 2.6 we do not require any coerciveness assumption
on . When Argmin 6= ;, the dissipativeness in the dynamics su-ces for the
convergence of the solutions. If the inmum value is not realized the trajectory may
be unbounded as in the one dimensional equation
whose solutions are so that u(t) ! 1 and u
In any case, our results assert that the dynamical system dened by (E
(or more
generally by (E )) is dissipative in the sense that every trajectory evolves towards a
minimum of the energy. Certainly, there is a strong connection with the concept of
point dissipativeness or ultimately boundedness in the theory of dynamical systems,
where the Lyapunov function associated with the semigroup is usually supposed to
be coercive (c.f. [9, gradient systems]).
Remark 2. To ensure local existence and uniqueness of a classical solution
for the dierential equation, it su-ces to require a local Lipschitz property on r.
Actually, in some situations this hypothesis is not necessary and the existence may
be established by other arguments. For instance, that is the case of the Hille-Yosida
theorem for evolution equations governed by monotone operators and the theory of
linear and non-linear semigroups for partial dierential equations. Note that such
a Lipschitz condition on the gradient is not used in the asymptotic analysis of the
trajectories. Therefore, the previous asymptotic results remain valid for other classes
of innite-dimensional dissipative systems, provided the existence of a global solution.
It is not our purpose to develop this point here for the continuous system because it
exceeds the scope of this paper. However, in the next section we consider an implicit
discretization of the continuous system. As we will see, the existence of the discrete
trajectory is ensured by variational arguments. This allow us to apply the discrete
scheme to nonsmooth convex functions and to adapt the asymptotic analysis to this
case.
3. Discrete approximation method. Once we have established the existence
of a solution of an initial value problem, we are interested in its numerical values. We
must accept that most dierential equations cannot be solved explicitly; we are thus
lead to work with approximate methods. An important class of these methods is based
on the approximation of the exact solution over a discrete set ft n g: associated with
each point t n we compute a value un , which approximates u(t n ) the exact solution at
Generally speaking, these procedures have the disadvantage that a large number
of calculations has to be done in order to keep the discretization error e n := un u(t n )
su-ciently small. In addition to this, the estimates for the errors strongly depends on
the length of the discretization range for the t variable. It turns out that these methods
are not well adapted to the approximation of the exact solution on an unbounded
domain.
Nevertheless, there is an important point to note here. If our objective is the
asymptotic behavior of the solutions as t goes to 1, then the accurate approximation
of the whole trajectory becomes immaterial. We present a discrete method whose
feature is that no attempt is made to approximate the exact solution over a set of
14 F. ALVAREZ
points, but the discrete values are sought only to preserve the asymptotic behavior of
the solutions.
3.1. Implicit iterative scheme. Dealing with the discretization of a rst order
dierential equation y it is classical to consider the implicit iterative scheme
y
where h > 0 is a parameter called step size. In the case of equation (E
or more
precisely its rst order equivalent system, (3:1) corresponds to recursively solve
Since is convex, (3:2) is equivalent to the following variational problem
where z
This motivates the introduction of the more general
iterative procedure
where z are positive. Note that when
the standard Prox iteration. If > 0, the starting point for the next iteration is
computed as a development in terms of the velocity of the already generated sequence.
Therefore, this iterative scheme denes a second order dynamics, while Prox is actually
of rst order nature.
We have been working under the assumption that is dierentiable. However,
for the above iterative variational method this regularity is no longer necessary. Thus,
in the sequel f denotes a closed proper convex function (see [17]),
which eventually realizes the value 1, and we consider
where z In terms of the stationary condition, (3:3) is equivalent
to
where @f is the standard convex subdierential [17].
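The following Python sketch illustrates the iteration just described for the nonsmooth choice f(x) = ||x − c||_1, whose proximal map is a shifted soft-thresholding; the symbols lam (step) and a (inertial parameter) and all numerical values are assumptions, not the paper's.

import numpy as np

def prox_l1_shifted(z, lam, c):
    """Proximal map of f(x) = ||x - c||_1: componentwise soft-thresholding around c."""
    d = z - c
    return c + np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

# "Proximal with impulsion" sketch:
#   z_k     = u_k + a * (u_k - u_{k-1})                      (inertial extrapolation)
#   u_{k+1} = argmin_x { f(x) + 1/(2*lam) * |x - z_k|^2 }    (proximal step)
c = np.array([2.0, -1.0, 0.5])          # unique minimizer of f
lam, a = 0.5, 0.3                       # step and inertial parameters (hypothetical)
u_prev = u = np.zeros(3)
for k in range(200):
    z = u + a * (u - u_prev)
    u_prev, u = u, prox_l1_shifted(z, lam, c)
print("iterate:", u, "   minimizer:", c)

Setting a = 0 recovers the classical proximal iteration; a > 0 adds the "impulsion" term built from the velocity of the already generated sequence.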
3.2. Convergence for the variational algorithm. By numerical reasons, it
is natural to consider the following approximate iterative scheme k
where k in non-negative, k is positive and @ f is the -subdierential. Note that a
sequence fu k g H satisfying (3:4) always exists. Indeed, given u
SECOND-ORDER DISSIPATIVE SYSTEM AND MINIMIZATION 15
take u k+1 as the unique solution of the strongly convex problem
as above.
Theorem 3.1. Assume that f is closed proper convex and bounded from below.
Let fu k g H be a sequence generated by (3:4), where
is bounded from below by a positive constant.
(ii) the sequence f is non-increasing and
Then
lim
and in particular lim
When Argmin f 6= ;, assume in addition that
(iii) there exists
2]0; 1[ such that 0 k
, and f k g is bounded from above
if there is at least one k > 0.
Then, there exists b u 2 Argmin f such that u k * b
u weakly as k !1.
Proof. The proof consists in adapting the analysis done for the dierential equation
We begin by dening the discrete energy by
and we study the successive dierence E
By denition of @ k
f , (3:4) yields
As we can write
we have
and consequently
Noting that
and because 0 k 1, we deduce that
and
As 0 k 1 and k is bounded from below by a positive constant, we have
lim
Writing
we conclude that (3:5) holds.
Suppose now that Argmin f 6= ;. We apply the Opial lemma to prove the weak
convergence of fu k g. On account of (3:5), it is su-cient to show that for any z 2
Argmin f , the sequence of positive numbers fju k zjg is convergent. Fix z 2 Argmin f ;
since u k+1 satises (3:4), we have
and by the optimality of z
. It is direct to check that for any k 2 IN
that
and therefore
Using (iii) and (3:6) it follows
that
the above inequality
implies
Thus
which yieldsX
SECOND-ORDER DISSIPATIVE SYSTEM AND MINIMIZATION 17
is bounded from
below. As fw k g is non-increasing we have that it converges. Hence, f' k g converges,
which completes the proof of the theorem.
For simplicity, we have considered in this section the isotropic damping system.
However, a similar analysis can be done for the anisotropic damping associated with
an elliptic self-adjoint linear operator . The variational problem associated
with the implicit discretization is
where z
For a function f closed proper and convex, the latter motivates the
scheme
H is a linear positive denite operator and S linear
and positive semi-denite. If we assume both R and I S are elliptic, it is possible
to obtain a convergence result like the previous one. It su-ces to adapt the main
arguments. Since the basic ideas are contained in the proof of Theorems 2.6 and 3.1,
we do not go further in this matter.
4. Some open problems. In the case of multiple optimal solutions, our convergence
results does not provide additional information on the point attained in the
limit. A possible approach to overcome this disadvantage may be to couple the dissipative
system with approximation techniques as regularization, interior-barrier or
globally dened penalizations and viscosity methods. In the continuous case, this
alternative has been considered with success for the steepest descent equation in [2]
and for Newton's method in [1], giving a characterization for the limit point under
suitable assumptions on the approximate scheme. On account of these results, one
may conjecture that this can be done for the equations considered in the present work.
On the other hand, we have seen that the behavior of the trajectories depends
on a relation between the damping and the local geometry of the function we wish to
minimize. This remark leads us to the obvious problem of the choice of the damping
parameter, in order to have a better control on the trajectory. This is also a problem
in the discrete algorithm. Usually we have an incomplete knowledge of the objective
function, which makes the question more di-cult. We think that a rst step in this
direction may be the study of more general damped equations, with non-linear and/or
non-autonomous damping.
Acknowledgments
. I wish to express my gratitude to the Laboratoire d'Analyse
Convexe de l'Universite Montpellier II for the hospitality and support, and specially
to Professor Hedy Attouch. I gratefully acknowledge nancial support through a
French Foreign Scholarship grant from the French Ministry of Education and a Chilean
National Scholarship grant from the CONICYT of Chile. I wish to thank the helpful
comments of a referee concerning theorems 2.2 and 3.1.
--R
A dynamical approach to convex minimization coupling approximation with the steepest descent method
A dynamical method for the global exploration of stationary points of a real-valued mapping: the heavy ball method
The nonlinear geometry of linear programming (parts I and II)
Monotonicity methods in Hilbert spaces and some applications to nonlinear partial di
Asymptotic convergence of nonlinear contraction semi-groups in Hilbert spaces
Asymptotic convergence of the steepest descent method for the exponential penalty in linear programming
Convergence of solutions of second-order gradient-like systems with analytic nonlinearities
About the convergence of the proximal method
An asymptotical variational principle associated with the steepest descent method for a convex function
The projective SUMT method for convex programming
Weak convergence of the sequence of successive approximations for nonexpansive mappings
Some methods of speeding up the convergence of iterative methods
--TR
--CTR
A. Moudafi , M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, Journal of Computational and Applied Mathematics, v.155 n.2, p.447-454, 15 June | convexity;linear damping;implicit discretization;asymptotic behavior;iterative-variational algorithm;weak convergence;dissipative system |
345912 | Analysis of a local-area wireless network. | To understand better how users take advantage of wireless networks, we examine a twelve-week trace of a building-wide local-area wireless network. We analyze the network for overall user behavior (when and how intensively people use the network and how much they move around), overall network traffic and load characteristics (observed throughput and symmetry of incoming and outgoing traffic), and traffic characteristics from a user point of view (observed mix of applications and number of hosts connected to by users). Amongst other results, we find that users are divided into distinct location-based sub-communities, each with its own movement, activity, and usage characteristics. Most users exploit the network for web-surfing, session-oriented activities and chat-oriented activities. The high number of chat-oriented activities shows that many users take advantage of the mobile network for synchronous communication with others. In addition to these user-specific results, we find that peak throughput is usually caused by a single user and application. Also, while incoming traffic dominates outgoing traffic overall, the opposite tends to be true during periods of peak throughput, implying that significant asymmetry in network capacity could be undesirable for our users. While these results are only valid for this local-area wireless network and user community, we believe that similar environments may exhibit similar behavior and trends. We hope that our observations will contribute to a growing understanding of mobile user behavior. | INTRODUCTION
More companies and schools are installing wireless networks
to support a growing population of mobile laptop and PDA users.
Part of the motivation for these installations is to reduce the costs
of running cable. Another important motivation is to meet the
demands of users who wish to stay connected to the network,
communicating with others and accessing on-line information no
matter where they are.
In this paper, we analyze a 12-week trace of a local-area
wireless network installed throughout the Gates Computer
Science Building of Stanford University. Our goal is to answer
questions such as how much users take advantage of mobility,
how often we observe peak throughput rates, what causes the
peaks, and what application mix is used. This study is similar in
nature to a previous study [14], however the scale of this network
is much smaller, and its characteristics in terms of delay and
bandwidth are much more favorable. We thus find that not all
questions asked previously make sense in this context; for
instance, we do not analyze user mobility for frequently-used
paths through the network. In contrast to the previous study,
however, we explore information about network data traffic and
can ask questions about application mixes, symmetry of outgoing
and incoming traffic, and traffic throughput.
Amongst other results, we find that users fall into distinct
location-based sub-communities, each with its own behavior
regarding movement and periods of activity. We find that almost
all users run some version of Windows at least some of the time
and exploit the network for web-surfing activities. Besides other
house-keeping activities (such as dns, icmp, and setting the time),
many people also use their laptops for session-oriented activities
(such as ssh and telnet) and chat-oriented activities (such as talk,
icq, irc, and zephyr). The high number of chat-oriented activities
shows that some users take advantage of the mobile network for
synchronous communication with others. In addition to these
user-specific results, we find that peak throughput is caused 80%
of the time by a single user and application. Also, while incoming
traffic dominates outgoing traffic overall (34 billion bytes
compared to 12 billion bytes), the opposite tends to be true during
periods of peak throughput, implying that significant asymmetry
in network capacity could be undesirable for our users.
We hope that the results we present here will help
researchers and developers determine how users take advantage of
a local-area wireless network, helping to focus efforts on topics
that will achieve the most improvement in user experience. While
these results are only necessarily valid for this particular local-area
wireless network and user community, we believe that similar
environments may exhibit similar behavior and trends.
In this paper, we first present background information about
the data we collected and then present the results of our analysis.
We divide the analysis into three sections: overall user behavior,
overall network traffic characteristics, and user traffic
characteristics. We also comment on the network data
Table 1: Brief summary of the wireless network and community in the Gates
Computer Science Building.
  Total number of access points                12
  Number of floors in building                 6
  Approximate area covered by an access point  75 ft x 150 ft
  Number of wireless users                     74
Figure 1: The public subnet and its connectivity to the rest of the
departmental and university networks and the Internet. An AP is an access
point for wireless connectivity.
visualization tools we used, describe related work, and list some
possible directions for future work.
2. BACKGROUND
In this section we describe the network analyzed and our
tracing methodology. In the Gates Computer Science Building at
Stanford University, administrators have made a "public" subnet
available for any user affiliated with the university [1]. Users
desiring network access via this subnet must authenticate
themselves to use their dynamically assigned IP address [5] to
access the rest of the departmental and university networks and
the Internet.
This subnet, as shown in Figure 1 and described in Table 1,
is accessible both from a wireless network and from Ethernet
ports in public places in the building, such as conference rooms,
lounges, the library, and labs. The wireless network is a
WaveLAN network with WavePoint II access points acting as
bridges between the wireless and wired networks [15]. The access
points each have two slots for wireless network interfaces; both
slots are filled, one with older 2 Mbps cards to support the few
users who have not updated their hardware yet, and the other with newer
cards.
To help explain the results we present in the next sections,
we briefly describe our building and its user community. The
building is L-shaped (the longer edge is called the a-wing, and the
shorter the b-wing). It has four main floors with offices and labs, a
basement with classrooms and labs, and a fifth floor with a lounge
and a few offices. Each of the main floors has two access points,
one for each wing. Additionally, the first floor has an access point
for a large conference room; the library, which spans both the
second and third floors, also has an access point. The basement
has two access points, one near the classrooms and one for the
Interactive Room, a special research project in the department [7].
The smaller fifth floor only has one access point.
The wireless user community consists of 74 users who can be
roughly divided into four groups:
. first year PhD students, who were each given a laptop
with a WaveLAN card upon arrival (which corresponds to
the beginning of the trace). Their offices are primarily in the
wing.
. 22 graphics students and staff, the majority of whom
received laptops with WaveLAN cards a week into the
tracing period. Their offices are primarily in the 3b wing.
. Three robots, used by the robotics lab for research. The
robots do not have to authenticate themselves to reach the
outside network. While the robots are somewhat mobile, they
stay in the 1a wing. Although these WaveLAN cards are
intended to be used by the robots, students in the robotics lab
also use the network cards for session connections and web-
surfing.
. 14 other users (students, staff, and faculty) scattered
throughout the building.
In addition to these 74 users, there were also four users who
authenticated themselves but only connected to wired ports on the
public subnet rather than the wireless network. We do not
consider these users in the rest of this analysis of the wireless
network.
We obtained permission to collect these traces from the
Department Chair and informed all network users that this tracing
was taking place. We additionally informed users we would
record packet header information only (not the contents) and that
we would anonymize the data. Knowledge of the tracing may have
perturbed user behavior, but we have no way of quantifying the
effect.
Because all of the wireless users are on a single subnet
(which promotes roaming without the need for Mobile IP or other
such support), we gathered traces on the router shown in Figure 1
that connects the public subnet to the rest of the departmental
wired network. The router is a 90 MHz Pentium running RedHat
Linux with two 10 Mbps network interfaces. One interface
connects to the public subnet, and the other connects to the
departmental network.
To gather all of the information we wanted, we collected
three separate types of traces during a 12-week period
encompassing the 1999 Fall quarter (from Monday, September 20
through Sunday, December 12). The first trace we gathered is a
tcpdump trace of the link-level and network-level headers of all
packets that went through the router [9]. We use this information
in conjunction with the other two traces.
The second trace is an SNMP trace [4]. Approximately every
two minutes, the router queries, via Ethernet, all twelve access
points for the MAC addresses of the hosts currently using that
access point as a bridge to the wired network. Once we know
which access point a MAC address uses for network access, we
know the approximate location (floor and wing) of the device with
that MAC address. We pair these MAC addresses with the link-
Figure 2: The average number of active users of the mobile network each hour
of the day. Each hour has two bars, the left one for weekends and the right
one for weekdays. The darkness of the bar indicates whether the users are
stationary or mobile (active at two or more access points) during that hour.
For example, the highlighted bars show that at 2pm, on average 16.2 users use
the network on weekdays (2 of which, on average, visit at least 2 locations
over the course of that hour), and on average, 6.6 users use the network on
weekends (0.5 of which, on average, visit at least 2 locations over the
course of that hour).
Figure 3: The total number of users of the mobile network per day. The 0th
day is Monday, the 1st day is Tuesday, etc. The darkness of the bar indicates
whether the users are stationary or mobile during that day.
level addresses saved in the packet headers to determine the
approximate locations of the hosts in the tcpdump trace.
The overhead from the SNMP tracing is low: 530 packets or
50 KBytes is the average overhead from querying all twelve
access points every two minutes. The overhead for querying an
individual access point is 3.2 KBytes if no MAC addresses are
using that access point; otherwise, the base overhead is 14.5
KBytes for one user at an access point, plus 1 KByte for every
additional user.
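As a concrete illustration of this arithmetic, the short sketch below (ours,
not part of the original study) estimates the per-poll overhead for a given
distribution of users across access points, using only the per-AP costs
quoted above.

def snmp_poll_overhead_kbytes(users_per_ap):
    """Estimate the SNMP overhead, in KBytes, of one round of queries to all
    access points: 3.2 KB for an idle access point, and 14.5 KB for the first
    associated user plus 1 KB for each additional user."""
    total = 0.0
    for n in users_per_ap:
        total += 3.2 if n == 0 else 14.5 + (n - 1) * 1.0
    return total

# Twelve idle access points cost about 38.4 KB per poll; the observed 50 KB
# average is consistent with only a handful of users being associated.
print(snmp_poll_overhead_kbytes([0] * 12))   # -> about 38.4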
The last trace is the authentication log, which keeps track of
which users request authentication to use the network. Each
request has both the user's login name as well as the MAC address
from which the user makes the request. We pair these MAC
addresses with the link-level addresses saved in the tcpdump trace
to determine which user sends out each packet.
We use the common timestamp and MAC address
information to combine these three traces into a single trace with a
total of 78,739,933 packets attributable to the 74 wireless users.
An additional 37,893,656 packets are attributable to the SNMP
queries and 1,551,167 packets are attributable to the four wired
users. The number of packets attributable to the SNMP queries
might seem high, but each access point is queried every two
minutes even if no laptops are actively generating traffic.
Thus, for every packet sent over the course of this twelve-
week period we record:
. a timestamp,
. the user's identity,
. the user's location (current access point),
. the application, if the port is recognized, otherwise, the
source and destination ports,
. the remote host the user connects to,
. and the size of the packet.
Note that because we do not record any signal strength
information, and since our access points generally cover a whole
wing of a floor, we cannot necessarily detect movement within a
wing but only movement between access points.
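The way the three traces are joined can be pictured with a short sketch. The
Python fragment below is illustrative only and is not the tool used for this
study; the record layouts and field names are assumptions based on the
description above. Each packet is paired with the most recent SNMP location
sample for its MAC address and with the user name from the authentication
log.

import bisect

# Assumed in-memory forms of the three traces:
#   packets:    list of (timestamp, mac, src_port, dst_port, remote_host, size)
#   ap_samples: mac -> list of (timestamp, access_point), sorted by timestamp
#   auth_log:   mac -> authenticated user name

def merge_traces(packets, ap_samples, auth_log):
    """Attach a user identity and an approximate location to every packet."""
    merged = []
    for ts, mac, sport, dport, rhost, size in packets:
        user = auth_log.get(mac, "unknown")
        samples = ap_samples.get(mac, [])
        times = [t for t, _ap in samples]
        # The latest location sample at or before this packet; samples are
        # roughly two minutes apart, so the location is only approximate.
        i = bisect.bisect_right(times, ts) - 1
        location = samples[i][1] if i >= 0 else "unknown"
        merged.append((ts, user, location, sport, dport, rhost, size))
    return merged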
3. OVERALL USER BEHAVIOR
In this section we consider the network-related behavior of
users, focusing on their activity and mobility. Specifically, we ask
the following questions:
1. When and how often do people use the network?
2. How many users are active at a time?
3. How much do users move?
The answers to these questions help researchers understand
whether and how users actually take advantage of a mobile
environment. Also, by understanding user behavior, network
planners can better plan and extend network infrastructure.
In general we find that most users do not move much within
the building, but a few users are highly mobile, moving up to
seven times within an hour. We also find that users fall into
location-based sub-communities, each with its own movement and
activity characteristics. For example, the sub-community in the 2b
wing tends to move around a fair amount and use the network
sporadically, whereas the 4b wing sub-community steadily uses
the network but does not move around very much.
3.1 Active Users
We first look at average user activity by time of day. We
consider a user to be active during a day in the trace if he sends or
receives a packet sometime during that day. We see from Figure 2
that on weekdays more people use the network in the afternoon
than at any other time (on average there are 12 to 16 users in the
mid-afternoon, with a maximum of 34 users between 2 and 4 in
the afternoon). We also see from the steady number of users
throughout the night and weekend that four to five users, on
Figure 4: Number of active days for mobile users over the course of the
entire trace.
Figure 5: Number of users visiting some number of access points over the
course of the entire trace.
Figure 6: Maximum number of users each access point handles within a
five-minute period. iroom = Interactive Room.
Figure 7: Maximum number of handoffs each access point handles within a
5-minute or 15-minute period. iroom = the Interactive Room in the basement.
average, leave their laptops turned on in their offices rather than
take them home.
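The notions of "active" and "mobile" used in Figure 2 translate into a small
counting sketch. The code below is ours and purely illustrative (the record
layout is assumed); it also omits the final step of averaging each hour over
the number of weekdays or weekend days in the trace.

from collections import defaultdict

def hourly_activity(records):
    """records: (timestamp, user, access_point) tuples, timestamp in seconds.
    A user is active in an hour if any of his packets fall in it, and mobile
    if those packets are seen at two or more access points."""
    aps_seen = defaultdict(set)              # (day, hour, user) -> access points
    for ts, user, ap in records:
        day, hour = int(ts // 86400), int(ts % 86400 // 3600)
        aps_seen[(day, hour, user)].add(ap)

    active = defaultdict(int)                # hour of day -> active user-days
    mobile = defaultdict(int)                # hour of day -> mobile user-days
    for (day, hour, user), aps in aps_seen.items():
        active[hour] += 1
        if len(aps) >= 2:
            mobile[hour] += 1
    return active, mobile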
Figure
3 is a graph analogous to Figure 2, presenting the
number of active users per day in the trace rather than per hour of
the day. We observe a weekly pattern with more active users
during the week than on weekends. We also note some trends
across the course of the trace: the network supports the most users
at the beginning of the trace (up to 43 on the first Friday of the
trace), when many users first received their laptops, with a lull in
the middle of the quarter corresponding to midterms and
comprehensive exams, followed by an upswing corresponding to
final project due dates, before a drop during finals week and an
exodus for winter vacations. We also believe that the number of
users falls off as new Ph.D. students receive their permanent
office assignments elsewhere in the building. It seems that many
users still prefer stationary desktop machines over laptops when
both are available to them.
Figure 4 presents overall activity from a user point of view:
the total number of days users are active during the traced period.
While some users rarely connect their laptops to the network (17
users do so on 5 days or fewer), others connect their laptops
frequently (14 users are active at least 37 days during the traced
period).
3.2 User Mobility
We next explore user mobility. Turning back to Figure 2, we
see information about average user mobility by time of day. Most
users are stationary, meaning they do not move from one access
point to another. Only a few users (1.3 on average) move between
access points during any given hour. However, some users are
highly mobile with a maximum of seven location changes for a
user within an hour. We can now look at Figure 3 to see how
many users are mobile on a daily basis rather than an hourly basis.
Figure 8: Overview of the throughput trends over the entire trace, both in
bytes (maximum of 5.6 Mbps) and packets (maximum of 1,376 packets per
second), as well as the number of access points (AP's, maximum of 9
simultaneous AP's), applications (maximum of 56 simultaneous applications),
and users (maximum of 17 simultaneous users) responsible for generating the
traffic.
Table 2: Brief description of the activity at each access point throughout
the course of the trace.
  Access Point  Description
  basement      occasional spikes corresponding to meetings
  iroom         big peak in weeks 8, 9 (project deadline)
  104           occasional spikes corresponding to meetings
  1a            heavy usage weeks 1-3, occasional afterwards
  1b            occasional usage corresponding to network testing
  2a            occasional usage, small peak towards end
  2b            closely follows overall pattern in Figure 2
  library       meetings in weeks 1-3, slight peak weeks 6-7
  3a            lower usage, follows overall pattern in Figure 2
  3b            follows overall pattern in Figure 2
  4a            1-2 users regularly
  4b            1-3 users constantly
  5             1-2 users in late afternoons, Monday-Friday
The number of mobile users is high towards the beginning of the
trace, with up to 13 mobile users during a day, and decreases
towards the end of the trace, to only one to two mobile users
during a day. As in Figure 2, however, we see that most users are
stationary on any given day with only a few (3.2 on average)
moving around.
Looking at total mobility across the trace in Figure 5, we see
that while 37 users are stationary throughout the entire period, a
few users exploit the mobile characteristic of the network: 13
users visit at least five distinct access points during the course of
the trace and one user visits all twelve access points.
3.3 User Sub-communities
We now turn to location-based user behavior by associating
user activity and mobility with access points. Figure 6 shows that
the access points in the 2b and 3b wings handle the most users (up
to 12 or 10 users, respectively, within a five-minute period),
which is not surprising given the large number of mobile users
with offices in those two wings. Figure 7 shows how many
handoffs access points have to handle. Contrasting Figure 6 with
Figure
7, we see that the number of users is not necessarily
correlated with movement. The users on the 3b wing rarely move,
while the users on the 2b wing move around more often. The few
users in the 1b wing move even more frequently.
Table
summarizes user activity by access point location.
The basement and the conference room in 104 are primarily used
only when meetings occur, while the 5th floor lounge is used
when people take a break in the late afternoon. The 4th floor users
are steady users who rarely move, the 3rd floor users connect to
the network more sporadically, and the 2nd floor users are also
sporadic but more mobile. These results reveal that while each
access point covers approximately the same amount of space, the
load on each access point depends on the behavior of the
community it serves.
3.4 Access Point Handoffs
One side effect of user mobility is the need for access points
to perform handoffs. We thus take a closer look at how many
handoffs access points handle. A handoff is defined as a user
appearing at one access point and then moving to a different
access point within a given period of time. Looking at Figure 7,
we see that handoffs are not a major burden on access points: an
access point handles at most five handoffs within a five-minute
period, or ten within a 15-minute period. Note that 95% of all user
location changes occur within 15 minutes.
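This definition reduces to a counting problem over location-change events.
The sketch below is ours, not the analysis code used for the study; it
charges each handoff to the access point the user moves to (the text does not
say which end is charged) and uses a sliding five-minute window by default.

from collections import defaultdict

def peak_handoffs(location_changes, window=300):
    """location_changes: chronologically sorted (timestamp, user, from_ap,
    to_ap) events.  Returns, per access point, the largest number of handoffs
    seen within any window of the given length in seconds."""
    times_per_ap = defaultdict(list)
    for ts, _user, _from_ap, to_ap in location_changes:
        times_per_ap[to_ap].append(ts)

    peak = {}
    for ap, times in times_per_ap.items():
        best, start = 0, 0
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            best = max(best, end - start + 1)
        peak[ap] = best
    return peak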
4. OVERALL NETWORK TRAFFIC
CHARACTERISTICS
In this section we consider overall characteristics of the
network, such as throughput, peak throughput, and incoming and
outgoing traffic symmetry. Specifically, we ask the following
questions:
1. What is the throughput through the router? Through the
access points?
2. What is the peak throughput?
3. How often is peak throughput reached?
4. What causes the peaks? Several users or only one or
two? Multiple applications or only a few?
5. How symmetric is the traffic? (How similar is incoming
traffic to outgoing traffic?)
6. How much traffic is attributable to small versus large
packets?
The answers to these questions help determine how wireless
hardware and software should be optimized to handle the amounts
of traffic wireless networks generate. Such optimizations may
Table 3: Maximum throughput attained through the router, the public Ethernet
ports, and each access point.
  Location   Max Packets   Max Bits   % of Peaks > 3 Mbps
  router     1,376 pps     5.6 Mbps   100.00%
  ethernet   1,096 pps     5.1 Mbps   5.80%
  basement   530 pps       3.2 Mbps   0.10%
  iroom      446 pps       3.6 Mbps   3.30%
  1a         521 pps       3.4 Mbps   0.60%
  1b         455 pps       3.6 Mbps   2.00%
  104        429 pps       3.4 Mbps   0.70%
  2a         783 pps       3.1 Mbps   0.01%
  2b         824 pps       4.5 Mbps   8.90%
  library    745 pps       4.5 Mbps   2.40%
  3a         737 pps       3.6 Mbps   6.30%
  3b         883 pps       4.6 Mbps   69.70%
  4a         804 pps       1.7 Mbps   0.00%
  4b         675 pps       3.9 Mbps   0.07%
  5          703 pps       3.4 Mbps   0.10%
include using asymmetric links or optimizing for a few large
packets versus many smaller packets.
While we believe that latency is critical to users, the latency
of the WaveLAN network is equivalent to that of wired Ethernet, and we
thus choose not to analyze our trace for this metric. The latency
users see on our network is attributable almost entirely to the
outside network, especially the Internet [12].
In general, we find that router throughput reaches peaks of
5.6Mbps and that peak throughput is caused 80% of the time by a
single user and application, usually a large file transfer. On
average, the incoming traffic is heavier than outgoing traffic, but
the periods of peak throughput are actually skewed more towards
outgoing bytes. From this result, we conclude that significant
asymmetry in network capacity would not be desirable for our
users. We also find that in our network's application mix, low per-packet
processing overhead to handle many small packets is just
as important as high overall attainable byte throughput.
4.1 Network Throughput
Figure
8 gives an overview of throughput over the traced
period, as well as how many access points, users, and applications
are responsible for generating the traffic. Throughput through the
router is typically around one to three Mbps. Usually, the
throughput as a whole increases as the number of users increases.
The throughput through the router reaches peaks of 5.6 Mbps.
Table
3 shows the maximum throughput attained through the
router and each access point. In no case is the peak throughput
maintained for more than three seconds, indicating that the
network is not overwhelmed, but rather that traffic is heavy
enough to hit the peak rate on occasion.
For the majority of peaks, the maximum throughput is
achieved by a single user and application, rather than distributed
across several users, as we might expect since the access points
with the largest peaks are also the access points with the most
users. Specifically, of the 1,492 peaks of magnitude 3.6 Mbps or
greater, 80% of those peaks have 94% of their traffic generated by
a single user and application, and 97% of their traffic
generated by a single user. The application responsible for 53% of
those peaks is ftp, with web traffic responsible for 15%, and the
remainder caused by applications such as X, session traffic (e.g.,
ssh and telnet), and mail downloads (e.g., eudora, imap, and pop).
From this data, we also observe that evidence of user
subcommunities with different behaviors carries over to traffic
throughput characteristics. While the wings with the most users
(2b and 3b) also have the highest peak throughput, the users on
the 3b wing attain that throughput more often (69% of peaks of
magnitude greater than 3 Mbps are attributable to the 3b wing),
indicating that although these users may not be very mobile, their
traffic causes more load on the network.
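Attributing peaks to users and applications amounts to binning traffic into
short intervals and asking who contributed most to each heavy interval. The
sketch below uses one-second bins and a fixed threshold; the field layout and
the exact peak definition are our assumptions for illustration, not the
analysis code behind the numbers above.

from collections import defaultdict

def attribute_peaks(records, threshold_bps=3.6e6):
    """records: (timestamp, user, application, size_bytes) tuples.  For every
    one-second bin whose throughput exceeds the threshold, report the busiest
    (user, application) pair and the fraction of the bin's bytes it sent."""
    per_second = defaultdict(lambda: defaultdict(int))
    for ts, user, app, size in records:
        per_second[int(ts)][(user, app)] += size

    peaks = []
    for sec, contrib in sorted(per_second.items()):
        total = sum(contrib.values())
        if total * 8 >= threshold_bps:                 # bytes per second -> bits
            (user, app), top = max(contrib.items(), key=lambda kv: kv[1])
            peaks.append((sec, user, app, top / total))
    return peaks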
4.2 Network Symmetry
Another network characteristic we investigate is the
symmetry of incoming and outgoing traffic. We might expect that
because the most common application is web-surfing (see
Section 5) that incoming packets and bytes would overwhelm
outgoing packets and bytes. Instead, we find that while the total
incoming traffic (34 billion bytes and 62 million packets) is larger
than the total outgoing traffic (12 billion bytes and 56 million
packets), the peaks are actually skewed more towards outgoing
traffic. Of the peaks of magnitude greater than 3.6 Mbps, 60% are
dominated by outgoing rather than incoming traffic. From this
data, we conclude that significantly asymmetric capacity in
wireless networks would be undesirable to users in environments
similar to ours.
4.3 Packet versus Byte Throughput
The last overall network characteristic we explore is how
packet throughput differs from byte throughput. Figure 9 presents
a closer look into the distribution of packet sizes in the network,
showing that over 70% of packets are smaller than 200 bytes.
However, this same number of packets represents only about 30%
of all bytes transmitted. We thus conclude that low per-packet
processing overhead is just as important to users in this
environment as high overall attainable throughput. Note that
fragmented packets are not reassembled for this graph. However,
of the 78,739,933 total packets, only 206,895 (0.26%) are
fragments and should therefore not impact the distribution much.
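The cumulative curves in Figure 9 come from a straightforward computation
over packet lengths; a minimal sketch, assuming only a list of packet sizes
in bytes, follows.

def size_cdf(packet_sizes, limits=range(100, 1600, 100)):
    """For each size limit, the fraction of packets no larger than the limit
    and the fraction of all bytes carried by those packets."""
    total_pkts = len(packet_sizes)
    total_bytes = sum(packet_sizes)
    rows = []
    for limit in limits:
        small = [s for s in packet_sizes if s <= limit]
        rows.append((limit, len(small) / total_pkts, sum(small) / total_bytes))
    return rows

# With the distribution reported above, the entry for a 200-byte limit would
# read roughly (200, 0.7, 0.3): 70% of packets but only about 30% of bytes.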
We further look at the packet size distribution across several
commonly used applications, shown in Figure 10, to determine
how these applications can be categorized in terms of packet size.
We see that http and database applications should be optimized to
handle large incoming and small outgoing packets. In contrast,
session, chat, mail, and X applications should be optimized to
handle many small outgoing and incoming packets. While the
optimizations for mail, an application for asynchronous personal
communication, may be independent of latency, the optimizations
for session, chat, and X applications must not only optimize for
the many small packets but also minimize delay to facilitate user
interactivity.
5. USER TRAFFIC CHARACTERISTICS
In this section we consider traffic characteristics from a user
perspective. The specific questions we ask in this section are:
Figure 9: Cumulative histogram showing the percentage of packets that are a
certain number of bytes long and the percentage of bytes transferred by
packets of that length.
Figure 10: Median incoming and outgoing packet sizes for some commonly used
applications. Session applications include ssh and telnet; mail includes pop,
imap, and Eudora; filesys includes nfs and afs; chat includes talk, icq,
zephyr, and irc; house includes housekeeping applications such as ntp.
Table 4: The most common applications by number of users, incoming packets
and bytes, and outgoing packets and bytes (packets and bytes in millions).
The first group of applications contains basic services, the second group
contains client applications, and the last group contains other (the
aggregation of all other recognized applications) and unknown (the
aggregation of the unrecognized packets). Session applications include ssh
and telnet; mail includes pop, imap, and Eudora; filesys includes nfs and
afs; chat includes talk, icq, zephyr, and irc; house includes housekeeping
applications such as ntp.
  App Class   Num. Users   Inc. Pkts   Inc. Bytes   Out. Pkts   Out. Bytes
  dns         74           0.2
  netbios     71           7.3         6200         7.2         1200
  bootp       56           0.007       2.3          0.007       2.2
  house       73           0.046       10.8         0.054       6.8
  web         73           14.4        15700        11          1100
  session
  mail
  db          44           0.005       6.4          0.002       0.2
  chat        38           0.03        14.7         0.03        2.2
  news        21           0.24        266          0.15        9.6
  license
  finger
  filesys
  other
  unknown     68           6           3500         3.7         1000
1. Which applications are most common?
2. How much does application mix vary by user?
3. How many hosts do users connect to?
4. How long are users active?
Answering these questions helps determine which
applications and application domains to optimize for mobile
usage. Knowing the traffic mix and how it varies by time can also
help researchers model user traffic better, which is important
when simulation is used to evaluate mobile protocols. Finally,
knowing which and how many remote hosts users connect to also
helps when modeling network connectivity.
We find that the most popular applications are web-browsing
and session applications such as ssh and telnet. These two classes
of applications are frequently run together, so some optimization
of their interaction might be useful. About half our users
frequently execute chat-oriented applications (such as talk, icq,
irc, and zephyr), showing that some users exploit the mobile
network for synchronous communication with others. We also
find that user application mixes can be classified into several
patterns, such as the terminal pattern, wherein people primarily
use their laptops to keep sessions to external machines open, or
Table 5: Description of the eleven application mixes and the number of users
per application mix. The only applications considered are web, session, X,
mail, ftp, and chat. The starred entries are shown in more detail in
Figure 11.
  home*: upload in the morning, download for lunch or before going home.
    Mostly web and ftp, some session and mail traffic. Usually weekday only,
    occasional weekend traffic.
  web-surfer*: big web-surfer visiting lots of sites, session to one or two
    sites, plus a bit of the other applications.
  rare (10 users): one or two single peaks, always web, sometimes session.
  dabbler* (9 users): fairly evenly distributed among the applications; three
    users active on weekdays only, six users active on both weekdays and
    weekends.
  talkies (8 users): fairly normal hours, weekday and weekend, mostly web and
    session, but significant chat traffic too.
  terminal (7 users): weekday only, mostly session traffic, some web,
    occasional ftp.
  mail-client (6 users): three users active weekdays only, three users active
    weekdays and weekends; leave their laptops overnight as mail clients,
    plus some web surfing and other applications during the day.
  late-night (5 users): lots of chat, web, and ftp late at night. More
    "normal" traffic (session, web) during the day.
  X-term (3 users): lots of X, session, and web traffic. A little traffic
    from the other applications.
  day-user (3 users): web and mail during the day, a little session, ftp, and
    X traffic.
  (eleventh mix): lots of ftp with some session and web traffic, a little bit
    of the other applications.
Figure 11: Three application mixes (home, web-surfer, and dabbler). Each
graph shows the percentage of bytes sent at that time of day per application,
for both weekdays and weekends. Each application is split into two graphs,
one for traffic to "repeat" hosts (r), and one for traffic to "throwaway"
hosts (t). A repeat host is one the user connects to on at least two
different days. The darkness of the bar indicates how many different hosts
the user connects to. The darker the bar, the more hosts.
the web-surfing pattern, wherein most network traffic is web
traffic to many different hosts.
5.1 Application Popularity and Mixes
Table
4 lists the most common classes of applications by
number of users and total number of packets and bytes.
Unsurprisingly, basic service applications such as dns, icmp, and
house-keeping applications such as ntp are used by everyone. The
high amount of netbios traffic indicates that almost all the users
run some version of Windows on their laptops. This reflects
our system administrators' choice to install Windows as the
default system on laptops.
Of the end-user applications, http, session applications, and
file transfer applications are the most popular (with 73, 63 and 62
users, respectively). There are several interesting points of
comparison in this data. First, 62 people use some file transfer
protocol compared to 16 people who use some remote filesystem
such as NFS [13] or AFS [8]. This disparity indicates that many
users find it necessary to transfer files to and from their laptops,
but only a few users are either willing or find it necessary to use a
distributed filesystem, perhaps due to the lack of support in
distributed filesystem servers for dynamically assigned addresses.
Also interesting is the number of people who use their
laptops as a mere terminal compared to the number of people who
run applications directly on their laptops. Specifically, only 20
users run X. In comparison, 47 users run some direct mail client
(pop, imap, eudora, smtp, etc.); 21 connect to some license server
(Matlab, Mentor Graphics, etc.), presumably to run the
application directly on their laptops; 21 connect directly to a news
server; and 38 users run some sort of chat software (talk, icq,
zephyr, irc, etc.). These numbers reveal a tendency to use laptops
as stand-alone machines with connectivity, rather than mere
terminals. However, most users do still use session applications
such as ssh or telnet, showing that users still need to connect to
some other machines. Finally, over half of the users execute chat
software, indicating that some users treat their laptops in part as
personal synchronous communication devices.
In addition to overall application usage, we also look at
eleven characteristic user application mixes, shown in Table 5.
The main characteristics we consider in this categorization are the
percentage of traffic in a given period of time that can be
attributed to each application, at what time each application
dominates the user's traffic, and the number of hosts to which a
user connects. We only use a coarse-grained time characterization:
weekday versus weekend and during the day versus late at night.
We also confine the categorization to six common applications:
web-surfing, session applications, X, mail applications, file
transfer, and chat applications.
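The categorization can be pictured as extracting a small feature vector per
user. The sketch below is ours and makes several simplifying assumptions (UTC
timestamps, 6 a.m. as the boundary for "late at night", and an
already-assigned application class per packet); it is not the procedure
actually used to build Table 5.

from collections import defaultdict
from datetime import datetime, timezone

APP_CLASSES = ("web", "session", "X", "mail", "ftp", "chat")

def mix_features(records):
    """records: (timestamp, user, app_class, remote_host, size_bytes) tuples.
    Returns, per user, the share of bytes for each (application,
    weekday/weekend, day/late-night) cell and the number of distinct hosts."""
    bytes_by = defaultdict(lambda: defaultdict(int))
    hosts_by = defaultdict(set)
    for ts, user, app, host, size in records:
        if app not in APP_CLASSES:
            continue
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        days = "weekday" if dt.weekday() < 5 else "weekend"
        period = "late-night" if dt.hour < 6 else "day"
        bytes_by[user][(app, days, period)] += size
        hosts_by[user].add(host)

    features = {}
    for user, cells in bytes_by.items():
        total = sum(cells.values())
        features[user] = {"shares": {k: b / total for k, b in cells.items()},
                          "hosts": len(hosts_by[user])}
    return features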
Figure 11 provides more detail for three of the application
mixes. The first mix is the home mix, wherein the user is active in
the morning uploading information or work from the laptop, at
lunch downloading information or work, and in the evening
downloading materials before heading home. These users
typically connect to only one or two sites which are frequently
repeated. The next mix is the web-surfer mix, wherein users
contact many different web sites (up to 3,029 distinct web sites for
one user). Many of these sites (up to 1,982 for one user) are
visited more than once by the same user. The last application mix
we focus on is the dabbler mix, in which users run all of the
application types at least once.
We derive several conclusions from this application mix
characterization. First, while at some point every possible
combination of applications is run together, the applications most
commonly run together are web and session applications. Second,
while http and ssh are the most popular applications across all
users, different users do run different mixes of applications, and
they do so at different times of the day. There is no single
application mix that fits all mobile users. Finally, not only do the
mixes vary by application and time, but also by the number of
hosts to which users connect. Some users connect to as few as six
hosts, while others connect to as many as 3,054 distinct hosts.
(The router connects to a total of 15,878 distinct hosts over the
course of the entire trace; 13,178, or 83%, of those hosts are
accessed via the web.)
5.2 Web Proxies
Given these access patterns, we can ask whether a web proxy
for caching web pages might be an effective technique in our
environment. For a rough evaluation, we looked for web sites
visited multiple times, either on different days or by different
users. Of the 13,178 hosts connected to via the web, 3,894 (30%)
are visited multiple times by more than one user, 5,318 (40%) are
visited on more than one day during the trace, and 5,359 (41%)
are visited either by more than one user or on more than one day.
These results indicate that web proxies would be at least a
partially effective technique in an environment such as ours.
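The rough evaluation above reduces to set arithmetic over (day, user, host)
triples for web traffic; a minimal sketch follows, with the input format
assumed.

from collections import defaultdict

def proxy_candidates(web_visits):
    """web_visits: (day_index, user, web_host) tuples, one per web connection.
    Returns the hosts visited by more than one user, the hosts visited on
    more than one day, and their union."""
    users_per_host = defaultdict(set)
    days_per_host = defaultdict(set)
    for day, user, host in web_visits:
        users_per_host[host].add(user)
        days_per_host[host].add(day)
    multi_user = {h for h, u in users_per_host.items() if len(u) > 1}
    multi_day = {h for h, d in days_per_host.items() if len(d) > 1}
    return multi_user, multi_day, multi_user | multi_day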
5.3 Network Sessions and Lease Times
The final question we ask is how long people use the
network at a sitting. Since the wireless network is part of the
"public" subnet, users must authenticate themselves when they
want to access any host outside the subnet. The current policy is
to require users to authenticate themselves every 12 hours [1].
Twelve hours was believed to be a good balance between security
concerns and user convenience. Of the 1,243 leases handed out
over the course of the trace, 23% (272 leases) are renewed within
one second of the previous lease's expiration, 27% (310 leases)
are renewed within 15 minutes of expiration, 30% (339 leases) are
renewed within one hour of expiration, and 33% (379 leases) are
renewed within three hours of the previous lease's expiration. Of
the 69 users who authenticate themselves, 48 users authenticate
themselves again within an hour of a previous lease expiration at
least once during the traced period. Given the high percentage of
users who re-authenticate themselves very quickly, we conclude
that 12 hours is not very convenient for our users and that 24
hours might be a better balance between security and ease of use.
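The renewal statistics can be reproduced from the authentication log alone.
The sketch below is illustrative: it measures the gap from each lease's
expiration to the same user's next lease grant, and the exact counting rule
behind the percentages above may differ slightly (for example, renewals
requested before expiration are not counted here).

def renewal_fractions(leases_by_user, windows=(1, 900, 3600, 10800)):
    """leases_by_user: user -> list of (grant_time, expire_time) pairs in
    chronological order.  For each window (in seconds), return the fraction
    of expirations followed by a new lease within that window."""
    gaps = []
    for leases in leases_by_user.values():
        for (_g0, e0), (g1, _e1) in zip(leases, leases[1:]):
            gaps.append(g1 - e0)          # time from expiration to next grant
    total = len(gaps) or 1
    return {w: sum(1 for g in gaps if 0 <= g <= w) / total for w in windows}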
6. VISUALIZATION TOOLS
During the course of this analysis, we use Rivet [3] to create
interactive visualizations quickly for exploring the data. A screen
shot from one such visualization is shown in Figure 8. Visualizing
this large amount of data (78,739,933 packets) is especially useful
for gaining an overall understanding of the data and for exploring
the dataset, leveraging the human perceptual system to spot
unexpected trends. While traditional analysis tools (such as perl
scripts, gnuplot, and Excel) are useful, they require the user to
formulate questions a priori. By using an interactive visualization
to explore the data, we are able to spot unexpected trends, such as
the division of users into sub-communities and the lease times
being too short.
7. RELATED WORK
Other studies of local-area networks exist, but they tend to
have a less user-oriented focus. For example, researchers at CMU
examined their large WaveLAN installation [6]. This study
focuses on characterizing how the WaveLAN radio itself behaves,
in terms of the error model and signal characteristics given
various physical obstacles, rather than on analyzing user behavior
in the network. Other researchers also studied the campus-wide
WaveLAN installation at CMU [2]. However, this study focuses
on installing and managing a wireless network rather than on user
behavior.
Another related effort is joint work from Berkeley and CMU
[11]. The researchers outline a method for mobile system
measurement and evaluation, based on trace modulation rather
than network simulation. This work differs from our own in
several ways. First, the parameters they concentrate on deal with
latency, bandwidth, and signal strength rather than with when
users are active and which applications they run. Second, their
emphasis is on using these traces to analyze new mobile systems,
rather than on understanding the current system. In this paper, our
goal is to understand how people use an existing mobile system.
We previously studied a metropolitan-area network [14], but
focused more on user movement than on user traffic in that
analysis. Also, that network had very different characteristics,
including number of users, geographical size, network delay and
bandwidth, than the network analyzed in this paper.
Also at Stanford University, our research group performed an
earlier study of a combined wireless and wired network [10].
However, this study was limited in that only eight users
participated and the trace only lasted eight days.
8. FUTURE WORK
The greatest weakness in our work is its possible specificity:
our results only necessarily apply to our network and user
community. While we believe many of our observations would
hold true in other similar environments, we have not verified this.
We would thus like to study other local-area networks, including a
much larger building-wide or even campus-wide WaveLAN
network to explore whether our conclusions are affected by scale.
With a larger network, it might also make sense to look for
geographical patterns of user mobility as people go to classes,
offices, lunch, and so forth. We also wonder whether our results
are specific to an academic environment and would like to
perform a similar study in a corporate or commercial setting. Only
through the collection of several different studies can we detect
important trends that hold for many wireless environments.
9. CONCLUSION
Although these results are specific to this WaveLAN wireless
network and this university user community, we hope our analysis
is a start on understanding how people exploit a mobile network.
We find that the community we analyze can be broken down into
subcommunities, each with its own unique behavior regarding
how much users move, when users are active (daily, weekly, and
over the course of the trace), and how much traffic the users
generate. We also find that although web-surfing and session
applications such as ssh and telnet are the most popular
applications overall, different users do use different sets of
applications at different times and connect to different numbers of
hosts. In addition to this user behavior, we also find that
asymmetric links would likely be unacceptable in this type of
wireless network, and that optimizing packet processing is just as
important as optimizing overall throughput.
The trace data we have collected is publicly available on our
web site:
10.
ACKNOWLEDGMENTS
This research has been supported by a gift from NTT Mobile
Communications Network, Inc. (NTT DoCoMo). Additionally,
Diane Tang is supported by a National Physical Science
Consortium Fellowship.
11.
--R
Experience Building a High Speed
Rivet: A Flexible Environment for Computer Systems Visualization.
Simple Network Management Protocol (SNMP).
Dynamic Host Configuration Protocol (DHCP).
Measurement and Analysis of the Error Characteristics of an In-Building Wireless Network
Integrating Information Appliances into an Interactive Workspace.
Scale and Performance in a Distributed File System.
Available via anonymous ftp to ftp.
Experiences with a Mobile Testbed.
Design and Implementation of the Sun Network Filesystem.
Analysis of a Metropolitan-Area Wireless Network
--TR
Scale and performance in a distributed file system
Measurement and analysis of the error characteristics of an in-building wireless network
Andrew
Trace-based mobile network emulation
End-to-end Internet packet dynamics
Analysis of a metropolitan-area wireless network
Rivet
Integrating Information Appliances into an Interactive Workspace
Experiences with a Mobile Testbed
--CTR
Daniel B. Faria , David R. Cheriton, MobiCom poster: public-key-based secure Internet access, ACM SIGMOBILE Mobile Computing and Communications Review, v.7 n.1, January
Ravi Jain , Dan Lelescu , Mahadevan Balakrishnan, Model T: an empirical model for user registration patterns in a campus wireless LAN, Proceedings of the 11th annual international conference on Mobile computing and networking, August 28-September 02, 2005, Cologne, Germany
Maya Rodrig , Charles Reis , Ratul Mahajan , David Wetherall , John Zahorjan, Measurement-based characterization of 802.11 in a hotspot setting, Proceeding of the 2005 ACM SIGCOMM workshop on Experimental approaches to wireless network design and analysis, August 22-22, 2005, Philadelphia, Pennsylvania, USA
Ravi Jain , Dan Lelescu , Mahadevan Balakrishnan, Model T: a model for user registration patterns based on campus WLAN data, Wireless Networks, v.13 n.6, p.711-735, December 2007
Jihwang Yeo , Moustafa Youssef , Tristan Henderson , Ashok Agrawala, An accurate technique for measuring the wireless side of wireless networks, Papers presented at the 2005 workshop on Wireless traffic measurements and modeling, p.13-18, June 05-05, 2005, Seattle, Washington
Xavier Prez-Costa , Marc Torrent-Moreno , Hannes Hartenstein, A performance comparison of Mobile IPv6, Hierarchical Mobile IPv6, fast handovers for Mobile IPv6 and their combination, ACM SIGMOBILE Mobile Computing and Communications Review, v.7 n.4, October
Everett Anderson , Kevin Eustice , Shane Markstrum , Mark Hansen , Peter Reiher, Mobile Contagion: Simulation of Infection and Defense, Proceedings of the 19th Workshop on Principles of Advanced and Distributed Simulation, p.80-87, June 01-03, 2005
Minkyong Kim , David Kotz, Periodic properties of user mobility and access-point popularity, Personal and Ubiquitous Computing, v.11 n.6, p.465-479, August 2007
Dan Lelescu , Ula C. Kozat , Ravi Jain , Mahadevan Balakrishnan, Model T++:: an empirical joint space-time registration model, Proceedings of the seventh ACM international symposium on Mobile ad hoc networking and computing, May 22-25, 2006, Florence, Italy
Guangwei Bai , Kehinde Oladosu , Carey Williamson, Performance benchmarking of wireless Web servers, Ad Hoc Networks, v.5 n.3, p.392-412, April, 2007
David Kotz , Kobby Essien, Analysis of a campus-wide wireless network, Proceedings of the 8th annual international conference on Mobile computing and networking, September 23-28, 2002, Atlanta, Georgia, USA
scheduling in IEEE 802.11n WLAN, Proceedings of the 6th Conference on WSEAS International Conference on Applied Computer Science, p.121-126, April 15-17, 2007, Hangzhou, China
Haining Liu , Magda El Zarki, Adaptive Delay and Synchronization Control for Wi-Fi Based Mobile AV Conferencing, Wireless Personal Communications: An International Journal, v.34 n.1-2, p.143-162, July 2005
Raffaele Bruno , Marco Conti , Enrico Gregori, Design of an enhanced access point to optimize TCP performance in Wi-Fi hotspot networks, Wireless Networks, v.13 n.2, p.259-274, April 2007
Haining Liu , Magda El Zarki, An adaptive delay and synchronization control scheme for Wi-Fi based audio/video conferencing, Wireless Networks, v.12 n.4, p.511-522, July 2006
Xiaoqiao (George) Meng , Starsky H. Y. Wong , Yuan Yuan , Songwu Lu, Characterizing flows in large wireless data networks, Proceedings of the 10th annual international conference on Mobile computing and networking, September 26-October 01, 2004, Philadelphia, PA, USA
Magdalena Balazinska , Paul Castro, Characterizing mobility and network usage in a corporate wireless local-area network, Proceedings of the 1st international conference on Mobile systems, applications and services, p.303-316, May 05-08, 2003, San Francisco, California
Seongkwan Kim , Se-kyu Park , Sunghyun Choi , Jaehwan Lee , Hanwook Jung, Management and Diagnosis Architecture for a Large-Scale Public WLAN, Proceedings of the 2006 International Symposium on on World of Wireless, Mobile and Multimedia Networks, p.301-307, June 26-29, 2006
David Kotz , Kobby Essien, Analysis of a campus-wide wireless network, Wireless Networks, v.11 n.1-2, p.115-133, January 2005
Joy Ghosh , Matthew J. Beal , Hung Q. Ngo , Chunming Qiao, On profiling mobility and predicting locations of wireless users, Proceedings of the second international workshop on Multi-hop ad hoc networks: from theory to reality, May 26-26, 2006, Florence, Italy
Chris Stolte , Diane Tang , Pat Hanrahan, Multiscale Visualization Using Data Cubes "InfoVis 2002 Best Paper", Proceedings of the IEEE Symposium on Information Visualization (InfoVis'02), p.7, October 28-29, 2002
Marvin McNett , Geoffrey M. Voelker, Access and mobility of wireless PDA users, ACM SIGMOBILE Mobile Computing and Communications Review, v.9 n.2, April 2005
Jong-Kwon Lee , Jennifer C. Hou, Modeling steady-state and transient behaviors of user mobility:: formulation, analysis, and application, Proceedings of the seventh ACM international symposium on Mobile ad hoc networking and computing, May 22-25, 2006, Florence, Italy
Atul Adya , Paramvir Bahl , Lili Qiu, Characterizing Alert and Browse Services of Mobile Clients, Proceedings of the General Track: 2002 USENIX Annual Technical Conference, p.343-356, June 10-15, 2002
Anand Balachandran , Geoffrey M. Voelker , Paramvir Bahl , P. Venkat Rangan, Characterizing user behavior and network performance in a public wireless LAN, ACM SIGMETRICS Performance Evaluation Review, v.30 n.1, June 2002
Camden C. Ho , Krishna N. Ramachandran , Kevin C. Almeroth , Elizabeth M. Belding-Royer, A scalable framework for wireless network monitoring, Proceedings of the 2nd ACM international workshop on Wireless mobile applications and services on WLAN hotspots, October 01-01, 2004, Philadelphia, PA, USA
Tristan Henderson , David Kotz , Ilya Abyzov, The changing usage of a mature campus-wide wireless network, Proceedings of the 10th annual international conference on Mobile computing and networking, September 26-October 01, 2004, Philadelphia, PA, USA
George Alyfantis , Stathes Hadjiefthymiades , Lazaros Merakos, An overlay smart spaces system for load balancing in wireless LANs, Mobile Networks and Applications, v.11 n.2, p.241-251, April 2006
Jerry Zhao , Ramesh Govindan, Understanding packet delivery performance in dense wireless sensor networks, Proceedings of the 1st international conference on Embedded networked sensor systems, November 05-07, 2003, Los Angeles, California, USA
Jihwang Yeo , Moustafa Youssef , Ashok Agrawala, A framework for wireless LAN monitoring and its applications, Proceedings of the 2004 ACM workshop on Wireless security, October 01-01, 2004, Philadelphia, PA, USA
Jakob Eriksson , Sharad Agarwal , Paramvir Bahl , Jitendra Padhye, Feasibility study of mesh networks for all-wireless offices, Proceedings of the 4th international conference on Mobile systems, applications and services, June 19-22, 2006, Uppsala, Sweden
Chris Stolte , Diane Tang , Pat Hanrahan, Query, analysis, and visualization of hierarchically structured data using Polaris, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Tom Goff , Nael B. Abu-Ghazaleh , Dhananjay S. Phatak , Ridvan Kahvecioglu, Preemptive routing in Ad Hoc networks, Proceedings of the 7th annual international conference on Mobile computing and networking, p.43-52, July 2001, Rome, Italy
Yu-Chung Cheng , John Bellardo , Pter Benk , Alex C. Snoeren , Geoffrey M. Voelker , Stefan Savage, Jigsaw: solving the puzzle of enterprise 802.11 analysis, ACM SIGCOMM Computer Communication Review, v.36 n.4, October 2006
Paramvir Bahl , Ranveer Chandra , Jitendra Padhye , Lenin Ravindranath , Manpreet Singh , Alec Wolman , Brian Zill, Enhancing the security of corporate Wi-Fi ntworks using DAIR, Proceedings of the 4th international conference on Mobile systems, applications and services, June 19-22, 2006, Uppsala, Sweden
Daniel B. Faria , David R. Cheriton, DoS and authentication in wireless public access networks, Proceedings of the 3rd ACM workshop on Wireless security, p.47-56, September 28-28, 2002, Atlanta, GA, USA
Maxim Raya , Jean-Pierre Hubaux , Imad Aad, DOMINO: a system to detect greedy behavior in IEEE 802.11 hotspots, Proceedings of the 2nd international conference on Mobile systems, applications, and services, June 06-09, 2004, Boston, MA, USA
Chris Stolte , Diane Tang , Pat Hanrahan, Multiscale Visualization Using Data Cubes, IEEE Transactions on Visualization and Computer Graphics, v.9 n.2, p.176-187, April
Mahajan , Maya Rodrig , David Wetherall , John Zahorjan, Analyzing the MAC-level behavior of wireless networks in the wild, ACM SIGCOMM Computer Communication Review, v.36 n.4, October 2006
Giovanni Resta , Paolo Santi, The QoS-RWP mobility and user behavior model for public area wireless networks, Proceedings of the 9th ACM international symposium on Modeling analysis and simulation of wireless and mobile systems, October 02-06, 2006, Terromolinos, Spain
Qunwei Zheng , Xiaoyan Hong , Sibabrata Ray, Recent advances in mobility modeling for mobile ad hoc network research, Proceedings of the 42nd annual Southeast regional conference, April 02-03, 2004, Huntsville, Alabama
Ben Liang , Zygmunt J. Haas, Predictive distance-based mobility management for multidimensional PCS networks, IEEE/ACM Transactions on Networking (TON), v.11 n.5, p.718-732, October
Sunwoong Choi , Kihong Park , Chong-kwon Kim, On the performance characteristics of WLANs: revisited, ACM SIGMETRICS Performance Evaluation Review, v.33 n.1, June 2005
Tom Goff , Nael Abu-Ghazaleh , Dhananjay Phatak , Ridvan Kahvecioglu, Preemptive routing in ad hoc networks, Journal of Parallel and Distributed Computing, v.63 n.2, p.123-140, February
Albert M. Lai , Jason Nieh , Bhagyashree Bohra , Vijayarka Nandikonda , Abhishek P. Surana , Suchita Varshneya, Improving web browsing performance on wireless pdas using thin-client computing, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA
Chris Stolte , Diane Tang , Pat Hanrahan, Polaris: A System for Query, Analysis, and Visualization of Multidimensional Relational Databases, IEEE Transactions on Visualization and Computer Graphics, v.8 n.1, p.52-65, January 2002
S. Jae Yang , Jason Nieh , Shilpa Krishnappa , Aparna Mohla , Mahdi Sajjadpour, Web browsing performance of wireless thin-client computing, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
N. Blefari-Melazzi , D. Di Sorte , M. Femminella , G. Reali, Autonomic control and personalization of a wireless access network, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.10, p.2645-2676, July, 2007
Janise McNair , Tuna Tugcu , Wenye Wang , Jiang Xie, A survey of cross-layer performance enhancements for mobile IP networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.49 n.2, p.119-146, 5 October 2005 | local-area wireless networks;network analysis |
346037 | Reducing virtual call overheads in a Java VM just-in-time compiler. | Java, an object-oriented language, uses virtual methods to support the extension and reuse of classes. Unfortunately, virtual method calls affect performance and thus require an efficient implementation, especially when just-in-time (JIT) compilation is done. Inline caches and type feedback are solutions used by compilers for dynamically-typed object-oriented languages such as SELF [1, 2, 3], where virtual call overheads are much more critical to performance than in Java. With an inline cache, a virtual call that would otherwise have been translated into an indirect jump with two loads is translated into a simpler direct jump with a single compare. With type feedback combined with adaptive compilation, virtual methods can be inlined using checking code which verifies if the target method is equal to the inlined one. This paper evaluates the performance impact of these techniques in an actual Java virtual machine, which is our new open source Java VM JIT compiler called LaTTe [4]. We also discuss the engineering issues in implementing these techniques. Our experimental results with the SPECjvm98 benchmarks indicate that while monomorphic inline caches and polymorphic inline caches achieve speedups of as much as a geometric mean of 3% and 9% respectively, type feedback cannot improve further over polymorphic inline caches and even degrades the performance for some programs. | Introduction
Java is a recently created object-oriented programming
language [5]. As an object-oriented programming
language, it supports virtual methods, which allow different
code to be executed for objects of different types
with the same call.
Virtual method calls in Java incur a performance
penalty because the target of these calls can only be
determined at run-time based on the actual type of
objects, requiring run-time type resolution. For exam-
ple, extra code needs to be generated by a just-in-time
(JIT) compiler such that in many Java JIT compilers
like Kaffe [6], CACAO [7], and LaTTe [8], a virtual
method call is translated into a sequence of loads followed
by an indirect jump rather than a direct jump
as for other static method calls.
In dynamically-typed object-oriented languages
such as SELF, however, virtual calls cannot be implemented
by using simple sequences of loads followed by
an indirect jump like in Java [1]. Furthermore, virtual
calls are much more frequent than in Java. So, two
aggressive techniques have been employed to reduce
virtual call overheads: inline caches and type feedback.
With these techniques, a virtual method call can be
translated into a simpler sequence of compare then direct
jump or can even be inlined with type checking
code. Although both techniques are certainly applicable
to Java, little is known about their performance
impact. Since virtual method calls are less frequent and
less costly in Java while both techniques involve additional
translation overhead, it is important to evaluate
these techniques separately, since the results from SELF
may not apply.
This paper evaluates both techniques in an actual
Java JIT compiler. The compiler is included in our
open source Java virtual machine called LaTTe.
Although the implementation of both techniques in
LaTTe was straightforward, there were a few trade-offs
and optimization opportunities which we want to
discuss in this paper. We also provide detailed analysis
of their performance impact on Java programs in the
SPECjvm98 benchmarks.
The rest of the paper is organized as follows. Section
2 reviews method calls in Java and summarizes
the virtual method call mechanism used by the LaTTe
JVM. Section 3 describes how we implemented
inline caches and type feedback in LaTTe. Section 4
shows experimental results. Related work is described
in Section 5 and the summary follows in Section 6.
Background
2.1 Method invocation in Java
The Java programming language provides two types
of methods: instance methods and class methods [9]. A
class method is invoked based on the class it is declared
in via invokestatic. Because it is bound statically,
the JIT compiler knows which method will be invoked
at compile time.
An instance method, on the other hand, is always
invoked with respect to an object, which is sometimes
called a receiver, via invokevirtual. Because the actual
type of the object is known only at run-time (i.e.,
bound dynamically), the JIT compiler cannot generally
determine its target at compile time. There are
some instance methods that can be bound statically,
though. Examples are final methods, private methods,
all methods in final classes, and instance methods
called through the invokespecial bytecode (e.g., instance
methods for special handling of superclass, private,
and instance initialization [9]).
Generally, a method invocation incurs overheads
such as creating a new activation record, passing arguments,
and so on. In the case of dynamic binding,
there is the additional overhead of finding the target
method (which is called the method dispatching overhead).
2.2 LaTTe JIT Compiler and Virtual Method Table
LaTTe is a virtual machine which is able to execute
Java bytecode. It includes a novel JIT compiler
targeted to RISC machines (specifically the UltraSPARC).
The JIT compiler generates code with
good quality through a clever mapping of Java stack
operands to registers with negligible overhead [8]. It
also performs some traditional optimizations such as
common subexpression elimination or loop invariant
code motion. Additionally, the runtime components
of LaTTe, including thread synchronization [10], exception
handling [11], and garbage collection [12], have
been optimized. As a result, the performance of LaTTe
is competitive to that of Sun's HotSpot [13] and Sun's
JDK 1.2 production release [14].
LaTTe maintains a virtual method table (VMT) for
each loaded class. The table contains the start address
of each method defined in the class or inherited from
the superclass. Due to the use of single inheritance in
Java, if the start address of a method is placed at offset
n in the virtual method table of a class, it can also be
placed at offset n in the virtual method tables of all
subclasses of the class. Consequently, the offset n is a
translation-time constant. Since each object includes a
pointer to the method table of its corresponding class,
a virtual method invocation can be translated into an
indirect function call after two loads: load the VMT
pointer from the object, index into the table to obtain the start
address, and then perform the indirect call.
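To make the dispatch sequence concrete, the following C sketch mimics what the generated code does; the struct layout and names are illustrative assumptions, not LaTTe's actual data structures.

struct Obj;
typedef int (*Method)(struct Obj *self);

typedef struct VMT {
    Method entry[32];            /* start address of each virtual method; 32 is illustrative */
} VMT;

typedef struct Obj {
    VMT *vmt;                    /* every object begins with a pointer to its class's VMT */
    /* ... instance fields ... */
} Obj;

/* #index is a translation-time constant for a given virtual call site. */
static int dispatch(Obj *obj, int index)
{
    VMT *vmt = obj->vmt;                      /* first load:  ld [%o0], %g1        */
    Method target = vmt->entry[index];        /* second load: ld [%g1+#index], %g1 */
    return target(obj);                       /* indirect call                     */
}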
For statically-bound method calls, LaTTe generates
a direct jump at the call site, or inlines the target
method unless the bytecode size is huge (where the
invocation overhead would be negligible) or the inlining
depth is large (to prevent recursive calls from being
inlined infinitely).
3 Inline Caches and Type Feedback
In this section, we review the techniques of inline
caches and type feedback and describe our implemen-
tations. We use the example class hierarchy in Figure 1
throughout this section. Both classes B and C are sub-classes
of A and have an additional eld as well as the
one inherited from class A. Class B inherits the method
GetField1() from class A, and class C overrides it. All
assembly code in this section is SPARC assembly.
class A {
    int field1;
    int GetField1() { return field1; }
}
class B extends A {
    int field2;
    int GetField2() { return field2; }
}
class C extends A {
    int field3;
    int GetField1() { return 0; }
    int GetField3() { return field3; }
}
Figure 1. Example class hierarchy
3.1 Inline Caches
3.1.1 Monomorphic Inline Caches
When a JIT compiler translates obj.GetField1()
in Figure 2(a), it cannot know which version of
GetField1() will actually be called because obj can be
an object of class C as well as class A or B. Even if class C
did not exist, the JIT compiler could not be sure
that A's GetField1() will be called, because a class such as C can
be dynamically loaded later. With a VMT this call is
translated into a sequence of load-load-indirect jump,
as shown in Figure 2(b), where #index means the offset
in the VMT (# denotes a translation-time constant).
(a) void dummy() {
        A obj;
        ...
        obj.GetField1();
    }
(b) ld [%o0], %g1          // %o0 contains the object
    ld [%g1+#index], %g1   // indexing the VMT
    // ... indirect call
Figure 2. Example virtual call in Java and corresponding VMT sequences
The inline cache is a totally different method dispatching
mechanism which "inlines" the address of the
last dispatched method at a call site. Figure 3(a)
shows the translated code using a monomorphic inline
cache (MIC), where the call site just jumps to a system
lookup routine called method_dispatcher via a stub.
This stub code sets the register %g1 to #index, and thus
method_dispatcher can determine the called method.
It is called an empty inline cache because there is no
history for the target method yet. When the call is
executed for the first time, method_dispatcher finds
the target method based on the type of the receiver,
translates it if it has not been translated yet, and updates
the call site to point to the translated method,
which is prepended by the type checking code 1 . Figure
3(b) shows the state of the inline cache when the
first encountered receiver is an object of class A. Now,
our inline cache includes history for one method invocation
to the target method. Detailed type checking
code is shown in Figure 4.
Figure 3. Monomorphic inline caches: (a) an empty inline cache, where the call site jumps to method_dispatcher through a trampoline stub that sets %g1 to #index; (b) a monomorphic inline cache, where the call site calls A.GetField1 through type-checking code (ld [%o0], %g2; cmp %g2, #A.VMT; bne fail_handler) that falls back to fail_handler on a mismatch
Until a receiver with a different type is encountered,
the state of the inline cache does not change. If
such a receiver is encountered, fail_handler operates
just like method_dispatcher: find the target method,
translate it if it has not been translated yet, and update
the call site.
3.1.2 Polymorphic Inline Caches
A polymorphic inline cache (PIC) differs from a MIC
in dealing with the failure of type checking. Instead of
updating the call site repeatedly, it creates a PIC stub
code, and makes the call site point to this stub code.
The PIC stub code is composed of a sequence of com-
pare, branch, and direct jump instructions where all
previously encountered receiver types and corresponding
method addresses are inlined. Figure 5(a) shows
the status of a call site and the corresponding PIC stub
code when the call site encounters objects of class A
and class C. The detailed PIC stub code is as shown in
Figure
6.
1 Since it is possible that a method has multiple type checking
codes due to inheritance, a type checking code can be separated
from the corresponding method body.
/* A.VMT (VMT pointer of class A) is a translation-time constant. */
sethi %hi(#A.VMT), %g3 // load 32-bit constant value
or %g3, %lo(#A.VMT), %g3 //
cmp %g2, %g3 // compare two VMT pointers
bne FAIL_HANDLER // branch to fail_handler code
// if the two VMTs are not equal
mov %o7, %g2 // delay slot instruction
/* If the type checking code can be located in front of the method body,
this code is not required */
JUMP_TO_TARGET:
call address of A.GetField1
mov %g2, %o7 // to prevent from returning back here
/* In our implementation FAIL_HANDLER is located in front of the above code */
FAIL_HANDLER:
/* index of the called method in VMT is a translation-time constant */
call fail_handler // call fail_handler
mov #index, %g1 // delay slot instruction
// index value is passed to the fixup function
// via the g1 register
Figure 4. Detailed type checking code
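Expressed in C-like terms (a sketch with hypothetical names, not the generated code itself), the prologue above boils down to a single pointer comparison before entering the method body:

struct Obj { const void *vmt; /* ... */ };                 /* receiver: first word is the VMT pointer */

extern const void *A_VMT;                                  /* the translation-time constant #A.VMT     */
extern int A_GetField1_body(struct Obj *self);             /* translated body of A.GetField1           */
extern int fail_handler(struct Obj *self, int vmt_index);  /* rebinds the call site, then dispatches   */

int A_GetField1_checked(struct Obj *self)                  /* entry point recorded at the call site    */
{
    if (self->vmt != A_VMT)                                /* cmp %g2, %g3; bne FAIL_HANDLER           */
        return fail_handler(self, 0 /* #index */);
    return A_GetField1_body(self);                         /* fall through into the method body        */
}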
Figure 5. Polymorphic inline caches: (a) a polymorphic inline cache, where the call site calls a PIC stub; (b) handling megamorphic sites with shared VMT-style code (ld [%o0], %g1; ld [%g1+#index], %g1; indirect jump)
It is not practical for the PIC stub code to grow
without limit. If the number of entries in a PIC stub
code exceeds a pre-determined value, the corresponding
call site is called a megamorphic site, and we use VMT-
style code instead. Since this code only depends on the
index value in the VMT, it can be shared among many
call sites. Figure 5(b) explains how megamorphic sites
are handled. Although MICs are used in SELF for
megamorphic sites, this is only because the VMT-style
mechanism cannot be used in SELF, and we think that
VMT-style code is more appropriate for megamorphic
sites than MICs since the latter may cause frequent
updates of the call site, with frequent I-cache flushes
as a result.
There are several variations of PICs. If space is
tight, the PIC stub can be shared among identical call
sites 2 . This type of PIC is called a shared PIC, while
the former type is called a non-shared PIC when the
distinction is required.
The PIC stub code can contain counting code for
each type test hit and can be reordered based on the
frequency of the hits to reduce the number of type tests
needed to find the target. If the reordering is performed
only once, and a PIC stub which has been reordered
but contains no counting code is used afterwards, it is called
a counting PIC. It is also possible that the reordering
is performed periodically and PIC stubs always have
2 If the possible sets of target methods are the same, we call
these call sites identical.
mov %o7, %g1 // save return address in g1 register
/* A.VMT(VMT pointer of class A) is a translation-time constant. */
sethi %hi(#A.VMT), %g3 // load 32bit constant value
or %g3, %lo(#A.VMT), %g3
cmp %g2, %g3 // compare two VMT pointers
bne next1 //
nop // delay slot instruction
call address of A.GetField1 // jump to A.GetField1 if two VMTs are equal
mov %g1, %o7 // set correct return address
/* C.VMT(VMT pointer of class C) is a translation-time constant. */
next1:
sethi %hi(#C.VMT), %g3 // load 32-bit constant value
or %g3, %lo(#C.VMT), %g3
cmp %g2, %g3
bne next2
nop
call address of C.GetField1
mov %g1, %o7
next2:
call fixupFailedCheckFromPIC // call fixup function
nop
Figure
6. Detailed PIC stub code
counting code, and this type of PIC is called a periodic
PIC.
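The following C sketch illustrates the dispatch logic that a PIC stub implements, including the optional counting code; the data structures and names are hypothetical and only serve to show the idea, since the real stub is generated machine code (Figure 6).

#define PIC_MAX_ENTRIES 5                        /* beyond this the site becomes megamorphic */

struct Obj { const void *vmt; };
typedef int (*Target)(struct Obj *self);

struct PicEntry {
    const void *vmt;                             /* receiver type seen at this call site */
    Target      target;                          /* translated method for that type      */
    unsigned    hits;                            /* counting PICs record type-test hits  */
};

struct PicStub {
    int             used;
    struct PicEntry entry[PIC_MAX_ENTRIES];
};

extern int pic_miss(struct PicStub *pic, struct Obj *self);  /* extend the stub or switch to VMT code */

static int pic_dispatch(struct PicStub *pic, struct Obj *self)
{
    int i;
    for (i = 0; i < pic->used; i++) {
        if (pic->entry[i].vmt == self->vmt) {    /* the cmp/bne chain of the generated stub */
            pic->entry[i].hits++;                /* counting code (absent in a plain PIC)   */
            return pic->entry[i].target(self);
        }
    }
    return pic_miss(pic, self);                  /* fixupFailedCheckFromPIC in Figure 6     */
}

Reordering the entries by hits (once for a counting PIC, periodically for a periodic PIC) moves the most frequent receiver type to the front of the compare chain.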
3.1.3 VMT vs. Inline Caches
Inline caches are favored over VMTs for two reasons.
First, the VMT mechanism requires an indirect jump,
which is not easily scheduled by modern superscalar
microprocessors 3 [15, 16], whereas inline caches can
be faster on modern microprocessors which do branch
prediction. Second, VMTs do not provide any information
about call sites. With inline caches, we can get
information about the receivers which have been encountered,
though MICs can only give the last one. This
information can be used for other optimizations, such
as method inlining.
3.2 Type Feedback
Although inline caches can reduce the method dispatch
overhead at virtual call sites, the call overhead
3 The cost of an indirect jump is higher on the UltraSPARC due
to the lack of a BTB (Branch Target Buffer).
itself still remains. In order to reduce the call overhead,
we need to inline the method.
The idea of type feedback [3] is to extract type information
of virtual calls from previous runs and feed it
back to the compiler for optimization. With type feedback,
a virtual call can be inlined with guards which
verify whether the target method is equal to the inlined
method (we call this conditional inlining).
3.2.1 Framework of Type Feedback
In our implementation, type feedback is based on PICs,
since they can provide more accurate information about call
sites than MICs or VMTs. Type feedback also requires
an adaptive compilation framework, and has
been implemented on an adaptive version of LaTTe,
which selects methods to aggressively optimize based
on method run counts. When a method is called for the
first time, it is translated with register allocation and
traditional optimizations 4 while virtual method calls
4 We optimize even during the initial translation to isolate
the performance impact of inlining for a fair comparison with
the other configurations in our experiments; see Section 4.1.
within it are handled by PICs. If the number of times
this method is called exceeds a certain threshold, it is
retranslated, this time with conditional inlining performed as well.
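A minimal sketch of such a counter-based trigger follows (illustrative names and threshold; the adaptive framework in LaTTe is of course more involved):

struct Method {
    unsigned run_count;
    int      retranslated;
};

extern void retranslate_with_conditional_inlining(struct Method *m);  /* consults the PIC contents */

#define RETRANSLATION_THRESHOLD 1000u   /* illustrative value */

/* Invoked on entry to a baseline-translated method. */
static void count_and_maybe_retranslate(struct Method *m)
{
    if (!m->retranslated && ++m->run_count > RETRANSLATION_THRESHOLD) {
        m->retranslated = 1;
        retranslate_with_conditional_inlining(m);
    }
}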
3.2.2 Conditional Inlining
The compiler decides whether a call site is inlined or
not based on the status of inline caches. For example,
if the call site in Figure 3(b) remains a monomorphic
site, at retranslation time it will be inlined with type
checking code as follows:
if (obj.VMT == #A.VMT)
    // inlined body of A.GetField1()
else
    // fall back to the virtual call
If the call site points to a PIC stub code, but there is
only one target method in the stub code, then we can
do conditional inlining, except this time the comparison
is based on addresses, not on receiver types. For
example, if our PIC stub code in Figure 5(a) were composed
of type checks for class A and class B (not class A
and class C), the addresses of both GetField1s would be
identical. So, we can inline the method, but the type
check should be replaced by an address check, which
includes access to the VMT (two loads), as follows:
if (obj.VMT[#index of method GetField1]
        == #address of A.GetField1) // load-load
    // inlined body of A.GetField1()
else
    // fall back to the virtual call
If the frequency information of each type or method
is available by using a counting PIC, we can improve
on the all-or-nothing strategy. Even though there are
multiple receivers or multiple target methods, we can
inline the call site with a type test or an address test if
one case is dominant among the other cases in the PIC
stub. Currently, the criteria value to decide whether a
case is dominant or not is 80%: If the count of type
test hits in a PIC stub exceeds 80% of the total count
of PIC stub, it is inlined with type checking code.
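The decision rule just described can be sketched as follows (again with hypothetical structures; the entries and hit counters come from the counting PIC of the call site):

struct PicCase {
    const void *vmt;       /* receiver type            */
    const void *target;    /* translated target method */
    unsigned    hits;      /* type-test hit counter    */
};

enum InlineKind { NO_INLINE, TYPE_CHECK_INLINE, ADDRESS_CHECK_INLINE };

/* Decide how a virtual call site is inlined at retranslation time. */
static enum InlineKind decide_inlining(const struct PicCase *c, int n)
{
    unsigned total = 0, best = 0;
    int i, single_target = 1;

    if (n <= 0)
        return NO_INLINE;                      /* no profile information for this site          */
    for (i = 0; i < n; i++) {
        total += c[i].hits;
        if (c[i].hits > best)
            best = c[i].hits;
        if (c[i].target != c[0].target)
            single_target = 0;
    }
    if (n == 1)
        return TYPE_CHECK_INLINE;              /* monomorphic site: guard with a type check      */
    if (single_target)
        return ADDRESS_CHECK_INLINE;           /* several receiver types but one target method   */
    if (best * 100u >= total * 80u)
        return TYPE_CHECK_INLINE;              /* one case dominates (>= 80% of all hits)        */
    return NO_INLINE;                          /* no single method can be inlined at this site   */
}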
3.2.3 Static Type Prediction
For those call sites located at untaken execution paths
during initial runs, we do not have any information on
the most probable receiver type (but these can be collected
even after retranslation). However, if the class of
the object on which a virtual call is made has no sub-class
at translation time, we can easily predict that the
receiver type would be that class at runtime. Although
Java allows dynamic class loading, we found that this
prediction is quite accurate for most programs. For
the following case, for example, we can inline the call
site even if there is no information in the inline cache
during retranslation.
if () {
3.2.4 Inlining Heuristic: single vs. multiple
In the previous sections, we inlined only a single method for a
virtual call site. It is possible that a call site has
two or more target methods, none of which is
dominant. In such a case, we might lose inlining
opportunities by restricting the number of inlineable
methods for a call site. However, we found that this is
not the case for the Java programs which we use for testing.
In these programs, most (95%) virtual call sites call just
a single callee, and allowing multiple methods to be inlined
at a call site does not increase the number of inlined
virtual calls significantly.
4 Experimental results
In this section, we evaluate the performance impact
of inline caches and type feedback.
4.1 Experimental Environment
Our benchmarks are composed of the SPECjvm98
benchmark suite 5 [17], and Table 1 shows the list of
programs and a short description for each.
Benchmark Description Bytes
201 compress Compress Utility 24326
202 jess Expert Shell System 45392
213 javac Java Compiler 92000
222 mpegaudio MP3 Decompressor 38930
228 jack Parser Generator 51380
Table 1. Java Benchmark Description and Translated Bytecode Size
Table 2 lists the configurations used in our experiments.
LaTTe-VMT, LaTTe-MIC, and LaTTe-PIC are
all the same except in how they handle virtual calls: by
using VMTs, MICs and PICs respectively. LaTTe-TF
inlines virtual calls using type feedback on an adaptive
version of LaTTe, where initial translation is identical
to LaTTe-PIC, as described in Section 3.2. Variations
in PICs are denoted with each variation surrounded
5 200 check is excluded since it is for correctness testing only.
by brackets. For example, a shared PIC is denoted by
PIC[S], a counting PIC by PIC[C], and a periodic PIC
by PIC[P]. The default version of a PIC is denoted by
PIC[] when the distinction is needed.
System Description
LaTTe-VMT Virtual calls are handled by VMT.
LaTTe-MIC Virtual calls are handled by MIC.
LaTTe-PIC Virtual calls are handled by PIC.
LaTTe-TF Virtual calls are inlined using type feedback at retranslation time.
Table 2. Systems used for benchmarking
Our test machine is a Sun Ultra5 270MHz with 256
MB of memory running Solaris 2.6, tested in single-user
mode. We ran each benchmark 5 times and took
the minimum running time 6 , which includes both the
JIT compilation overhead and the garbage collection
time.
4.2 Characteristics of Virtual Calls
Table 3 shows the characteristics of virtual calls.
In the table, V-Call means the total count of virtual
calls, M-Call means the total count of monomorphic
calls, and S-target means the total count of virtual calls
where the target method is just one at runtime. About 85%
of virtual calls are monomorphic calls, and about 90% of
virtual calls have only one target method. Some programs
like 213 javac, 227 mtrt, and 228 jack have
many call sites where the target method is one, even
though there are multiple receiver types, and thus we
can expect that address check inlining may be effective
on these programs. We can also expect that
compress will not be much affected by how virtual
calls are implemented, since the number of virtual
calls is extremely small.
4.3 Analysis of Monomorphic Inline Caches
Table 4 shows the characteristics of MICs. In the
table, V-Call means the total count of virtual calls,
P-Call means the total count of calls which are called
at polymorphic call sites, and Miss means the total
count of type check misses. There is no trend in the
miss ratios. While some programs such as 209 db and
228 jack have very low miss ratios compared to V-
Call, type check misses are very common in 202 jess
and 213 javac. Since a type check miss requires invalidating
part of the I-cache, the miss ratio can greatly
6 For the SPECjvm98 benchmarks, our total elapsed running
time is not comparable with a SPECjvm98 metric.
affect the overall performance. So we can expect that
the performance of 202 jess and 213 javac may be
worse with monomorphic inline caches.
4.4 Analysis of Polymorphic Inline Caches
Tables 5, 6, 7, and 8 show the average number of
type checks in a PIC stub for each configuration of
PICs. Each column of the tables is identified by the
maximum number of possible entries in a PIC stub and
the threshold value which determines when reordering
takes place. This value is applied to both PIC[C] and
PIC[P], although the reordering takes place just once in
the former while it takes place periodically in the latter.
If the threshold value is zero, it means no counting.
At first glance, we find that the average numbers of
type checks are very small, even though monomorphic
sites are not included in the numbers. If the average
number is calculated for every inline cache, including
monomorphic sites, the number will be even
closer to 1. However, only if counting is enabled are
the numbers less than 2 in every case.
From Tables 5 and 6, it is clear that counting PICs
are effective in reducing the number of type checks in
a PIC stub, except for 228 jack. Although the numbers
are generally reduced as the threshold value is
increased, they are unchanged or even increased for
some programs and seem to saturate at certain values,
so simply increasing the threshold does not guarantee
improvements. 228 jack has very different characteristics
from the other programs, and these come from
a single polymorphic site 7 which accounts for about
half of the total polymorphic calls and exhibits very
strange behavior: after the call site is switched to a
polymorphic inline cache from a monomorphic inline
cache, the newly encountered type is received repeatedly
about a thousand times, and thereafter the
former type is used repeatedly for over 1 million times.
So the default PIC scheme without counting is better
than that with counting in this case.
We can also see the effect of inaccuracies caused by
sharing PIC stubs from Tables 5 and 6. Although it
seems natural that a non-shared version would be more
accurate than a shared version, the difference is not
apparent in the non-counting versions. For the other configurations,
the numbers in Table 5 are better than those in
Table 6, except for 213 javac, where some PIC stubs
are changed into VMT-style code because sharing increases
the number of entries. However, the difference
is lower than 0.3 in most cases.
7 A call site in the "indexOf" method of the java.util.Vector class
which calls the "equals" method.
Benchmark V-Call M-Call S-target M-Call/V-Call S-target/V-Call
compress 12.9 11.6 11.8 0.897 0.912
jess 34,306 27,718 28,435 0.808 0.829
222 mpegaudio 10,025 8,781 8,841 0.876 0.882
228 jack 17,247 14,094 16,959 0.817 0.983
GEOMEAN 0.854 0.904
Table 3. Characteristics of virtual calls
Benchmark V-Call(1000) P-Call(1000) Miss(1000) Miss/V-Call Miss/P-Call
compress 12.9 1.3 0.58 0.045 0.433
jess 34,306 6,587 3201 0.093 0.486
222 mpegaudio 10,025 1,244 54 0.005 0.044
Table 4. Characteristics of monomorphic inline caches
Tables 7 and 8 are shown for comparison with Tables
5 and 6. Although a periodically reordered PIC is
hard to use in real implementations because it always
incurs counting overhead which involves load-add-store
sequences, it can be seen as a somewhat ideal configuration
in terms of the number of type checks. The
difference between the periodic version and the non-periodic
version is lower than 0.2 in most programs,
and thus the quality of counting PICs is quite acceptable.
Table 9 shows the space overhead of PICs for both
non-shared and shared versions. In the table, N means
the number of PIC stubs, and max means the possible
maximum number of entries in each PIC stub. The
overhead seems to be small in most programs except for
213 javac, where the shared version can greatly reduce
the overhead. Since the sharing of PIC stubs does not
degrade the performance severely for most programs,
shared PICs can be useful when space is tight.
4.5 Analysis of Type Feedback
Tables 10, 11, and 12 show the effect of type feedback
in terms of the number of inlined virtual calls. As
a base system, four different PIC variations (PIC[S],
PIC[SC], PIC[], and PIC[C]) are used. The main purpose
of the PICs here is to provide profile information. So a
larger maximum number of entries in a PIC (10) is used,
and the threshold for counting PICs is set to 1000 in
order not to affect the accuracy too much 8 .
Generally, the number of inlined virtual calls is reduced
as the retranslation threshold is increased. Although
more accurate profile information is available
with a high retranslation threshold than with a low
retranslation threshold, the opportunities missed by delaying
retranslation seem to be high. In addition, the
number of inlined calls for many programs is constant
regardless of the type of PICs. There can be two reasons:
one possibility is that the method is too big to be
inlined. In this case, inlining such a method is related
to the inlining heuristic and is beyond the scope of this
paper. The other possibility is that a single method is
not dominant for a call site. However, this is not the
case for the SPECjvm98 benchmarks, as will be shown
in the following section.
For some programs like 213 javac and 227 mtrt,
which are affected by the type of PICs used, the counting
version is preferable to the non-counting version. A
call site which has multiple receiver types, and thus can
be inlined only with an address check, can be inlined
with a type check if counting information is available
and only one receiver type is dominant. And a call
site which involves two or more target methods, and
thus cannot be inlined in our implementation, can be
inlined if one method is dominant. While 213 javac
is influenced by both effects, i.e., the amounts of type
check inlining and address check inlining are both increased,
8 Since a PIC is reused for a call site where method inlining is
not done, the value should not be too large.
Benchmark PIC
compress 1.540 1.480 1.498 1.526 1.540 1.540 1.480 1.498 1.526 1.540
jess 1.517 1.488 1.507 1.486 1.486 1.517 1.488 1.507 1.486 1.486
222 mpegaudio 2.120 1.359 1.267 1.266 1.269 2.120 1.359 1.267 1.266 1.269
228 jack 1.209 1.800 1.800 1.800 1.800 1.209 1.800 1.800 1.800 1.800
Table 5. Average number of type checks with non-shared counting PICs (LaTTe-PIC[C])
Benchmark PIC
compress 1.790 1.693 1.711 1.728 1.769 1.790 1.693 1.711 1.728 1.769
jess 1.579 1.570 1.578 1.576 1.576 1.579 1.570 1.578 1.576 1.577
222 mpegaudio 2.161 1.358 1.267 1.267 1.269 2.161 1.358 1.267 1.267 1.269
228 jack 1.051 1.980 1.980 1.981 1.981 1.398 1.895 1.895 1.895 1.895
Table 6. Average number of type checks with shared counting PICs (LaTTe-PIC[SC])
the former effect is dominant in 227 mtrt, where the
increase in the amount of type check inlining is almost
the same as the decrease in address check inlining.
4.6 Analysis of Inlining Heuristic
Tables 13, 14, and 15 show the total number of
inlined virtual calls under different inlining heuristics:
inlining a single method for a call site and inlining all
the possible methods for a call site. The numbers in
the column of Single method inlining are the sum
of type check inlining and address check inlining in the
tables of the previous section. The numbers in the column
of All method inlining are obtained by inlining all
the methods which have been encountered during the initial
run and can be inlined by our inlining rule (size and
depth) 9 . As we have expected from the fact that about
90% of virtual call sites have only one target method,
there is little improvement in terms of the number of
inlined virtual calls, even though all possible methods
are permitted to be inlined. Only 213 javac has opportunities
to be improved by inlining multiple methods
for a call site. So, we can conclude that inlining only
a single method for a call site is sufficient for most programs.
If many call sites still remain not inlined, this is due
to other factors like method size or inlining depth.
9 The counting version is excluded since there is no difference from
the non-counting version when all the possible methods are inlined.
The shared version is also excluded since it causes code explosion
in 213 javac.
4.7 Performance Impact of Inline Caches and
Type Feedback
Table 16 shows the total running time (tot) of each
program for 4 configurations of LaTTe. Translation
overhead (tr) is also included in the total running time.
Since there is little difference in running time between
the different configurations of PIC and TF, only one
instance of each is listed here. The exact
configurations are as follows:
1. PIC: non-shared counting PICs, maximum number
of entries = 5, reordering threshold = 100
2. TF: based on non-shared counting PICs, maximum
number of entries = 10, reordering threshold =
1000, retranslation threshold
On the whole, MICs improve the performance of
LaTTe by a geometric mean of 3.0%, PICs by 9.0%,
and type feedback by 7.4%, compared with LaTTe-VMT.
As pointed out in the previous section, MICs
exhibit poor performance in 202 jess and 213 javac,
which have a high ratio of type check misses. PICs, as
we expected, solve the problem experienced by
MICs in the above programs, without
severe degradation in other programs, and improve
the performance of almost all programs compared with
VMTs.
However, type feedback seems to be effective only
for 227 mtrt, where the number of inlined virtual calls
is much larger than in other programs. The number of
Benchmark PIC
compress 1.480 1.498 1.526 1.540 1.480 1.498 1.526 1.540
jess 1.366 1.366 1.367 1.367 1.366 1.367 1.367 1.367
222 mpegaudio 1.264 1.265 1.267 1.270 1.264 1.265 1.267 1.270
228 jack 1.019 1.019 1.019 1.020 1.019 1.019 1.019 1.020
Table 7. Average number of type checks with non-shared periodic PICs (LaTTe-PIC[P])
Benchmark PIC
compress 1.693 1.711 1.728 1.769 1.693 1.711 1.728 1.769
jess 1.429 1.429
222 mpegaudio 1.265 1.266 1.267 1.269 1.265 1.266 1.267 1.269
228 jack 1.030 1.030 1.030 1.031 1.114 1.114 1.114 1.114
Table 8. Average number of type checks with shared periodic PICs (LaTTe-PIC[SP])
inlined virtual calls in the other programs seems to be too low
to compensate for both the retranslation overhead (increase
in translation time) and the inlining overhead (increase
in code size, register pressure, and so on). Since
the performance of type feedback depends on the inlining
heuristic as well as the retranslation framework,
both have to be carefully implemented to measure the
effect of type feedback correctly. Our implementation
could be improved on both of these points.
However, the result from 227 mtrt gives us some
expectation about the effect of type feedback. In
227 mtrt, some getter methods such as GetX, GetY,
and GetZ are very frequent, and the performance of the
benchmark is greatly improved by inlining such methods.
So the more common a coding style using accessor
methods is, the more effective type feedback could be.
5 Related work
Our work is based on polymorphic inline caches and
type feedback. Polymorphic inline caches were studied
by Urs Hölzle et al. [2] in the SELF compiler and
achieved a median speedup of 11% over monomorphic
inline caches. Type feedback was proposed by Urs
Hölzle and David Ungar [3]. They implemented type
feedback in the SELF compiler using PICs and improved
performance by a factor of 1.7 compared with a
non-feedback compiler. Since virtual calls are more frequent
in SELF, and also since the default dispatching
overhead is much larger than that of the VMTs which
can be used in Java, they achieved a larger speedup than
ours. Furthermore, their measurements compare execution
time while excluding translation time overhead.
The most relevant study was done by David Detlefs
and Ole Agesen [18]. They also targeted Java, used
conditional inlining, and proposed a method test which
is identical to an address test. However, they mainly
concentrated on inlining rather than on inline caches,
and they did not use profile information to inline virtual
calls.
Gerald Aigner and Urs Hölzle [19] implemented an
optimizing source-to-source C++ compiler. They used
static profile information to inline virtual calls, and
improved the performance by a median of 18% and reduced
the number of virtual function calls by a median
factor of five.
Karel Driesen et al. [16] extensively studied various
dynamic dispatching mechanisms on several modern
architectures. They mainly compared inline cache
mechanisms and table-based mechanisms which employ
indirect branches, and showed that the latter do
not perform well on current hardware. They also expected
that table-based approaches may not perform
well on future hardware.
Olivier Zendra et al. [20] have implemented polymorphism
in the SmallEiffel compiler. They also eliminated the
use of VMTs by using a static variation of PICs
and inlined monomorphic call sites. However, they relied
on static type inference and did not use runtime
feedback.
Benchmark non-shared shared
compress 6 536 536 5 512 512
jess 24 2,368 2,648 14 1,708 1,988
222 mpegaudio 25 3,008 3,008
228 jack 19 1,856 1,856 12 1,380 1,380
Table 9. Size of PIC stub code
Benchmark Type-check inlining ( 1000) Address-check inlining ( 1000)
jess 12,036 12,036 12,036 12,036
Table 10. Inlined calls by type feedback: retranslation threshold
Based on experience with C++ programs, Brad
Calder and Dirk Grunwald [21] proposed using "if conversion",
which is similar to type feedback except that
it uses static profile information.
6 Conclusion and Future work
We have implemented inline caches and type feedback
in the LaTTe JIT compiler and evaluated these
techniques.
Although some programs suffer from frequent inline cache
misses, MICs achieve a speedup of 3% by geometric
mean over VMTs. Polymorphic inline caches solve the
problem experienced by MICs without incurring overheads
elsewhere and achieve a speedup of 9% by geometric
mean over VMTs using counting PICs. We
have also tested several variations of PICs and shown
the characteristics of PICs in Java programs. Counting
PICs reduce the average number of type checks in
a PIC stub compared with a non-counting version, and
achieve an average number of type checks close to that
of a periodic version, within 0.2 for most programs.
If memory is a matter of concern, then shared PICs
can save space with only a reasonable degradation in
performance.
The effect of type feedback is not fully shown in
this study. The overall performance is even worse than
that of counting PICs. Although it is true that some
programs have little opportunity to improve in terms
of virtual calls, the result is partly because we cannot
apply optimizations selectively only when they are beneficial.
However, the performance of 227 mtrt, which
does many virtual calls to small methods, is greatly improved
by type feedback, and gives us insight about the
performance impact of type feedback. If a coding style
which uses more abstraction and makes more calls becomes
dominant in Java programs, type feedback will
be more effective.
The study of type feedback also exposed other prob-
lems: adaptive compilation and method inlining. To
avoid degradation due to type feedback, it is very important
to estimate the costs incurred by retranslation
and inlining, and to apply conditional inlining only to
hot-spots.
--R
LaTTe: A fast and efficient Java VM just-in-time compiler
The Java Language Specification
Kemal Ebcioglu
The Java Virtual Machine Specification
Kemal Ebcioglu
Kemal Ebcioglu
Kemal Ebcioglu
Java HotSpot performance engine.
http://www.
Inlining of virtual methods.
Gerald Aigner and Urs Hölzle
Reducing indirect function call overhead in C++
--TR | Java JIT compilation;type feedback;adaptive compilation;virtual method call;inline cache |
346607 | Symbolic Cache Analysis for Real-Time Systems. | Caches impose a major problem for predicting execution times of real-time systems since the cache behavior depends on the history of previous memory references. Too pessimistic assumptions on cache hits can obtain worst-case execution time estimates that are prohibitive for real-time systems. This paper presents a novel approach for deriving a highly accurate analytical cache hit function for C-programs at compile-time based on the assumption that no external cache interference (e.g. process dispatching or DMA activity) occurs. First, a symbolic tracefile of an instrumented C-program is generated based on symbolic evaluation, which is a static technique to determine the dynamic behavior of programs. All memory references of a program are described by symbolic expressions and recurrences and stored in chronological order in the symbolic tracefile. Second, a cache hit function for several cache architectures is computed based on a cache evaluation technique. Our approach goes beyond previous work by precisely modelling program control flow and program unknowns, modelling large classes of cache architectures, and providing very accurate cache hit predictions. Examples for the SPARC architecture are used to illustrate the accuracy and effectiveness of our symbolic cache prediction. | Introduction
Due to high-level integration and superscalar architectural designs the
computational capability of microprocessors has increased significantly
in the last few years. Unfortunately the gap between processor cycle
time and memory latency increases. In order to fully exploit the potential
of processors, the memory hierarchy must be efficiently utilized.
To guide scheduling for real-time systems, information about execution
times is required at compile-time. Modelling caches presents
a major obstacle towards predicting execution times for modern computer
architectures. Worst-case assumptions - e.g. every memory access
results in a cache miss 1 - can cause very poor execution time estimates.
The focus of this paper is on accurate cache behavior analysis. Note
that modelling caches is only one performance aspect that must be
considered in order to determine execution times. There are many other
performance characteristics (Blieberger, 1994; Blieberger and Lieger,
1996; Blieberger, 1997; Fahringer, 1996; Park, 1993; Healy et al., 1995)
to be analyzed which however are beyond the scope of this paper.
In this paper we introduce a novel approach for deriving a highly
accurate analytical function of the precise number of cache hits 2 implied
by a program. Our approach is based on symbolic evaluation
(cf. e.g. Fahringer and Scholz, 1997) which at compile-time collects
runtime properties (control and data flow information) of a given pro-
gram. The number of cache hits is described by symbolic expressions
and recurrences defined over the program's input data so as to maintain
the relationship between the cache cost function and the input data.
Figure 1 depicts an overview of our framework described in this
paper. The C-program is compiled which results in an instrumented C-
program. The source-code level instrumentation inserts code at those
points, where main memory data is referenced (read or written). Then,
the instrumented source-code is symbolically evaluated and a symbolic
tracefile is created. All memory references of a program are described
by symbolic expressions and recurrences which are stored in a symbolic
tracefile. Based on the cache parameters, which describe the cache
architecture, an analytical cache hit function is computed by symbolically
evaluating the symbolic tracefile. Note that our model strictly
separates machine specific cache parameters from the program model
which substantially alleviates portability of our approach to other cache
architectures and programming languages.
Performing a worst-case cache analysis according to our approach
can be divided into the following steps:
1. Build the symbolic tracefile based on the instrumented program
sources by using symbolic evaluation.
2. Compute an analytical cache hit function by symbolically evaluating
the symbolic tracefile.
3. Find a closed form expression for the cache hit function.
4. Determine a lower bound of the cache hit function in order to derive
the worst-case caching behavior of the program.
Steps 1 and 2 are treated in this paper. These steps guarantee a
precise description of the cache hits and misses.
Step 3 requires solving recurrence relations. We have implemented
a recurrence solver which is described in (Fahringer and Scholz, 1997;
Fahringer and Scholz, 1999).
Figure 1. Overview of predicting cache performance (C-files are compiled and instrumented; symbolic evaluation of the instrumented C-file produces a symbolic tracefile, which, together with the cache parameters, is evaluated by the symbolic cache evaluation to yield the cache-hit function)
The current implementation of our recurrence
solver handles recurrences of the following kind: linear recurrence
variables (incremented inside a loop by a symbolic expression
defined over constants and invariants), polynomial recurrence variables
(incremented by a linear symbolic expression defined over constants,
invariants and recurrence variables) and geometric recurrence variables
(incremented by a term which contains a recurrence variable multiplied
by an invariant). Our algorithm (Fahringer, 1998b) for computing lower
and upper bounds of symbolic expressions based on a set of constraints
is used to detect whether a recurrence variable monotonically increases
or decreases. Even if no closed form can be found for a recurrence
variable, monotonicity information may be useful, for instance, to determine
whether a pair of references can ever touch the same address. The
current implementation of our symbolic evaluation framework models
assignments, GOTO, IF, simple I/O and array statements, loops and
procedures.
The result of Step 3 is a conservative approximation of the number of
exact cache hits and misses, i.e., the computed upper and lower bounds
are used to find a lower bound for the cache hit function. The output
form of Step 3 (suitably normalized) is a case-structure that possibly
comprises several cache hit functions. The conditions attached to the
different cases correspond to the original program structure and are
affected by the cache architecture.
In Step 4 we only have to determine the minimum of the cache hit
functions of the case-structure mentioned above. Note that it is not
necessary to determine the worst-case input data because the program
structure implies the worst-case cache behavior.
Steps 3 and 4 are described in detail in (Fahringer and Scholz, 1997;
Fahringer and Scholz, 1999; Fahringer, 1998b).
The rest of the paper is organized as follows. In Section 2 we discuss
our architecture model for caches. In Section 3 we describe symbolic
evaluation and outline a new model for analyzing arrays. Section 4 contains
the theoretical foundations of symbolic tracefiles and illustrates
a practical example. In Section 5 symbolic cache evaluation techniques
are presented for direct mapped and set associative caches. In Section
6 we provide experimental results. Although our approach will be
explained and experimentally examined based on the C-programming
language, it can be similarly applied to most other procedural languages
including Ada and Fortran. In Section 7 we compare our approach with
existing work. Finally, we conclude this paper in Section 8.
2. Caches
The rate at which the processor can execute instructions is limited by
the memory cycle time. This limitation has in fact been a significant
problem because of the persistent mismatch between processor and
main memory speeds. Caches - which are relatively small high-speed
memories - have been introduced in order to hold the contents of most
recently used data of main memory and to exploit the phenomenon of
locality of reference (see (Hennessy and Patterson, 1990)). The advantage
of a cache is to improve the average access time for data located
in main memory. The concept is illustrated in Figure 2.
The cache contains a small portion of main memory. A cache hit
occurs, when the CPU requests a memory reference that is found in
the cache. In this case the reference (memory word) is transmitted
Figure 2. CPU, Cache and Main Memory (words are transferred between the CPU and the cache; blocks are transferred between the cache and main memory)
to the CPU. Otherwise, a cache miss occurs which causes a block of
memory (a fixed number of words) to be transferred from the main
memory to the cache. Consequently, the reference is transmitted from
the cache to the CPU. Commonly the CPU is stalled on a cache miss.
Clearly, memory references that cause a cache miss are significantly
more costly than if the reference is already in the cache.
In the past, various cache organizations (Hennessy and Patterson, 1990)
were introduced. Figure 3(a) depicts a general cache organization.
A cache consists of ns slots. Each slot can hold n cache lines and
one cache line contains a block of memory consisting of cls contiguous
bytes and a tag that holds the first address bits of the memory block.
Figure
3(b) shows how an address is divided into three fields to find
data in the cache: the block offset field used to select the desired data
from the block, the index field to select the slot and the tag field used
for comparison. Note that not all bits of the index are used if n > 1.
A cache can be characterized by three major parameters. First, the
capacity of a cache determines the number of bytes of main memory it
may contain. Second, the line size cls gives the number of contiguous
bytes that are transferred from memory on a cache miss. Third, the
associativity determines the number of cache lines in a slot. If a block
of memory can reside in exactly one location, the cache is called direct
mapped and a cache set can only contain one cache line. If a block can
reside in any cache location, the cache is called fully associative and
there is only one slot. If a block can reside in exactly n locations and
n is the size of a cache set, the cache is called n-way set associative.
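As a concrete illustration of how an address is decomposed for such a cache (cf. Figure 3(b)), consider the following C sketch; the parameter names follow the text above, and the struct names are illustrative.

#include <stdint.h>

/* cls = line size in bytes, ns = number of slots (sets), n = associativity;
 * the capacity of the cache is cls * ns * n bytes. */
struct CacheGeometry {
    uint32_t cls;
    uint32_t ns;
    uint32_t n;
};

struct AddressFields {
    uint32_t block_offset;   /* selects the desired byte within the block     */
    uint32_t index;          /* selects the slot                              */
    uint32_t tag;            /* compared against the tags stored in the slot  */
};

static struct AddressFields split_address(uint32_t addr, struct CacheGeometry g)
{
    struct AddressFields f;
    f.block_offset = addr % g.cls;
    f.index        = (addr / g.cls) % g.ns;
    f.tag          = addr / (g.cls * g.ns);
    return f;
}

For a direct mapped cache (n = 1) a reference hits exactly when the single line in slot f.index holds tag f.tag; for an n-way set associative cache all n tags of the slot are compared.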
In case of fully associative or set associative caches, a memory block
has to be selected for replacement when the cache set of the memory
block is full and the processor requests further data. This is done according
to a replacement strategy (Smith, 1982). Common strategies
are LRU (Least Recently Used), LFU (Least Frequently Used), and
random.
Furthermore, there are two common cache policies with respect to
write accesses of the CPU. First, the write through caches write data
to memory and cache. Therefore, both memory and cache are in line.
Second, write back caches only update the cache line where the data
Figure 3. Cache Organization: (a) general cache organization; (b) an address divided into tag, index, and block offset fields
item is stored. For write back caches the cache line is marked with a
dirty bit. When a different memory block replaces the modified cache
line, the cache updates the memory.
A write access of the CPU to an address that does not reside in the
cache is called a write miss. There are two common cache organizations
with respect to write misses. First, the write-allocate policy loads the
referenced memory block into the cache. This policy is generally used
for write back caches. Second, the no-write-allocate policy updates the
cache line only if the address is in cache. This policy is often used
for write through cache and has the advantages that memory always
contains up-to-date information and the elapsed time needed for a write
access is constant.
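The difference between the two write policies can be sketched as follows; cache_lookup, allocate_line, and write_memory are assumed operations of a simple cache model, not a concrete API.

#include <stdint.h>

enum WritePolicy { WRITE_THROUGH_NO_ALLOCATE, WRITE_BACK_ALLOCATE };

struct Line { int valid; int dirty; uint32_t tag; };

extern struct Line *cache_lookup(uint32_t addr);      /* returns the matching line or NULL          */
extern struct Line *allocate_line(uint32_t addr);     /* loads the referenced block into the cache  */
extern void         write_memory(uint32_t addr);      /* updates main memory                        */

static void handle_write(uint32_t addr, enum WritePolicy p)
{
    struct Line *line = cache_lookup(addr);
    if (p == WRITE_THROUGH_NO_ALLOCATE) {
        if (line) { /* update the cached copy only if the address is already in the cache */ }
        write_memory(addr);                            /* memory is always kept up to date           */
    } else {                                           /* write back with write-allocate             */
        if (!line)
            line = allocate_line(addr);                /* a write miss loads the block               */
        line->dirty = 1;                               /* memory is updated only on replacement      */
    }
}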
Caches can be further classified. A cache that holds only instructions
is called instruction cache. A cache that holds only data is called data
cache. A cache that can hold instructions and data is called a mixed or
unified cache.
Cache design has been extensively studied. Good surveys can be
found in (Alt et al., 1996; Mueller, 1997; Ottosson and Sjoedin, 1997; Li
et al., 1996; Li et al., 1995; Healy et al., 1995; Arnold et al., 1994; Nilsen
and Rygg, 1995; Liu and Lee, 1994; Hennessy and Patterson, 1990).
3. Symbolic Evaluation
Symbolic evaluation 3 (Cheatham et al., 1979; Ploedereder, 1980; Fahringer
and Scholz, 1997; Fahringer and Scholz, 1999) is a constructive
description of the semantics of a program. Moreover, symbolic evaluation
is not merely an arbitrary alternative semantic description of a
program. As in the relationship between arithmetic and algebra the
specific (arithmetic) computations dictated by the program operators
are generalized and "delayed" using the appropriate formulas. The
dynamic behavior is precisely represented.
Symbolic evaluation satisfies a commutativity property.
    p  --- Symbolic Evaluation --->  (z[[p]], i)  --- bind parameters to i, evaluate into result
    p  --- Conventional Execution with input i --->  z[[p]] i
If a program p is conventionally executed with the standard semantics
over a given input i, the result of the symbolically evaluated
program instantiated by i is the same. Clearly, symbolic evaluation
can be seen as a compiler, that translates a program into a different
language. Here, we use as a target language symbolic expressions and
recurrences to model the semantics of a program.
The semantic domain of our symbolic evaluation is a novel representation
called program context (Fahringer and Scholz, 1997; Fahringer
and Scholz, 1999). Every statement is associated with a program context
c that describes the variable values, assumptions regarding and
constraints between variable values and a path condition. The path
condition holds for a given input if the statement is executed. Formally,
a context c is defined by a triple [s; t; p] where s is a state, t a state
condition and p a path condition.
\Gamma The state s is described by a set of variable/value pairs fv
is a program variable and e i a symbolic
expression describing the value of v i for 1 - i - n. For all program
variables v i there exists exactly one pair
\Gamma The state condition contains constraints on variable values such as
those implied by loops, variable declarations and user assertions.
Path condition is a predicate, which is true if and only if the
program statement is reached.
Note that all components of a context - including state information
are described as symbolic expressions and recurrences. An unconditional
sequence of statements ' j (1 - j - r) is symbolically evaluated
by [s The initial context [s
represents the context that holds before ' 1 and [s r the context
that holds after ' r . If ' i in the sequence
does not contain any side effects (implying a change of a variable
Furthermore, a context c = [s; t; p] is a logical assertion
c is a predicate over the set of program variables and the
program input which are free variables. If for all input values c i\Gamma1 holds
before executing the statement ' i then c i is the strongest post condition
(Dijkstra, 1976) and the program variables are in a state satisfying c i
after executing ' i .
For further technical details we refer the reader to (Fahringer and
Scholz, 1997; Fahringer and Scholz, 1999; Blieberger and Burgstal-
ler, 1998; Blieberger et al., 1999). In the following we discuss a novel
approach to evaluate arrays.
3.1. Arrays
Let a be a one-dimensional array with n (n - 1) array elements. Consider
the simple array assignment a[i]=v. The element with index i
is substituted by the value of v. Intuitively, we may think of an array
assignment being an array operation that is defined for an array. The
operation is applied to the array and changes its internal state. The
arguments of such an array operation are a value and an index of the
new assigned array element. A sequence of array assignments implies
a chain of operations. Formally, an array is represented as an element
of an array algebra A . The array algebra A is inductively defined as
follows.
1. If n is a symbolic expression then ? n 2 A .
2. If a 2 A and ff; fi are symbolic expressions then a \Phi (ff; fi) 2 A .
3. Nothing else is in A .
int a[100],x;
Figure
4. C-program fragment
In the state of a context, an array variable is associated with an
element of the array algebra A . Undefined array states are denoted by
is the size of the array and determines the number of
array elements. An array assignment is modelled by a \Phi-function. The
semantics of the \Phi-function is given by
a \Phi (ff;
represents the elements of array a and fi denotes
the index of the element with a new value ff. For the following general
array assignment
is the context before and [s the context after
statement ' i . The symbolic value of variable a before evaluating the
statement ' i is denoted by a. Furthermore, an element a in A with
at least one \Phi-function is a \Phi-chain. Every \Phi-chain can be written as
The length of a chain jaj is the number of \Phi-functions
in chain a.
The C-program fragment in Figure 4 illustrates the evaluation of
several array assignments. The context of statement ' j is represented
by c At the beginning of the program fragment the value of
variable x is a symbolic expression denoted by x. Array a is undefined
(? 100 ). For all array assignment statements the state and path conditions
are set to true because the code fragment implies no branches.
Most program statements imply a change of only a single variable's
value. In order to avoid large lists of variable values in state descriptions
only those variables whose value changes after evaluation of the associated
statement are explicitly specified. For this reason we introduce
a function ffi,
which specifies a state s i whose variable binding is equal to that of
state s j except for variable v assigned a new
value e i .
Therefore, in the previous example, state s 1 is the same as state s 0
except for the symbolic value of array a.
After the last statement array a is symbolically described by a =
1). The left-most
\Phi-function relates to the first assignment of the example program -
the right-most one to the last statement.
Note that the last two statements overwrite the values of the first
two statements. Therefore, a simplified representation of a is given by
Although the equivalence of two symbolic expressions is undecidable
Haghighat and Polychronopoulos, 1996), a wide
class of equivalence relations can be solved in practice. The set of conditions
among the used variables in the context significantly improves
the evaluation of equivalence relation. A partial simplification operator
' is introduced to simplify \Phi-chains. Operator ' is defined as follows.
The partial simplification operator ' seeks for two equal expressions
in a \Phi-chain. If a pair exists, the result of ' will be the initial \Phi-
chain without the \Phi-function, which refers to the fi expression with
the smaller index i. If no pair exists, the operator returns the initial
\Phi-chain; the chain could not be simplified. Semantically, the right-most
expression relates to the latest assignment and overwrites the value
of the previous assignment with the same symbolic index.
The partial simplification operator ' reduces only one redundant
\Phi-function. In the previous example ' must be applied twice in order
to simplify the \Phi-chain. Moreover, each \Phi-function in the chain is a
potentially redundant one. Therefore, the chain is potentially simplified
in less than jaj applications of '. A partially complete simplification is
an iterative application of the partial simplification operator and it is
written as ' (a). If ' (a) is applied to a, further applying of ' will not
simplify a anymore: '('
In order to access elements of an array we need to model a symbolic
access function. Operator ae in a symbolic expressions e (described by a
\Phi-chain) reads an element with index i of an array a. If index i can be
found in the \Phi-chain, ae yields the corresponding symbolic expression
otherwise ae is the undefined value ?. In the latter case it is not possible
to determine whether the array element with index i was written. Let a
be an element of A and
l=1 (ff l ; fi l ). The operator ae is defined
as
ae
where i is the symbolic index of the array element to be found. In
general determining whether the symbolic index i matches with a \Phi-
function is undecidable. In practice a wide class of symbolic relations
can be solved by our techniques for comparing symbolic expressions
1998a). If our symbolic evaluation framework cannot prove
that the result of ae is fi l or ? then ae is not resolvable and remains
unchanged in symbolic expression e.
We present four examples in Figure 5, which are based on the value
of a at the end of the program fragment in Figure 4. For every example
we insert one of the following statements at the end of the code fragment
shown in Figure 4. For (1) x=a[x]; (2) x=a[x+1]; (3) x=a[x-1]; and
(4) x=a[y]; where y is a new variable with the symbolic value of y.
The figure shows the symbolic value of x after the inserted statement.
Note that in the first equation the element with index x is uniquely
determined. The second equation is resolved as well. In the third example
the index x \Gamma 1 does not exist in the \Phi-chain.
Therefore, the access returns the undefined symbol ?. In the last
equation we do not have enough information to determine a unique
value for array element with index i. Here, we distinguish between
several cases to cover all possibilities.
Figure
5. Examples of ae
3.2. Array operations inside of loops
Modelling loops implies a problem with recurrence variables 4 . We will
use functions to model recurrences as follows: i(k
is the value of a scalar variable i at the end of iteration k + 1.
Our symbolic evaluation framework detects recurrence variables,
determines the recurrence system and finally tries to find closed forms
for recurrence variables at the loop exit by solving the recurrence sys-
tem. The recurrence system is given by the boundary conditions (initial
values for recurrence variables in the loop preheader), the recurrence
relations (implied by the assignments to the recurrence variables in the
loop body) and the recurrence condition (loop or exit condition).
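As a small hedged illustration (an assumed counter, not necessarily one from the example programs): for a scalar i initialized to 0 in the loop preheader and incremented by one in the loop body, the recurrence system consists of the boundary condition i(0) = 0, the recurrence relation i(k + 1) = i(k) + 1, and the loop condition as recurrence condition; solving it yields the closed form i(k) = k.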
We have implemented a recurrence solver (Scheibl et al., 1996) written
on top of Mathematica. The recurrence solver tries to determine
closed forms for recurrence variables based on their recurrence system
which is directly obtained from the program context. The implementation
of our recurrence solver is largely based on methods described in
(Gerlek et al., 1995; Lueker, 1980) and improved by our own techniques
(Fahringer and Scholz, 1997; Fahringer and Scholz, 1999).
Similar to scalar variables, array manipulations inside of loops
are described by recurrences. A recurrence system over A consists of a
boundary condition a(0) and a recurrence relation of the form
a(k + 1) = a(k) ⊕ (α_1(k); β_1(k)) ⊕ · · · ⊕ (α_m(k); β_m(k)),
where α_l(k) and β_l(k) are symbolic expressions and k is the recurrence
index with k ≥ 0. Clearly, every instance of the recurrence is an element
of A. Without changing the semantics of an array recurrence, ' can
be applied to simplify the recurrence relation.
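As a hedged illustration (a made-up loop, not one of the example programs): a loop that writes a value v(k + 1) to a[k + 1] in iteration k + 1 gives the boundary condition a(0) = ⊥ and the recurrence relation a(k + 1) = a(k) ⊕ (v(k + 1); k + 1); since no two Φ-functions of this relation share an index expression, ' cannot simplify it further.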
Operator ae needs to be extended for array recurrences, such that arrays
written inside of loops can be accessed, e.g. ae(a(z); i). The symbolic
expression z is the number of loop iterations determined by the loop
exit condition and i is the index of the accessed element. Furthermore,
the recurrence index k is bounded to 0 - k - z. To determine a possible
\Phi-function, where the accessed element is written, a potential index set
of the l-th Φ-function is computed. This index set contains all
recurrence indices k, 0 ≤ k ≤ z, for which β_l(k) is equal to the index i. If an
index set has more than one element, the array element i is written in
different loop iterations by the l-th Φ-function. Only the last iteration
that writes array element i is of interest. Consequently, we choose the
element with the greatest index. The supremum x_l(i) of an index set
is its greatest element.
Finally, we define operator ρ(a(z); i) as follows.
The maximum of the supremum indices x_l(i) determines the symbolic
value α_l(x_l(i)). If no supremum index exists, ρ returns the access to
the value before the loop.
The example code of the program in Figure 6 shows how to symbolically
evaluate an array access. The recurrence of i(k) is resolved in
state s 3 of ' 3 . Due to the missing information about a the recurrence
of array a is not resolvable but our symbolic evaluation still models the
dynamic behavior of the example code.
4. Symbolic Tracefile
Tracing is the method of generating a sequence of instruction and data
references encountered during program execution. The trace data is
commonly stored in a tracefile and analyzed at a later point in time.
For tracing, instrumentation is needed to insert code at those points
in a program, where memory addresses are referenced. The tracefile is
created as a side-effect of execution. Tracing requires a careful analysis
of the program to ensure that the instrumentation correctly reflects the
data or code references of a program. Moreover, the instrumentation
int n;
char a[100], s = 0;
Figure 6. C-program fragment
can be done at the source-code level or machine code level. For our
framework we need a source-code level instrumentation. In the past a
variety of different cache profilers were introduced, e. g. MTOOL (Gold-
berg and Hennessy, 1991), PFC-Sim (Callahan et al., 1990), CPROF
(Lebeck and Wood, 1994).
The novelty of our approach is to compute the trace data symbolically
at compile-time without executing the program. A symbolic
tracefile is a constructive description for all possible memory references
in chronological order. It is represented as symbolic expressions and
recurrences.
In the following we discuss the instrumentation of the program in Figure 6.
The SPARC assembler code is listed in Figure 7. The first
part of the code is a loop preparation phase. In this portion of code
the contents of variable n is loaded into a work register. Additionally,
the address of a is built up in register %g2. Inside the loop, the storage
location of n is not referenced anymore and there are four read accesses
s, a[i], a[i], a[i+1] and two write accesses s, a[i]. Furthermore,
the variable i is held in a register. Based on this information we can
instrument the example program. In Figure 8 the instrumented program
is shown where function r ref(r,nb) denotes a read reference of
address r with the length of nb bytes. For a write reference the function
w ref() is used.
ld [%o3+%lo(n)],%g5; read &n
mov 0,%o1
add %g5,-1,%g5
cmp %o1,%g5
bge .LL3
or %g2,%lo(a),%g2
add %g2,1,%o4
ldub [%o2+%lo(s)],%g2; read &s
ldub [%o0],%g3; read &a[i]
add %g2,%g3,%g2
stb %g2,[%o2+%lo(s)]; write &s
ldub [%o0],%g2; read &a[i]
ldub [%o1+%o4],%g3; read &a[i+1]
add %g2,%g3,%g2
stb %g2,[%o0]; write &a[i]
cmp %o1,%g5
bl .LL5
add %o0,1,%o0
retl
Figure 7. SPARC code of example in Figure 6
A symbolic tracefile is created by using a chain algebra. The references
are stored as a chain. A symbolic trace file t 2 T is inductively
defined as follows.
1. ⊥ ∈ T.
2. If t ∈ T and r and nb are symbolic expressions then t ⊕ σ(r; nb) ∈ T.
3. If t ∈ T and r and nb are symbolic expressions then t ⊕ -(r; nb) ∈ T.
4. Nothing else is in T.
Semantically, function σ denotes a write reference to memory with symbolic
address r, where the number of referenced bytes is denoted by
nb. Read references - have similar semantics, with r the
address and nb the number of referenced bytes.
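For concreteness, the following minimal C sketch shows one way such a reference chain could be represented at instrumentation time (our own illustration; the type and function names are assumptions, and the paper itself keeps addresses symbolic rather than concrete):

#include <stdlib.h>

enum ref_kind { REF_READ, REF_WRITE };

struct trace_ref {
    enum ref_kind kind;       /* read or write reference */
    unsigned long addr;       /* referenced address (concrete here, symbolic in the paper) */
    int nbytes;               /* number of referenced bytes */
    struct trace_ref *next;   /* next element of the chain */
};

/* Append a reference to the end of the chain; an empty chain is NULL,
   playing the role of the undefined element at the start of a tracefile. */
static struct trace_ref *trace_append(struct trace_ref *t, enum ref_kind kind,
                                      unsigned long addr, int nbytes)
{
    struct trace_ref *r = malloc(sizeof *r), *p;
    if (r == NULL)
        return t;
    r->kind = kind; r->addr = addr; r->nbytes = nbytes; r->next = NULL;
    if (t == NULL)
        return r;
    for (p = t; p->next != NULL; p = p->next)
        ;
    p->next = r;
    return t;
}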
For instance, a 32-bit bus between the cache and CPU can only
transfer word references of 4 bytes. Therefore, a double data item
(comprising 8 bytes) -(r; 8) must be loaded in two consecutive steps,
-(r; 4) and -(r + 4; 4). For a word reference we do not need the number of
referenced bytes anymore because it is constant. In the example above
it is legal to rewrite -(r; 8) as -(r) ⊕ -(r + 4). This notation is extensively
used in the examples of Section 5.
For loops we need recurrences
where - l (k) is a read or write reference (-(r l (k); nb) or -(r l (k); nb)).
Symbolic evaluation is used to automatically generate the symbolic
tracefile of a C-program. Instead of symbolically evaluating instrumentation
calls we associate w ref and r ref with specific semantics. A
pseudo variable t 2 T is added to the program. A read reference
r ref(r,nb) is translated to t \Phi -(r; nb), where t is the state of the
pseudo variable t before evaluating the instrumentation. The same is
done for write references except that - is replaced by oe.
Let us consider the example in Figure 8. Before entering the loop, t
needs to log reference r ref(&n,4). Therefore, t is equal to ⊥ ⊕ -(&n; 4),
where &n denotes the address of variable n. Inside the loop a recurrence
is used to describe t symbolically. The boundary condition t(0) is equal
to ⊥ ⊕ -(&n; 4) and reflects the state before the loop. The recurrence
relation t(k + 1) appends the references issued by iteration k + 1 to t(k).
Note that an alternative notation for &a[k] is a + k, where a is the start
address of array a. Finally, the last value of k in the recurrence t(k) is
determined by the loop condition.
For the symbolic tracefile only small portions of the final program
context are needed. Therefore, we extract the necessary parts from the
final context to describe the symbolic tracefile. Here, the state condition
and symbolic value t are of relevance. For example in Figure 8 the
symbolic tracefile is given by
int n;
char a[100], s = 0;
Figure 8. C-program fragment with symbolic tracefile
The length of the symbolic tracefile corresponds to the number of
read/write references. If either the number of reads or the number
of writes is of interest, we selectively count elements (read references
and write references σ). For instance, the number of read references is obtained
by counting the read elements of t, the number of write references by counting
the σ elements, and the overall number of memory references is given by the sum of the two.
5. Symbolic Evaluation of Caches
A symbolic tracefile of a program describes all memory references (is-
sued by the CPU) in chronological order. Based on the symbolic trace-
file we can derive an analytical function over the input, which computes
the number of hits. The symbolic tracefile contains all information
to obtain the hit function. Moreover, the symbolic cache analysis is
decoupled from the original program. Thus, our approach can be used
to tailor the cache organization due to the needs of a given application.
In the following we introduce two formalisms to compute a hit function
for direct mapped and set associative data caches. To symbolically simulate
the cache hardware, hit sets are introduced. Hit sets symbolically
describe which addresses are held in the cache and keep track of the
number of hits.
5.1. Direct Mapped Caches
Direct mapped caches are the easiest cache organization to analyze.
For each item of data there is exactly one location in a direct mapped
cache where it can be placed 5 and the cache contains ns cache lines. The
size of a cache line, cls , determines the amount of data that is moved
between main memory and the cache. In the following we introduce the
cache evaluation of direct mapped caches with write through
and no-write-allocate policy. Compare Section 2.
A new cache evaluation operator fi is defined to derive a hit set
for a given tracefile t, where a hit set is a pair H = (C; h). The
first component of H is a symbolic cache C, which is an element of A;
the second component h represents the number of cache hits and is a
symbolic expression. Symbolic cache C of a hit set H has ns elements and each element
corresponds to a cache line of the cache. More formally, the algebraic
operation C \Phi(r; fi) loads the memory block with start address r into the
cache whereby fi is the index of the cache line. Note that when the CPU
issues address r, the start address r of the corresponding memory block
must be selected to describe the reference. Moreover, a cache placement
function - maps a reference to an index of cache C such that the load
operation of reference r is written as C ⊕ (r; -(r)). In the following we
assume that the placement function - is a modulo operation based on the number of cache lines ns and the cache line size cls.
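For instance, with the parameters used later in Section 5.1.2 (ns = 4 cache lines of cls = 1 byte each, with &s and &a[0] aligned to the first cache line), the placement function reduces to r mod 4: &s maps to cache line 0 and &a[5] maps to line 5 mod 4 = 1.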
5.1.1. Definition of the cache evaluation operator
Let H_0 = (⊥^ns; 0) denote the initial hit set,
H_f the final hit set, and t the tracefile. The final hit set
H_f is the analytical description of the number of cache hits and the
final state of the cache. In the following we describe the operator fi
inductively.
First, for an empty tracefile ⊥ the hit set remains unchanged.
Second, if a write reference is the first reference in the tracefile, it does
not change the hit set at all and is removed (2); here the leading element of
the tracefile is either a read reference -(r_l) or a write reference σ(r_l). Third,
for read references a new hit set must be computed according to (3), (4) and (5):
increment d is 1 if reference r is in the cache; otherwise, d is zero and
the reference r must be loaded. For loading the data item with address r
into the cache, C' is assigned the new symbolic value C ⊕ (r; -(r)).
In order to symbolically describe the conditional behavior of caches
(data item is in the cache or not), we introduce a γ-function (see
(Fahringer and Scholz, 1997)), written γ(c; x_1; x_2), which semantically
selects x_1 if the condition c holds and x_2 if ¬c holds (6);
c is a conditional expression and ¬c the negation of c. Moreover, the x_i
are symbolic expressions.
Based on the definition of γ (6) we can aggregate the formulas given
in (3), (4), and (5): depending on the condition ρ(C; -(r)) = r, either the
number of cache hits h' is incremented by one or the symbolic cache is
assigned a new symbolic value C ⊕ (r; -(r)) (7).
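To make the mechanics concrete, here is a small C sketch of the corresponding non-symbolic computation for a direct mapped, write-through, no-write-allocate cache (our own illustration with assumed names and the parameters of the example in Section 5.1.2; the paper performs the same computation symbolically):

#define NS  4            /* number of cache lines (as in the example of Section 5.1.2) */
#define CLS 1            /* cache line size in bytes (as in the example) */

static unsigned long line_block[NS];   /* start address of the block held by each line */
static int           line_valid[NS];
static unsigned long hit_count;

/* Read reference: count a hit if the block is present, otherwise load it. */
static void read_ref(unsigned long r)
{
    unsigned long block = (r / CLS) * CLS;   /* start address of the memory block */
    int idx = (int)((r / CLS) % NS);         /* cache placement function */
    if (line_valid[idx] && line_block[idx] == block)
        hit_count++;                         /* the condition of the gamma-function holds */
    else {
        line_block[idx] = block;             /* cache miss: load the block */
        line_valid[idx] = 1;
    }
}

/* Write reference: write-through, no-write-allocate, so the cache state is unchanged. */
static void write_ref(unsigned long r) { (void)r; }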
Similar to tracefiles, hit sets are written as a pair. The first component
of the pair symbolically describes the hit set. The second component
contains constraints on variable values such as conditionals and
recurrences stemming from loops.
Furthermore, for recursively-defined tracefiles we need to generalize
hit sets to hit set recurrences. Let t(k
l (k) be
the tracefile recurrence relation and H the initial hit set, the hit set
recurrence is expressed by
5.1.2. Example
For the sake of demonstration, we study our example of Figure 6 with
a cache size of 4 cache lines and each cache line comprises one byte.
The cache placement function -(r) is r mod 4. It maps the memory
addresses to slots of the cache. Moreover, all references are already
transformed to word references and references &n, &s, and &a[0] are
aligned to the first cache line. Note that in our example a word reference
can only transfer one byte from the CPU to the cache and vice versa.
The initial hit set is H 0). Based on the symbolic tracefile
given in (1) the hit set recurrence is to be derived. First of all we apply
operator fi to the hit set recurrence according to (8).
The final hit set is given by H
the highest index k of the recurrence and is determined by the loop
condition. In the following we evaluate the boundary condition of the
hit set recurrence. We successively apply the evaluation rule (7) of
operator fi to the initial hit set (? 4 ; 0).
Note that the condition ρ(C; -(r)) = r of rule (7) is false for all read
references in the boundary condition. After evaluating the boundary
condition there is still no cache hit and the cache is fully loaded with
the contents of variable n. In the next step we analyze the loop iteration.
We continue to apply operator fi to the recurrence relation.
where C k and h k denote symbolic cache and number of hits in the kth
iteration of the hit set recurrence. The global variable s is mapped to
the first cache line. If the first slot of the cache contains the address
of s then a cache hit occurs and the number of hits is incremented,
otherwise the new element is loaded and the number of hits remains
the same. We further apply operator fi and obtain
In the next step we eliminate the write reference σ(&s) according to
rule (2) and further apply operator fi to -(&a[k]).
Here, we can simplify the γ-function: the contents of symbolic cache
C_k at position k mod 4 is k, because the reference &a[k] was loaded in the step
before the previous one. Note that the write reference σ(&s) does not
destroy the reference &a[k]. In the last step the references -(&a[k + 1])
and σ(&a[k]) are evaluated.
The third γ-function can be reduced since element k + 1 has never been
written before.
The hit set recurrence is still conditional. Further investigations are
necessary to derive a closed form for the number of hits. We know that
the number of cache lines is four. We consider all four modulo classes
of index k which for the given example results in an unconditional
recurrence.
k mod 4 = 0: The condition of the first γ-function, which compares ρ(C_k; 0) with &s, is
false since ρ(C_k; 0) can be rewritten as k, if k > 1, or ⊥ otherwise.
The condition of the second γ-function is false as
well because the cache line has been loaded with the reference &s
before. For the case k mod 4 = 0 the hit set recurrence is reduced
to an unconditional recurrence.
k mod 4 = 1: In the first γ-function the condition
can never be true because in the previous step of the recurrence the
cache line 1 has been loaded with the contents of &a[k − 1]. Fur-
thermore, the element &a[k] has been fetched in the previous step
and, therefore, the condition of the second γ-function evaluates to
true and the hit set recurrence can be written accordingly.
k mod 4 = 2, 3: For both cases the conditions of the
γ-functions are true. The load reference &s does not interfere with
&a[k] and &a[k + 1]. The recurrence is given accordingly.
Now, we can extract the number of hits from hit sets (9), (10), (11).
The modulo classes can be rewritten such that k is replaced by 4i and
the modulo class.
The boundary conditions stem from the number of hits of H(0). The
recurrence is linear and after resolving it, we obtain
The index z of the final hit set H determined by z =
1). The analytical cache hit function h z , given by (12), can
be approximated by 9
In the example above the conditional recurrence collapsed to an
unconditional one. In general, we can obtain closed forms only for
specific - although very important - classes of conditional recurrences.
If recurrences cannot be resolved, we employ approximation techniques
as described in (Fahringer, 1998a).
Figure 9. An n-way Set Associative Cache
5.2. Set Associative and Fully Associative Caches
In this section we investigate n-way set associative write through data
caches with write-allocate policy and least recently used replacement
(LRU) strategy. The organization of set-associative is more complex
than direct mapped data caches due to placing a memory block to n
possible locations in a slot (compare Section 2).
Similar to direct mapped caches we define a cache evaluation operator
fi to derive a hit set for a given tracefile t. For set associative
caches a hit set is a triple consisting of the symbolic
cache C, the number of hits h, and a symbolic counter - max that is
incremented for every read or write reference. Note that the symbolic
counter is needed to keep track of the least recently used reference
of a slot. Figure 9 illustrates the symbolic representation of C for set
associative caches. C is an array of ns slots. Each slot, denoted as S('),
0 ≤ ' ≤ ns − 1, can hold n cache lines. Array C and slots S(')
are elements of array algebra A .
More formally, algebraic operation S \Phi ((r; -); fi) loads the memory
block with start address r into set S whereby fi is the index (0 - fi ! n)
and - the current symbolic value of - max . Reading value r from S is
denoted by ae r (S; fi) while reading the time stamp is written ae - (S; fi).
A whole set is loaded into cache C via C \Phi (S; '). Note that when
the CPU issues address r, the start address r of the corresponding
memory block must be selected to describe the reference. Similar to
direct mapped caches, a cache placement function - maps a memory
reference to slot S such that the load operation of reference r is written
as C \Phi (ae(C; -(r)) \Phi ((r; -(r))) where -(r) is a function determining
the index of slot S according to the LRU strategy: if
there exists a spare location in slot S, the first such location is chosen;
otherwise the cache line of S with the smallest time stamp is chosen.
Note that the first case determines if there is a spare location in slot
S. The second case
computes the least recently used cache line of slot S.
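A small C sketch of this placement decision (our own illustration with assumed names; the paper expresses it symbolically):

#define N_WAY 2   /* associativity, as in the example of Section 5.2.2 */

struct cache_line {
    unsigned long block;   /* start address of the memory block held by this line */
    int valid;
    unsigned long stamp;   /* time stamp of the last reference to this line */
};

/* Return the index within slot S at which a newly loaded reference should be
   placed: the first spare line if one exists, otherwise the least recently used line. */
static int lru_index(const struct cache_line S[N_WAY])
{
    int i, victim = 0;
    for (i = 0; i < N_WAY; i++)
        if (!S[i].valid)
            return i;                      /* first spare location */
    for (i = 1; i < N_WAY; i++)
        if (S[i].stamp < S[victim].stamp)
            victim = i;                    /* smallest time stamp = least recently used */
    return victim;
}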
5.2.1. Definition of the cache evaluation operator
Let H_0 = (⊥^ns; 0; 0) denote the initial hit set,
H_f the final hit set, and t the tracefile. The final hit set
H_f is the analytical description of the number of cache hits and the
final state of the cache. In the following we describe the operator fi
inductively.
First, for an empty tracefile ⊥ the hit set remains unchanged.
Second, if a read or write operation is the first memory reference
in the tracefile, a new hit set is deduced as follows: the number of hits
is increased by an increment d and the symbolic counter - max is incremented by one.
Furthermore, the slot of reference r is determined by the placement function,
and increment d is 1 if there exists an element in slot S which is equal to r,
and 0 otherwise.
If a cache hit occurs (d = 1), reference r must be updated with a
new time stamp.
Function -(r) looks up the index where reference r is stored in slot S;
-(r) can be described by a recurrence. If d = 0, a cache miss occurs
and the reference is loaded into the cache
where
We can aggregate formulas (13) - (18) with γ-functions (19). Depending on
whether reference r is already present in its slot, the element is either updated with a
new time stamp or loaded into the cache.
Note that γ-functions are nested in formula (19). A nested γ is recursively
expanded (compare (6)) such that the expanded boolean expression
is added to the corresponding true or false term of the higher-level
γ-function. Furthermore, for recursively-described tracefiles we need to
generalize hit sets to hit set recurrences (compare (8)).
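Putting the pieces together, a concrete (non-symbolic) C sketch of one cache access for this organization, reusing struct cache_line and lru_index from the previous sketch (our own illustration; with write-allocate, both reads and writes take this path, and the paper carries out the same steps symbolically):

static unsigned long hits2, now;   /* hit counter and global time stamp counter */

static void access_ref(struct cache_line S[N_WAY], unsigned long block)
{
    int i;
    now++;                                  /* every reference advances the counter */
    for (i = 0; i < N_WAY; i++)
        if (S[i].valid && S[i].block == block) {
            S[i].stamp = now;               /* cache hit: refresh the time stamp */
            hits2++;
            return;
        }
    i = lru_index(S);                       /* miss: pick a spare line or the LRU line */
    S[i].block = block; S[i].valid = 1; S[i].stamp = now;
}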
5.2.2. Example
We symbolically evaluate the example of Figure 6 with a 2-way set
associative cache and two slots and a cache line size of one byte. For
this cache organization a word reference can transfer one byte from
the CPU to the cache and vice versa. Thus, the cache size is the same
as in Section 5.1, only the cache organization has changed. The cache
placement function -(r) is r mod 2. We assume that the references of
the symbolic tracefile are already transformed to word references and
references &n, &s, and &a[0] are aligned to the first slot.
The initial hit set is H 0). Based on the tracefile given in
(1) the hit set recurrence is to be derived. Similar to example in Section
5.1 we apply operator fi to the hit set recurrence according to (8).
For all read references in the boundary no cache hit occurred. The
cache is loaded with the contents of variable n and the number of cache
hits is zero. In the next step we evaluate the recurrence relation. We
continue to apply operator fi according to rule (19).
where C k , h k , and - k denote symbolic cache, number of hits and time
stamp counter of the kth iteration of the hit set recurrence. In order
to keep the description of hit set recurrences as small as possible we
rewrite the outer γ-function of (20) as P. We further apply operator fi
and obtain
In the next step we evaluate write reference σ(&s) and get
Here, we can simplify the γ-function: because variable s has been read
within the current iteration of the loop without being overwritten in the
cache, the condition of the outer γ-function evaluates to true. Hence,
we obtain
Similar to the previous step we can reduce both γ-functions.
Read reference &a[k + 1] produces a cache miss. Thus, the next step
can be simplified too.
In the last step the fi operator is applied to write reference &s. It is a
cache hit and we can eliminate the γ-functions.
Arguments similar to those in Section 5.1 show that the conditions of
the outer γ-functions in P' and P'' are true for k ≥ 1 and false for k = 0.
Therefore, we can derive an unconditional recurrence relation for the
number of cache hits (k ≥ 1).
A closed form solution is given by the resolved recurrence.
The index z of the final hit set H is determined by the loop condition.
Thus, the analytical cache hit function
shows that for our example the set associative cache performs
better than the direct mapped cache of the same size.
6. Experimental Results
In order to assess the effectiveness of our cache hit prediction we have
chosen a set of C-programs as a benchmark. We have adopted the
evaluation framework introduced in (Fahringer and Scholz,
1997; Fahringer and Scholz, 1999) for the programming language C
and the cache evaluation. The instrumentation was done by hand although
an existing tool such as CPROF (Lebeck and Wood, 1994)
could have instrumented the benchmark suite. Our symbolic evaluation
framework computed the symbolic tracefiles and symbolically evaluated
data caches. In order to compare predictions against real values we
int n; char a[100];
void sum()
int
Figure
10. Benchmark Program
have measured the cache hits for a given cache and problem size. For
measuring the empirical data we used the ACS cache simulator(Hunt,
1997). The programs of the benchmark suite were amended by the
instrumentation routines of a provided library bin2pdt. The generated
tracefiles were stored as PDATS (Johnson and Ha, 1994) files and later
read by the ACS cache simulator.
The first program of the benchmark suite is the example program in
Figure 10. In contrast to Section 5 we have analyzed a direct mapped
data cache with a cache line size greater than one byte. Furthermore,
the first byte of array a is aligned to the first byte of an arbitrary
cache line and the cache has more than one cache line. Our framework
computes a cache hit function, where the number of cache hits is determined
by 2(n − 1) − ⌈(n − 1)/cls⌉,
where cls is the cache line size of 4, 8 or 16
bytes. Intuitively, we get 2(n − 1) potential cache hits. For every new
cache line a miss is implied. Therefore, we have to subtract the number
of touched cache lines ⌈(n − 1)/cls⌉ from the number of read references.
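As a check against the measurements reported below: for n = 100 and cls = 4 this gives 2 · 99 − ⌈99/4⌉ = 198 − 25 = 173 hits, and for n = 1000 and cls = 8 it gives 2 · 999 − ⌈999/8⌉ = 1998 − 125 = 1873 hits, exactly the values shown in Tables Ib and Ic.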
Table Ia describes problem sizes n (n - first column), number of
read (R-Ref. - second column) and write (W-Ref. - third column) ref-
erences, and sum of read and write references (T-Ref. - fourth column).
Tables Ib - Id compare measured with predicted cache hits for various
data cache configurations (capacity/cache line size). For instance, D-Cache
256/4 corresponds to a cache with 256 bytes and a cache line size
of 4 bytes. Every table comprises four columns. M-Miss tabulates the
measured cache misses, M-Hits the measured cache hits, and P-Hits
the predicted cache hits. In accordance with our accurate symbolic
cache analysis we observe that the predicted hits are identical with the
associated measurements for all cache configurations considered.
The same benchmark program was taken to derive the analytical
cache hit function for set associative data caches. Note that the result
is the same as for direct mapped caches. Even the empirical study
with two way data caches of the same capacity delivered the same
measurements given in Tables Ib - Id.
Table Ia. Problem Size
n R-Ref. W-Ref. T-Ref.
100 199 99 298
1000 1999 999 2998
10000 19999 9999 29998
Table Ib. D-Cache 256/4
n M-Miss M-Hit P-Hit
100 26 173 173
1000 251 1748 1748
10000 2501 17498 17498
Table Ic. D-Cache 16K/8
n M-Miss M-Hit P-Hit
1000 126 1873 1873
10000 1251 18748 18748
Table Id. D-Cache 64K/16
n M-Miss M-Hit P-Hit
Table IIa. Experiment of mcnt - Problem Size
n×m R-Ref. W-Ref. T-Ref.
Table IIb. D-Cache 64K/16
n×m M-Miss M-Hit P-Hit
100×100 5000 5000 5000
100×200 100000 100000 100000
The second program mcnt of the benchmark suite counts the number
of negative elements of an n × m matrix. The counter is held in
a register and does not interfere with the memory references of the
matrix. Again, we analyzed the program with three different direct
mapped cache configurations 256/4, 16K/8 and 64K/16. For the data
cache sizes 256/4 and 16K/8 the cache hit function is zero. This is due
float f[N][N], u[N][N], new[N][N];
void jacobi_relaxation()
int i,j;
Figure 11. Jacobi Relaxation
Table IIIa. Experiment of Jacobi Relaxation - Problem Size
n×n R-Ref. W-Ref. T-Ref.
90×90 48020 9604 288120
Table IIIb. D-Cache 256/4
n×n M-Miss M-Hit P-Hit
Table IIIc. D-Cache 512/4
n×n M-Miss M-Hit P-Hit
50×50 7102 4418 4418
90×90 47672 348 348
Table IIId. D-Cache 1K/4
n×n M-Miss M-Hit P-Hit
50×50 7102 4418 4418
90×90
to the usage of double elements of the matrix. Only for the 64K/16
configuration the program can benefit from a data cache, and the cache
hits are given by ⌈(n · m)/2⌉. In
Tables IIa and IIb the analytical function is
compared to the measured results. Similar to the first benchmark the
cache hit function remains the same for set associative data caches with
the same capacity and the measurements are identical to Table IIb.
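As a check: for the 100 × 100 matrix this gives ⌈(100 · 100)/2⌉ = 5000 cache hits, which matches both the measured and the predicted value in Table IIb.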
The third program jacobi relaxation in Figure 11 calculates the
Jacobi relaxation of an n × n float matrix. In a doubly nested loop
the value of the resulting matrix new is computed. Both loop variables
are held in registers. Therefore, for direct mapped data caches
interference can only occur between the read references of arrays f and
u. We investigated the Jacobi relaxation code with a cache configuration
of 256/4, 512/4 and 1K/4. The number of cache hits is given by
a closed-form expression whose cases depend on the relation between the problem
size n and the number of cache lines ns, according to Section 4. We compared
the measured cache hits with the values of the cache hit function.
The results of our experiments are shown in Tables IIIa - IIId.
The fourth program gauss jordan in Figure 12 is a linear equation
solver. Note that this program contains an if-statement inside the loop.
Variables i, ip, j, and k are held in registers. For direct mapped
data caches interference can only occur between the read references
of array a. We have analyzed the Gauss Jordan algorithm with a cache
configuration of 256/4.
We could classify three different ranges of n where the behavior of
the hit function varies.
C(n) must be described for each n in the range. Furthermore, P (n) is a
function containing 64 cases. Note that the number 64 stems from the
number of cache lines. For the sake of demonstration we only list some
float a[N][N];
void gauss_jordan(void)
int i,ip,j,k;
if (i != j)
(a[j][i] * a[i][k]) * a[i][i];
Figure 12. Gauss Jordan
cases.
In Tables IVa and IVb we compare the measured results with the values of the hit function.
The ability to determine accurate number of cache hits depends on
the complexity of the input programs. The quality of our techniques
to resolve recurrences, analyse complex array subscript expressions,
loop bounds, branching conditions, interprocedural effects, and pointer
operations impacts the accuracy of our cache hit function. For instance,
if closed forms cannot be computed for recurrences, then we introduce
approximations such as symbolic upper and lower bounds (Fahringer,
1998a). We have provided a detailed analysis of codes that can be
handled by our symbolic evaluation in (Fahringer and Scholz, 1999).
Table IVa. Experiment of Gauss Jordan - Problem Size
n×n R-Ref. W-Ref. T-Ref.
200×200 15999600 3999900 19999500
2000×2000 15999996000 3999999000 19999995000
Table IVb. D-Cache 256/4
n×n M-Hit P-Hit
200×200 7060901 7060901
2000×2000 5825464317 5825464317
7. Related Work
Traditionally, the analysis of cache behavior for worst-case execution
time estimates in real-time systems (Park, 1993; Puschner and Koza,
1989; Chapman et al., 1996) was far too complex. Recent research (Ar-
nold et al., 1994) has proposed methods to estimate tighter bounds for
WCET in systems with caches. Most of the work has been successfully
applied to instruction caches (Liu and Lee, 1994) and pipelined architectures
(Healy et al., 1995). Lim et al. (1994) extend the original timing
schemas, introduced by Puschner and Koza (1989), to handle pipelined
architectures and cached architectures. Nearly all of these methods rely
on frequency annotations of statements. If the programmer provides
wrong annotations, the quality of the prediction can be doubtful. Our
approach does not need user (programmer) interaction since it derives
all necessary information from the program's code 6 and it does not
restrict program structure such as (Ghosh et al., 1997).
A major component of the framework described in (Arnold et al.,
1994) is a static cache simulator (Mueller, 1997) realized as a data flow
analysis framework. In (Alt et al., 1996) an alternate formalization
which relies on the technique of abstract interpretation is presented.
Both of these approaches are based on data-flow analysis but do not
properly model control flow. Among others, they cannot deal with dead
paths and zero-trip loops all of which are carefully considered by our
evaluation framework (Fahringer and Scholz, 1997; Blieberger,
1997).
Implicit path enumeration (IPET) (Li et al., 1995; Li et al., 1996) allows
to express semantic dependencies as constraints on the control flow
graph by using integer linear programming models, where frequency
annotations are still required. Additionally, the problem of IPET is
that it only counts the number of hits and misses and cannot keep
track of the history of cache behavior. Only little work has been done to
introduce history variables (Ottosson and Sjoedin, 1997). While IPET
can model if-statements correctly (provided the programmer supplies
correct frequency annotations), it lacks adequate handling of loops. Our
tracefiles exactly describe the data and control flow behavior
of programs which among others enables precise modeling of loops.
In (Theiling and Ferdinand, 1998) IPET was enriched with information
of the abstract interpretation described in (Alt et al., 1996).
A graph coloring approach is used in (Rawat, 1993) to estimate
the number of cache misses for real-time programs. The approach only
supports data caches with random replacement strategy 7 . It employs
standard data-flow analysis and requires compiler support for placing
variables in memory according to the results of the presented algorithm.
Alleviating assumptions about loops and cache performance improving
transformations such as loop unrolling make their analysis less precise
than our approach. It is assumed that every memory reference that is
accessed inside of a loop at a specific loop iteration causes a cache
miss. Their analysis does not consider that a reference might have
been transmitted to the cache due to a cache miss in a previous loop
iteration.
Much research has been done to predict cache behavior in order to
support performance oriented program development. Most of these approaches
are based on estimating cache misses for loop nests. Ferrante
et al. (1991) compute an upper bound for the number of cache lines
accessed in a sequential program which allows them to guide various
code optimizations. They determine upper bounds of cache misses for
array references in innermost loops, the inner two loops, and so on. The
number of cache misses of the innermost loop that causes the cache to
overflow is multiplied by the product of the number of iterations of
the overflow loop and all its containing loops. Their approximation
technique may entail polynomial evaluations and suffers from limited
control flow modeling (unknown loop bounds, branches, etc.).
Lam et al. (1991) developed another cache cost function based on
the number of loops carrying cache reuse which can either be temporal
(relating to the same data element) or spatial (relating to data elements
in the same cache line). They employ a reuse vector space in combination
with localized iteration space. Cross interference (elements from
different arrays displace each other from the cache) and self interferences
(interference between elements of the same array) are modeled.
Loop bounds are not considered even if they are known constants.
Temam et al. (1994) examine the source code of numerical codes for
cache misses induced by loop nests.
Fahringer (1996; 1997) implemented an analytical model that estimates
the cache behavior for sequential and data parallel Fortran
programs based on a classification of array references, control flow
modeling (loop bounds, branches, etc. are modeled by profiling), and
an analytical cache cost function.
Our approach goes beyond existing work by correctly modeling control
flow of a program even in the presence of program unknowns and
branches such as if-statements inside of loops. We cover larger classes
of programming languages and cache architectures, in particular data
caches, instruction caches and unified caches including direct mapped
caches, set associative, and fully associative caches. We can handle most
important cache replacement and write policies. Our approach accurately
computes cache hits, whereas most other methods are restricted
to approximations.
Closed form expressions and conservative approximations can be
found according to the steps described in Section 1.
Symbolic evaluation can also be used for WCET analysis without
caching (Blieberger, 1997), thereby solving the dead paths problem
of program path analysis (Park, 1993; Altenbernd, 1996). In addi-
tion, it can be used for performing "standard" compiler optimizations,
thus being an optimal framework for integrating optimizing compilers
and WCET analysis (compare Engblom et al. (1998) for a different
approach).
8. Conclusion and Future Work
In this paper we have described a novel approach for estimating cache
hits as implied by programs written in most procedural languages (in-
cluding C, Ada, and Fortran). We generate a symbolic tracefile for the
input program based on symbolic evaluation which is a static technique
to determine the dynamic behavior of programs. Symbolic expressions
and recurrences are used to describe all memory references in a program
which are then stored chronologically in the symbolic tracefile. A cache
hit function for several cache architectures is computed based on a
cache evaluation technique.
In the following we describe the contributions of our work. While
most other research targets upper bounds for cache misses, we focus
on deriving the accurate number of cache hits. We can automatically
determine an analytical cache hit function at compile-time without
user interaction. Symbolic evaluation enables us to represent the cache
hits as a function over program unknowns (e.g. input data). Our approach
allows a comparison of various cache organizations for a given
program with respect to cache performance. We can easily port our
techniques across different architectures by strictly separating machine
specific parameters (e.g. cache line sizes, replacement strategies, etc.)
from machine-independent parameters (e.g. loop bounds, array index
expressions, etc.). A novel approach has been introduced to model
arrays as part of symbolic evaluation which maintains the history of
previous array references.
We have shown experiments that demonstrate the effectiveness of
our approach. The predicted cache behavior for our example codes
perfectly matches the measured data.
Although we have applied our techniques to direct mapped data
caches with write through and no write-allocate policy and set associative
data caches with write through and write-allocate policy, it
is possible to generalize our approach for other cache organizations
as well. Moreover, our approach is also applicable for instruction and
unified caches.
In addition our work can be extended to analyze virtual memory
architectures. A combined analysis of caching and pipelining via symbolic
evaluation will be conducted in the near future (compare Healy
et al. (1999) for a different approach).
The quality of our cache hit function depends on the complexity
(e.g. recurrences, interprocedural effects, pointers, etc.) of the input
programs. If, for instance, we cannot find closed forms for recurrences,
then we employ approximations such as upper bounds. We are currently
extending our symbolic evaluation techniques to handle larger
classes of input programs. Additionally, we are building a source-code
level instrumentation system for the SPARC processor architecture.
We investigate the applicability of our techniques for multi-level data
and instruction caches. Finally, we are in the process to conduct more
experiments with larger codes.
Notes
1 A cache miss occurs if referenced data is not in cache and needs to be loaded
from main memory.
2 A cache hit occurs if referenced data is in cache.
3 Symbolic evaluation is not to be confused with symbolic execution (see e.g. (King,
1976)).
4 All variables which are written inside a loop - including the loop variable - are
called recurrence variables.
5 A slot consists of one cache line. See Section 2.
6 Clearly our approach cannot bypass undecidability.
7 Random replacement seems very questionable for real-time applications because
of its indeterministic behavior.
--R
Analyzing and Visualizing Performance of Memory Hierachies
A discipline of programming.
Kluwer Academic Publishers.
Computer Architecture - A Quantitative Approach
His research interests include areas of analysis of algorithms and data structures
He studied Computer Science at the TU Vienna and received his doctoral degree in
Readers may contact Johann Blieberger at the Department of Computer-Aided Automation
Thomas Fahringer Thomas Fahringer received a Masters degree in
Readers may contact Fahringer at the Institute for Software Technology and Parallel Systems
Bernhard Scholz Bernhard Scholz is going to enrol a position as an Assistant Professor of Computer Science at the Dept.
Readers may contact Scholz at the Dept.
Figure 13.
Technical University Vienna
--TR
--CTR
Berhard Scholz , Johann Blieberger , Thomas Fahringer, Symbolic pointer analysis for detecting memory leaks, ACM SIGPLAN Notices, v.34 n.11, p.104-113, Nov. 1999
Thomas Fahringer , Bernhard Scholz, A Unified Symbolic Evaluation Framework for Parallelizing Compilers, IEEE Transactions on Parallel and Distributed Systems, v.11 n.11, p.1105-1125, November 2000
Johann Blieberger, Data-Flow Frameworks for Worst-Case Execution Time Analysis, Real-Time Systems, v.22 n.3, p.183-227, May 2002
B. B. Fraguela , R. Doallo , J. Tourio , E. L. Zapata, A compiler tool to predict memory hierarchy performance of scientific codes, Parallel Computing, v.30 n.2, p.225-248, February 2004 | worst-case execution time;symbolic evaluation;cache hit prediction;static analysis |
346866 | Analytical comparison of local and end-to-end error recovery in reactive routing protocols for mobile ad hoc networks. | In this paper we investigate the effect of local error recovery vs. end-to-end error recovery in reactive protocols. For this purpose, we analyze and compare the performance of two protocols: the Dynamic Source Routing protocol (DSR[2]), which does end-to-end error recovery when a route fails and the Witness Aided Routing protocol (WAR[1]), which uses local correction mechanisms to recover from route failures. We show that the performance of DSR degrades extremely fast as the route length increases (that is, DSR is not scalable), while WAR maintains both low latency and low resource consumption regardless of the route length. | Introduction
Routing protocols for ad hoc networks can be classified in two
general categories: reactive and proactive, depending on their
reaction to changes in the network topology. Proactive proto-
cols, such as distance-vector protocols (DSDV[4]), are highly
sensitive to topology changes. They require mobile hosts to periodically
exchange information in order to maintain an accurate
image of the network. While convergence is faster in such
protocols, the cost in wireless bandwidth required to maintain
routing information can be prohibitive. Moreover, mobile
hosts are engaged in route construction and maintenance even
when they do not need to communicate, which means that a
percentage of the collected routing information may never be
used. Therefore, many researchers have proposed to use reactive
protocols (like WAR[1], DSR[2], SSA[6], AODV[3] etc.),
which only trigger route construction or update based on the
mobility and needs of mobile hosts.
There have been several simulation studies which have attempted
to characterize various performance aspects of existing
routing protocols. Boppana et al. [9] compared reactive-
style protocols with proactive protocols by looking at their performance
under different network loads. They found that re-active
protocols do not perform as well as proactive ones under
heavy network loads. On the other hand, simulation studies
conducted by Borch et. al. [10] on four different protocols
(DSR [2], AODV [3], DSDV [4] and TORA [5]) indicate that
reactive routing protocols may outperform the proactive routing
protocols. Their work focused on packet delivery ratio,
routing overhead and path optimality. In particular, they suggested
that DSR tends to be superior under most scenarios experimented
with. However, it is unclear from their results how
DSR scales as the network size (route length) increases.
Along with simulation experiments, a few attempts have been
made to evaluate the performance of routing protocols using
mathematical modeling. Jacquet and Laouiti [11] did a preliminary
analytical comparison between reactive and proactive
protocols, using a random graph model. Although this model
limits the network size to indoor or short range outdoor net-
works, it provides useful insights in the performance of reactive
protocols, particularly about the impact of route non-optimality
and/or symmetry.
In this paper we focus on the performance of reactive proto-
cols. More precisely, we analyze and compare the error recovery
techniques used by existing reactive protocols. Unlike protocols
from the proactive family, reactive protocols are more
likely to experience route errors because of their more conservative
approach in collecting topology information. In general,
such errors may have several causes. First, radio links are inherently
sensitive to noise and transmission power fluctuations.
This, as well as problems like the hidden terminal, can cause
temporary or permanent disruption in service at the wireless
link level, in one direction or in both. Second, host mobility can
further increase link instability, reducing the probability that a
packet is successfully transmitted over a link. The effect of link
instability is magnified as route length increases, which makes
the way routing protocols cope with route errors a critical issue.
There are two general ways to deal with route errors: local and
end-to-end error recovery. We present an analytical comparison
between the performance of reactive protocols which use
local error recovery and reactive protocols which use end-to-end
error recovery. The goal is to determine which error recovery
mechanism is suitable at a given mobility rate in the mobile
ad hoc network and to quantify its performance in terms of average
packet latency and cost of packet delivery (in terms of
bandwidth consumed) as a function of parameters such as route
length, size of the network, mobility rate and packet arrival
rate. In particular, we analyze the performance of WAR (which
employs local error recovery) and DSR (which uses end-to-end
error recovery) and compare them using two metrics: the probability
that a packet is delivered to its destination in one attempt
and the traffic generated (data plus control packets) to successfully
route a packet (T routing ). Our analysis shows that,
unless some local error recovery technique is employed to deal
with failures along the route to destination, the performance of
reactive protocols is not scalable with the size of the network
(in terms of route length).
The rest of the paper is organized as follows. Section 2 presents
a qualitative comparison between end-to-end and local error re-
covery. This section also provides a brief description of the
recovery techniques used in DSR and WAR. In Section 3 we
develop the analytical tools needed to characterize the performance
of WAR and DSR. Numerical results are discussed in
Section 4 and conclusions based on these results are outlined
in Section 5.
2 Error Recovery
When a packet encounters a link error, the routing protocol has
three choices:
1. report the error to the sender of the packet immediately
(negative acknowledgment)
2. do nothing (the sender will timeout waiting for a positive acknowledgment)
3. invoke some localized correction mechanism to attempt
to bypass the link in error
A protocol which implements one of the first two options uses
end-to-end error recovery, whereas a protocol which implements
the third option uses local error recovery. Among the
existing reactive protocols (WAR, DSR, AODV, TORA, ABR,
SSR), only WAR, ABR and TORA 1 include a local error recovery
mechanism. The others use either negative acknowledgments
or timeouts to detect errors and recover them by having
the original host resend the packet. Our attention will focus
on two related protocols: WAR, which uses local error recovery
and DSR, which uses end-to-end error recovery. The next
section gives a brief description of the two protocols and emphasizes
differences and similarities between them.
recovery technique is based on link reversal and therefore
it is not applicable in networks with unidirectional links
2.1 WAR vs. DSR
WAR and DSR are members of the same family of protocols
(reactive) and they use source routing to forward packets from
one host to another. While their route construction mechanism
is generally the same, WAR implements a different route selection
and maintenance scheme. Both protocols allow mobile
hosts to operate in promiscuous receive mode, but with different
goals: in DSR, packet snooping is done for route maintenance
purposes, while WAR uses snooping to help the routing
process. However, two essential aspects distinguish WAR and
DSR and have a major impact on their performance: routing in
the presence of unidirectional links and error handling.
2.1.1 Routing in the presence of unidirectional links
Although DSR is claimed ([2] and [10]) to be able to handle
unidirectional links, its ability is limited to only computing
routes which avoid such links. That is, DSR routing will fail if
such links appear for short periods of time after the routes are
computed. Transient events in the network are very likely to
cause certain links to temporarily appear as non-operational (in
one direction or in both), in which case DSR will fail to route
packets and will instead spend time and network resources to
re-discover what might be only temporarily out of order routes.
On the other hand, WAR uses witness hosts [1] to overcome
such transient problems, which greatly reduces the overall
packet delivery time and the network traffic generated by (ex-
pensive!) route discovery messages. Witness hosts of a given
host X are essentially routers which act on X's behalf when they
detect that a packet sent out by X did not appear to reach its
target. An illustration of how witnesses participate in the routing
process is shown in Fig.1. Both W1 and W2 hear X's transmission
to Y, which makes them potential active witnesses of
X with respect to the packet P (X!Y ) sent to Y. At this point,
they will wait to see if Y attempts to deliver the packet to Z,
which would mean that Y received it from X. If that is the case,
their role with respect to the packet P (X!Y ) reduces to sending
an acknowledgment to X (to avoid an error in case X could
not hear Y's transmission to Z). If neither W 1 nor W 2 hear Y's
transmission to Z, they conclude that the packet P (X!Y ) failed
to reach Y. In this case, they will both attempt to deliver the
packet directly to Z, although, indirectly, they target Y as well.
Since W1 and W2 do not necessarily have a way to communicate
with each other and avoid contention, they will ask Z for
arbitration before sending the packet. If Z rejects their request,
it means that it has already received the packet from Y and their
role reduces to sending the acknowledgment to X. Otherwise,
the one selected by Z will deliver the packet and then inform X
about it.
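To summarize the control flow, here is a schematic C sketch of what a witness may do (our own reading of the mechanism, not WAR's actual implementation; all helper names below are hypothetical):

typedef struct packet packet;   /* opaque packet type (hypothetical) */

/* Hypothetical helpers, declared only to make the control flow explicit. */
extern int  heard_forward_by(packet *p, int from, int to, int timeout);
extern int  arbitration_granted(packet *p, int host);
extern void deliver(packet *p, int host);
extern void send_ack(packet *p, int host);

/* Behaviour of a witness W of host X after overhearing X's transmission of p to Y,
   where Z is the hop that follows Y on the source route. */
void witness_overheard(packet *p, int X, int Y, int Z, int timeout)
{
    if (heard_forward_by(p, Y, Z, timeout)) {
        send_ack(p, X);                  /* Y forwarded the packet; only confirm to X */
    } else if (arbitration_granted(p, Z)) {
        deliver(p, Z);                   /* Z selected this witness: deliver on X's behalf */
        send_ack(p, X);
    } else {
        send_ack(p, X);                  /* Z rejected the request: it already has the packet */
    }
}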
2.1.2 Error handling
When a route problem is detected and no alternative route is immediately
available, DSR sends a negative acknowledgment to
Figure 1. W1 and W2 help transmitting the packet from X to Y or Z
the original sender of the packet. This method has two draw-
backs. First, when link errors occur far away from the sender
and close to the destination, the fact that the packet succeeded
in traversing a long path is not exploited. This increases the
overall packet delivery time and the network resources used by
the routing protocol. Second, negative acknowledgments tend
to add to the network overhead precisely when the network is
overloaded (i.e. in case of congestion).
WAR uses a localized error recovery mechanism to correct the
problem without involving the original sender in this process.
The operation of the recovery mechanism is illustrated in Fig.
2. Initially, host X looks up an alternate route to the destination
in its own route cache. If one is available, X uses it to forward
the packet. Otherwise, it broadcasts a copy of the original mes-
sage, with the tag changed to Rrecovery , to its neighbors. After
sending out the recovery message, X will drop the original
message, and its role in the routing process ends. No acknowledgment
is necessary for recovered messages. As soon as one
of the hosts in the remaining route (indicated in the message
header) is reached, the message tag is changed back and the packet continues its travel as a normal data packet.
Figure 2. If witnesses W1 and W2 can't contact Y or Z, host X initiates the route recovery protocol
The number of steps a message can travel as a route recovery
message (with a Rrecovery tag) is indicated in the constraint
field attached to the original data packet by the sender (the Recovery
Depth value). When the Recovery Depth counter becomes
zero (being decremented by each host which receives
the Rrecovery message), the message is no longer propagated
and the recovery fails (on that branch of the network). This
way, WAR also provides a framework for setting message pri-
orities. A greater Recovery Depth will cause the recovery protocol
to be more insistent, increasing the chances of success,
whereas a low Recovery Depth will cause the packet to be
dropped if no fast recovery is possible.
3 Analysis of Reactive Protocols
This section focuses on the development of analytical tools
which will be used to study the behavior of WAR.We first introduce
basic notions related to the network (parameters, assump-
tions, etc.), then we examine a few general results related to the
class of reactive protocols, and finally we analyze the performance
of WAR and compare it with the performance of DSR.
3.1 Preliminaries
Let X and Y be two arbitrary hosts in the network. If X and Y
are within transmission range of each other (that is, Y may hear
packets sent by X and X may hear packets sent by Y), then we
say that there is a link between X and Y, and we denote it by
(X,Y). Further, if the link (X,Y) has existed during the interval
of time if at time t host X needs to transmit a packet
to Y, the link can be in one of the following states:
ffl broken: if Y is no longer in the transmission range of X.
ffl non-operational: if X and Y are still within range but
cannot hear each other (due to noise, etc.
ffl unidirectional: if one and only one of the following conditions
is
a) Y can hear X (the link is direct operational), or
b) X can hear Y (the link is reverse operational).
ffl bidirectional: if Y can hear X and X can hear Y.
failure: A link (X,Y) fails when host Y does not
receive (directly from X or from a witness host) a packet sent
to it by X. Note that by this definition, a direct operational link
cannot fail.
Route failure: A route fX1 ; fails if a
packet sent by X 1 to X k does not reach X k (that is, the packet
needs to be resent by X 1 ).
3.2 Assumptions
In order to simplify the analysis, we make the following assumptions
about the network and about the routing protocol(s)
discussed:
i. The average route length between any two hosts is a
known value EL . Determination of EL is a research topic
in itself and it is out of the scope of this paper.
Name Description
- packet arrival rate (communication frequency)
location change arrival rate (move frequency)
n number of mobile hosts in the network
A total area of the network (in square meters)
r transmission range of a mobile host (in meters)
U probability that a (non-broken) link is
non-operational in at least one direction
EL expected route length between any two hosts
Table
1. System parameters
ii. The time between calls (packets) between hosts is exponentially
distributed, with mean 1=-.
iii. The time between location changes for each host is also
exponentially distributed, with mean 1=-. Note that - can
be 0, in which case the network is static.
iv. The probability that a link is not bidirectional is a known
parameter, p U . A non-bidirectional link is equally likely
to be non-operational in either direction.
v. The locations of mobile hosts are uniformly distributed
within the network area.
vi. All mobile hosts have the same transmission range, r.
vii. Mobile hosts store only one route to a given destination
(that is, although a protocol may allow multiple routes to
a given destination to be cached at a mobile host, a second
route will not be available in case the primary route fails).
Table
1 displays the parameters which are assumed to be known
about the system. Also, we will be using the notation N for
the random variable representing the number of neighbors for
a mobile host and EN for the expected value of N .
3.3 General Results
This section provides the necessary tools for the analysis of
WAR, without getting into protocol dependent details.
Lemma 1 The probability that a particular mobile host Y is in
the vicinity of a host X is:
p_0 = πr^2 / A .
Proof: The coverage zone of X has an area of πr^2, while the
total number of places where Y can be is A, which proves the
result.
Theorem 2 The average number of neighbors for a mobile
host, E[N], is E[N] = (n - 1) πr^2 / A.
Proof: Using p_0 from Lemma 1, it follows that the probability
that exactly k of the other n - 1 hosts are in the transmission range of X is
the binomial probability C(n-1, k) p_0^k (1 - p_0)^{n-1-k}. Thus, the expected value of N
(defined in Section 3.2) is E[N] = (n - 1) p_0 = (n - 1) πr^2 / A, which proves
the theorem. In what follows, we will use E_N = E[N].
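As a small illustration of Lemma 1 and Theorem 2, the following Python sketch evaluates p_0 and E[N]; the numeric parameter values are invented for illustration only.

import math

def p0(r, A):
    # Lemma 1: probability that a given host lies in X's coverage zone of area pi*r^2,
    # with hosts placed uniformly over the network area A.
    return math.pi * r * r / A

def expected_neighbors(n, r, A):
    # Theorem 2: N is binomial over the other n-1 hosts, so E[N] = (n-1) * p0.
    return (n - 1) * p0(r, A)

# Illustrative values: 50 hosts, a 1000m x 1000m area, 250m transmission range.
print(p0(250, 1000 * 1000))                  # ~0.196
print(expected_neighbors(50, 250, 1000**2))  # ~9.6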
Theorem 3 The probability that a link is broken when a packet
needs to be transmitted is:
p_B = μ / (λ + μ) .
Proof: By definition, a link (S, R) is broken when host R is no
longer in the transmission range of S. In other words, a broken
link is detected when a transmission from S happens after
R moved out of the transmission range of S. Let T and M be
random variables which describe the waiting time until a transmission
occurs and the waiting time until a move occurs, respectively.
We assumed (in Section 3.2) that T and M have
exponential distributions with parameters λ (the transmission
frequency) and μ (the location change frequency), respectively.
Thus, the probability that a move occurs before a transmission,
with the move falling in the interval [t, t + δt], is given by:
∫_t^{t+δt} μ e^{-μx} e^{-λx} dx .
Hence, the unconditional probability that a transmission from
S to R occurs after R moved out of the transmission range of
S is
P(M < T) = ∫_0^∞ μ e^{-μx} e^{-λx} dx = μ / (λ + μ),
which leads to:
p_B = μ / (λ + μ) .
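The closed form p_B = μ/(λ + μ) in the reading of Theorem 3 above can be sanity-checked numerically; the sketch below compares it against a Monte Carlo estimate obtained by sampling the two independent exponential waiting times (the rates are illustrative).

import random

lam, mu = 2.0, 0.5   # illustrative packet rate and move rate (per unit time)

# Closed form: a move (Exp(mu)) happens before the next transmission (Exp(lam)).
p_b_formula = mu / (lam + mu)

# Monte Carlo check with independent exponential samples.
trials = 100_000
hits = sum(random.expovariate(mu) < random.expovariate(lam) for _ in range(trials))
print(p_b_formula, hits / trials)   # the two numbers should be close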
Remark 1 Note that p B is not the probability that a link is broken
at all times, but the probability that the link is not available
when a packet needs to be transmitted. Also, the probability p U
is only defined when the link is not broken. Thus, the probability
that a link is bidirectional when a packet needs to be transmitted
is not 1 - p_U but (1 - p_B)(1 - p_U).
Lemma 4 The probability that a non-broken link is operational
in one direction is
Proof: Let p dir (p rev ) be the probability that a link (X; Y ) is
direct (reverse) operational. We have:
(from the assumptions in Section 3.2), we
get that:
Corollary 4.1 The probability that a non-broken link is non-operational
in both directions is 2
Proof: This probability is given by
3.3.1 Cost analysis
Lemma 5 If p_L is the probability that a packet is successfully
transmitted over a link, then the average number of links successfully
passed by a packet along a route before an error occurs
is:
q = p_L / (1 - p_L) .
Proof: The number of links passed to encounter an error (including
the link in error) is a geometric r.v. Q, with distribution
P(Q = j) = p_L^{j-1} (1 - p_L) and expected value E[Q] = 1 / (1 - p_L).
Hence, the average number of links successfully
passed before an error occurs is:
q = E[Q] - 1 = p_L / (1 - p_L) .
For WAR, p L is given by Theorem 9.
Lemma 6 If p_S is the probability that a packet is successfully
routed to its final destination, then the average number of routing
failures for a given packet is
z = (1 - p_S) / p_S .
Proof: Let Z be a r.v. which describes the number of routing
attempts needed to successfully deliver a packet to its final destination.
Z has a geometric distribution given by P(Z = j) = (1 - p_S)^{j-1} p_S
and the expected value E[Z] = 1 / p_S , so
that the average number of routing failures before a success is:
z = E[Z] - 1 = (1 - p_S) / p_S .
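The two geometric expectations of Lemma 5 and Lemma 6 translate directly into code; the following sketch only illustrates the formulas as reconstructed above.

def links_passed_before_error(p_l):
    # Lemma 5: Q is geometric with failure probability (1 - p_l), E[Q] = 1/(1 - p_l),
    # so the expected number of links passed *before* the error is E[Q] - 1.
    return p_l / (1.0 - p_l)

def routing_failures(p_s):
    # Lemma 6: Z ~ Geometric(p_s) attempts until success; failures = E[Z] - 1.
    return (1.0 - p_s) / p_s

print(links_passed_before_error(0.9))  # 9.0
print(routing_failures(0.8))           # 0.25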
For WAR, p S is given by Theorem 11.
Theorem 7 If C_LS is the cost of a successful link transmission
and C_LF the cost of handling an error at link level, then the
average cost of routing a packet to its final destination in a reactive
protocol is:
C = z (q C_LS + C_LF) + E_L C_LS .
Proof: In computing the overall cost of routing a packet from
source to destination, we have to account for possible routing
errors. Thus, the cost of routing a packet is the sum between
the cost generated by (possible) failures and the cost of the final
(error free) attempt:
C = z C_RF + C_RS ,
where z is the expected number of failures (Lemma 6).
The cost of a route failure, C_RF, is determined by the cost of
partially routing the packet up to the link in error and the cost
of informing the sender about the error (error handling):
C_RF = q C_LS + C_LF ,
where q is the number of links successfully passed (Lemma 5).
Finally, the cost of error free routing, C_RS, is determined by the
route length E_L and the cost of successful link transmission,
C_LS. Thus, we have:
C_RS = E_L C_LS ,
and using Lemma 6 and Lemma 5 the result is straightforward.
Note that p_L and p_S (and hence q and z) are protocol dependent
values. We will discuss each of these values for WAR in Section 3.4
and for DSR in Section 4. On the other hand, C LS and C LF
are generic values; they may represent different quantities, depending
on the purpose of the analysis (i.e. time, amount of
traffic, etc.)
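As a sketch of how Theorem 7 is used, the following Python function combines Lemmas 5 and 6 with the cost decomposition above; all parameter values are illustrative and the function name is ours.

def routing_cost(p_l, p_s, e_l, c_ls, c_lf):
    # Theorem 7: C = z * C_RF + C_RS with
    #   C_RF = q * C_LS + C_LF   (partial route up to the error, plus error handling)
    #   C_RS = E_L * C_LS        (error-free traversal of the whole route)
    q = p_l / (1.0 - p_l)          # Lemma 5
    z = (1.0 - p_s) / p_s          # Lemma 6
    return z * (q * c_ls + c_lf) + e_l * c_ls

# Illustrative numbers: unit cost per successful hop, error handling three times as expensive.
print(routing_cost(p_l=0.9, p_s=0.7, e_l=5, c_ls=1.0, c_lf=3.0))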
3.4 Analysis of WAR
Considering that packets may be dropped, delayed and/or re-sent
several times before they are successfully delivered to the
destination, we are interested in an estimate of the following
quantities for WAR:
1. the probability p S that a packet is routed successfully in
one attempt (without being resent), and
2. the total amount of traffic T routing generated to successfully
route a packet from source to destination.
We will first determine the following probabilities, which will
help in the derivation of our target results:
p_W : probability that at least a witness host can bypass
a problematic link
p_L : probability that link transmission succeeds
p_R : probability that a link failure is recovered from
p_S : probability that a packet arrives at its final
destination without being resent (route success)
Lemma 8 The probability that at least one of the EN neighbors
of X can deliver a packet on behalf of X is given by:
\Theta
Proof: Let Y be the direct receiver of the packet sent by X and
let W be a witness of X. In order for W to be able to pass the
packet to Y, the links (X,W) and (W,Y) must not be broken and
must be bidirectional. That is, the probability that W can pass
the packet to Y and then inform X is
Let H be a discrete r.v. representing the number of witness
hosts which are able to help the packet from X to Y. H has a
binomial distribution, given by:
which means that the probability that at least one witness can
help a packet from X to reach Y is:
Theorem 9 The probability that a link transmission succeeds
in WAR is:
Proof: A direct transmission from host X to host Y succeeds
in WAR if Y receives the packet either from X or from a witness
of X and then X is informed about the success. That is, the
transmission succeeds if:
1. the link is not broken and it is bidirectional, or
2. the link is not broken, it is unidirectional and a witness
delivers the packet, or
3. the link is broken but a witness delivers the packet.
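The displayed formulas of Lemma 8 and Theorem 9 were lost in this copy; the sketch below encodes one plausible reading of them (a single witness helps only if both of its links are unbroken and bidirectional, and link success is the sum of the three disjoint cases above). The function names are ours.

def witness_help_prob(p_b, p_u, e_n):
    # Lemma 8 (as read here): per-witness success requires links (X,W) and (W,Y)
    # to be neither broken nor unidirectional; at least one of E_N witnesses helps.
    per_witness = ((1.0 - p_b) * (1.0 - p_u)) ** 2
    return 1.0 - (1.0 - per_witness) ** e_n

def link_success_war(p_b, p_u, e_n):
    # Theorem 9 (as read here): sum of the three disjoint cases listed in the proof.
    p_w = witness_help_prob(p_b, p_u, e_n)
    return ((1.0 - p_b) * (1.0 - p_u)      # case 1: not broken, bidirectional
            + (1.0 - p_b) * p_u * p_w      # case 2: not broken, unidirectional, witness helps
            + p_b * p_w)                   # case 3: broken, witness helps

print(link_success_war(p_b=0.1, p_u=0.2, e_n=8))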
Theorem 10 Given the recovery depth for a packet, the
probability that a route recovery succeeds in WAR is:
C A
Proof: Let X be the host which detects the route problem and
let k be the number of hops the packet still needs to travel.
At the first step of the recovery, X sends the recovery packet
to all its neighbors. The probability that none of the remaining
k hosts on the route is in the neighborhood of X is p F 0
At step two, the probability that none of remaining hosts along
the route is among those queried is
EN neighbors of host X attempt forward the recovery request).
Similarly, at step i we have p F . Therefore, the
probability that the recovery does not succeed at all is:
from which we get
Let P be a r.v. representing the position along the route where
the error occurs. Since all links are equally likely to present
problems, P has a uniform distribution with mean E_L/2. Thus,
we can substitute k = E_L/2 in the above equation, which completes
the proof.
Theorem 11 If the average route length in the network is EL ,
the probability that a packet is successfully routed by WAR to
its final destination in one attempt is:
Proof: A packet arrives at its destination in one attempt if it
successfully passes all the links along the route without being
resent by the original host. That is, it either passes each link
without error, or, if an error occurs, the recovery mechanism
corrects it. Therefore, we have:
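A plausible reading of Theorem 11 is that each of the E_L links is either passed directly or repaired by the recovery mechanism; the sketch below encodes that reading, treating the recovery probability p_R of Theorem 10 as a given input.

def route_success_war(p_l, p_r, e_l):
    # Theorem 11 (as read here): every one of the E_L links is either passed directly
    # (probability p_l) or its failure is repaired by the recovery mechanism (p_r).
    return (p_l + (1.0 - p_l) * p_r) ** e_l

# p_r would come from Theorem 10; here it is just an input parameter.
print(route_success_war(p_l=0.85, p_r=0.9, e_l=6))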
3.4.1 Traffic generated to successfully route a data
packet
By the traffic generated to successfully route a data packet
Ψ, we understand the total amount of data and control packets
sent over the wireless medium from the moment Ψ is sent by the
source host until the moment it is received by the destination
host.
We first analyze the traffic needed to deliver a packet over a
link.
Table 2 summarizes the notations used to distinguish between
various types of traffic.
Name     Description
T_DATA   the size of a data packet.
T_ACK    the size of an explicit ACK packet.
T_RTS    the size of an RTS (request to send) packet.
T_CTS    the size of a CTS (clear to send) packet.

Table 2. Traffic associated with data and control packets
The traffic generated to deliver a data packet over the direct link
between host X and host Y is:
where Twitness is the traffic generated by the witness hosts to
assist the delivery of a data packet from X to Y.
Using Eq.8, we derive:
with T no help and T help explained below.
If the data packet sent by X arrives at Y without any help from
the witness hosts (with probability (1 - p_B)(1 - p_U), from
Eq. 8), then the (worst case) traffic generated by witnesses is
given by:
since all witnesses which are aware of the success will attempt
to send an ACK to X.
If the packet needs help from the witness hosts (with probability
1 - (1 - p_B)(1 - p_U), from Eq. 8) to reach Y (or the next host
along the route, say Z), then the traffic generated if the k th witness
succeeds is:
Hence, the traffic generated by witnesses to help a packet is:
where p_{W_i} is the probability that witness i succeeds in
delivering the packet on behalf of X. The value of p_{W_i} can be
approximated by p_W / E_N (see Eq. 7), which leads to:
3.4.2 Traffic generated to recover from a link error
The traffic generated to recover from a link error depends on
the depth of the recovery (which is controlled by the Recovery
Depth field within the message). The recovery process is a local
broadcast, for up to Recovery Depth steps. Thus:
Even though in most cases Recovery Depth should be 2 or 3, we
can assume that an upper bound on the number of steps needed
to reach every host in the network is dn=EN e (n is defined in
Table
1). Hence, an upper bound on Trecovery is:
Trecovery - T
4 Performance Comparison WAR vs. DSR
We are interested in two values: the probability that a packet
is routed without errors (that is, first attempt succeeds) and the
traffic required to successfully deliver a packet (considering
that multiple attempts may be necessary).
4.1 Probability of error-free routing
As shown in Theorem 9, the probability of link success
for WAR is p_{L,WAR} (Eq. 8); for DSR, this
probability is
p_{L,DSR} = (1 - p_B)(1 - p_U).                      (22)
Further, the probability that a packet is successfully routed to
its destination without being resent in WAR is p_{S,WAR}
(Eq. 12). For DSR, this probability equals:
p_{S,DSR} = (p_{L,DSR})^{E_L}.                       (23)
Figure 3 shows a comparison between the performance of
WAR and DSR in terms of the probability that a packet arrives
at its final destination from the first attempt. DSR's performance
degrades as the route length increases and the probability
of successful delivery from the first attempt drops to almost
zero for route lengths greater than 5. On the other hand,
for recovery depths greater than 1, WAR exhibits a different
behavior: it improves its performance as the route length in-
creases. The probability that WAR delivers a packet from the
first attempt approaches 1 for routes longer than 5 as long as
the recovery depth is 3. This suggests that WAR gains significant
performance even with a very small extra-cost (i.e. a very
large, expensive recovery depth does not seem to be necessary).
4.2 Traffic generated to successfully route a
data packet
Using eq. 6 and the values computed in equations 14, 21, 8 and
12 , we have that the traffic generated by WAR to successfully
route a packet is:
link
Trecovery
For DSR, we have:
with p_{L,DSR} and p_{S,DSR} from Eq. 22 and Eq. 23, respectively.
Although WAR generates more traffic at link level in order to
pass a data packet between neighboring hosts, the overall traffic
generated to successfully route the packet to its final destination
is several orders of magnitude smaller than the one generated
by DSR. Figure 4 shows a comparison between WAR
and DSR for route lengths between 1 and 10. While still close
to the traffic generated by WAR for small routes (one to five
hops), DSR's traffic grows extremely fast as we increase the
route length, because the probability of encountering problems
is higher for longer routes, and DSR does nothing to reduce it.
It is clear from Fig. 4 that WAR maintains low traffic (and implicitly
bandwidth consumption) for all route lengths as long as
the recovery depth is greater than 1.
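The qualitative behaviour described here can be reproduced with a small sweep over route lengths; the parameter values below are invented and are not the ones used for Figures 3 and 4.

def route_success(p_link, e_l, p_recover=0.0):
    # Per-link success is either a direct success or a recovered failure.
    return (p_link + (1.0 - p_link) * p_recover) ** e_l

p_b, p_u = 0.1, 0.2
p_l_dsr = (1.0 - p_b) * (1.0 - p_u)                            # DSR: unbroken bidirectional link
p_w = 1.0 - (1.0 - ((1 - p_b) * (1 - p_u)) ** 2) ** 8          # ~8 neighbors, as in the sketch above
p_l_war = p_l_dsr + (1 - p_b) * p_u * p_w + p_b * p_w

for e_l in range(1, 11):
    print(e_l,
          round(route_success(p_l_war, e_l, p_recover=0.9), 3),  # WAR with local recovery
          round(route_success(p_l_dsr, e_l), 3))                 # DSR, no local recovery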
[Figure 3: three surface plots of the probability of success from the
first attempt (two for WAR, one for DSR) as a function of route length
and recovery depth.]
Figure 3. Probability of packet delivery success.
5 Conclusions
Our analysis, exemplified on WAR and DSR, shows that unless
some local error recovery technique is employed, reactive
protocols might not be suitable for large ad hoc networks. We
have experimented with various network sizes and parameters,
and the results exhibited similar trends regardless of how we
varied these parameters. As network size (and implicitly route
length) increases, the performance of protocols that use end-to-end
recovery (like DSR in our study) degrades rapidly, and the
amount of network resources consumed per packet routed increases
equally fast. On the other hand, although local recovery
requires more network resources at link level, the overall performance
and resource consumption is only slightly affected by
the increase in the route length. The results we obtained analytically
indicate that WAR, which uses local error correction at
two levels (first by involving witness hosts in the routing process
and secondly by implementing an error recovery scheme
to cope with cases when witnesses cannot help) provides a scalable
routing solution for wireless ad hoc networks.
--R
A Witness-Aided Routing Protocol for Mobile Ad-Hoc Networks with Unidirectional Links
Dynamic source routing in ad hoc wireless networks.
Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile comput- ers
A highly adaptive distributed routing algorithm for mobile wireless networks.
Signal stability based adaptive routing (SSA) for ad-hoc mobile networks
The Cambridge Ad-Hoc Mobile Routing Protocol
A review of current routing protocols for ad hoc mobile wireless networks.
An analysis of routing techniques for mobile ad hoc networks.
A performance comparison of multi-hop wireless ad hoc network routing protocols
--TR
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers
A performance comparison of multi-hop wireless ad hoc network routing protocols
Wireless ATM and Ad-Hoc Networks
A Witness-Aided Routing Protocol for Mobile Ad-Hoc Networks with Unidirectional Links
Ad-hoc On-Demand Distance Vector Routing
A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks
--CTR
Giovanni Resta , Paolo Santi, An analysis of the node spatial distribution of the random waypoint mobility model for ad hoc networks, Proceedings of the second ACM international workshop on Principles of mobile computing, October 30-31, 2002, Toulouse, France
Douglas M. Blough , Giovanni Resta , Paolo Santi, A statistical analysis of the long-run node spatial distribution in mobile ad hoc networks, Proceedings of the 5th ACM international workshop on Modeling analysis and simulation of wireless and mobile systems, September 28-28, 2002, Atlanta, Georgia, USA
Douglas M. Blough , Giovanni Resta , Paolo Santi, A statistical analysis of the long-run node spatial distribution in mobile ad hoc networks, Wireless Networks, v.10 n.5, p.543-554, September 2004
Christian Bettstetter , Giovanni Resta , Paolo Santi, The Node Distribution of the Random Waypoint Mobility Model for Wireless Ad Hoc Networks, IEEE Transactions on Mobile Computing, v.2 n.3, p.257-269, March | mobile ad hoc network;performance analysis;routing protocol |
347309 | On the Perturbation Theory for Unitary Eigenvalue Problems. | Some aspects of the perturbation theory for eigenvalues of unitary matrices are considered. Making use of the close relation between unitary and Hermitian eigenvalue problems a Courant--Fischer-type theorem for unitary matrices is derived and an inclusion theorem analogous to the Kahan theorem for Hermitian matrices is presented. Implications for the special case of unitary Hessenberg matrices are discussed. | Introduction
. New numerical methods to compute eigenvalues of unitary
matrices have been developed during the last ten years. Unitary QR-type methods
[19, 9], a divide-and-conquer method [20, 21], a bisection method [10], and some special
methods for the real orthogonal eigenvalue problem [1, 2] have been presented.
Interest in this task arose from problems in signal processing [11, 29, 33], in Gaussian
quadrature on the unit circle [18], and in trigonometric approximations [31, 16] which
can be stated as eigenvalue problems for unitary matrices, often in Hessenberg form.
As those numerical methods exploit the rich mathematical structure of unitary ma-
trices, which is closely analogous to the structure of Hermitian matrices, the methods
are efficient and deliver very good approximations to the desired eigenvalues.
There exist, however, only a few perturbation results for the unitary eigenvalue
problem, which can be used to derive error bounds for the computed eigenvalue ap-
proximations. A thorough and complete treatment of the perturbation aspects associated
with the numerical methods for unitary eigenvalue problems is still missing.
The following perturbation results have been obtained so far. If U and e
U are
unitary matrices with spectra
respectively, we can
arrange the eigenvalues in diagonal matrices and e
, respectively, and consider as a
measure for the distance of the spectra
d (oe(U); oe( e
U
where the minimum is taken over all permutation matrices P and the norm is either
the spectral or the Frobenius norm. By the Hoffman-Wielandt theorem (see, e.g.,[34])
we get
dF (oe(U); oe( e
U
U
Bhatia and Davis [5] proved the corresponding result for the spectral norm
U
Schmidt, Vogel & Partner Consult, Gesellschaft fur Organisation und Managementberatung
mbH, Gadderbaumerstr. 19, 33602 Bielefeld
y Universitat Bremen, Fachbereich 3 - Mathematik und Informatik, 28334 Bremen, Germany,
email: angelika@math.uni-bremen.de
z Universitat Bremen, Fachbereich 3 - Mathematik und Informatik, 28334 Bremen, Germany,
email: heike@math.uni-bremen.de
Elsner and He consider a relative error in [15]. They use the measure
e
d (oe(U); oe( e
U
where again They prove that
e
d (oe(U); oe( e
U
U)jj
is the Cayley transformation of U (assuming here
that
To each eigenvalue of U , where \Gamma1 62 oe(U ), we can associate an angle ' by
defining
with \Gamma=2 ' ! =2. It is the angle formed by the line from -1 through and the
real axis (see also Section 2). With respect to their angles the eigenvalues of U and
e
U have a natural ordering on the unit circle. Elsner and He give sine- and tangent-
interpretations of the above inequality in terms of these angles. Furthermore they
show that with respect to a certain cutting point i on the unit circle the eigenvalues
of U and e
U have a natural ordering f j (i)g and f e j (i)g on the unit circle such that
An interlacing theorem for unitary matrices is also presented in [15], showing that the
eigenvalues of suitably modified principal submatrices of a unitary matrix interlace
those of the complete matrix on the unit circle (see Section 2).
In this paper we consider further aspects of the perturbation problem for the
eigenvalues of a unitary matrix U . In Section 2 we show how the angles i are related
to the eigenvalues of the Cayley transform of U . With the aid of this relation we
can give a min-max-characterization for the angles of U 's eigenvalues in analogy to
the Courant-Fischer theorem for Hermitian matrices. We also show that tangents of
these angles can be characterized by usual Rayleigh quotients corresponding to the
generalized eigenvalue problem
Furthermore we prove a Kahan-like inclusion theorem showing that the eigenvalues of
a certain modified leading principal submatrix of U determine arcs on the unit circle
such that each arc contains an eigenvalue of U . In applications unitary matrices are
often of Hessenberg form. In Section 3 we recall that a unitary unreduced Hessenberg
matrix H has a unique parameterization reflection
parameters
H completely. We show the implications of the results in Section 2 for the special
case of unitary Hessenberg matrices. In particular it will be seen that the modified
kth leading principal submatrix in this special case is just H(fl
We discuss the dependence of the eigenvalues on this last reflection parameter
i. Finally Section 4 will give numerical examples which elucidate the statements
proved in Section 3.
2. Perturbation Results for unitary Matrices. Unitary matrices have a rich
mathematical structure that is closely analogous to that of Hermitian matrices. In
this section we first discuss the intimate relationship between unitary and Hermitian
matrices which indicates that one can hope to find unitary analogues for the good numerical
methods and for the theoretical results that exist for the symmetric/Hermitian
eigenvalue problem. We will adapt some eigenvalue bounds for Hermitian matrices to
the unitary case.
Let ρ be a complex unimodular number. The Cayley transformation with respect
to ρ maps the unitary matrices whose spectrum does not include ρ onto the Hermitian
matrices. The Cayley transformation with respect to ρ for a unitary matrix U ∈ C^{n×n}
is defined as
C(U) = i (ρ I_n - U)^{-1} (ρ I_n + U),
where ρ is not an eigenvalue of U and i = √-1. Here I_n denotes the n × n identity matrix.
A simple calculation shows that C(U) is Hermitian. The mapping is one-to-one and
the inverse Cayley transformation with respect to ρ for a Hermitian matrix X is given
by
C^{-1}(X) = ρ (X - i I_n)(X + i I_n)^{-1}.
The symmetric/Hermitian eigenproblem has been extensively studied, see, e.g. [28,
17, 24, 26]. Due to this relation between Hermitian and unitary matrices, one can
hope to get similar results for unitary matrices.
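The following NumPy sketch illustrates the Cayley transformation numerically, using the reconstruction given above (one standard convention; the paper's sign convention may differ). For ρ = -1 the eigenvalues of C(U) coincide with tan(arg(λ_k)/2), which is the ordering device used below.

import numpy as np

def random_unitary(n, seed=0):
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

def cayley(u, rho=-1.0):
    # One standard Cayley transform w.r.t. the unimodular cutting point rho:
    # C(U) = i (rho*I - U)^{-1} (rho*I + U); defined whenever rho is not an eigenvalue of U.
    n = u.shape[0]
    eye = np.eye(n)
    return 1j * np.linalg.solve(rho * eye - u, rho * eye + u)

U = random_unitary(5)
X = cayley(U)
print(np.allclose(X, X.conj().T))                      # C(U) is Hermitian
print(np.allclose(np.sort(np.linalg.eigvalsh(X)),      # eigenvalues are tan(arg(lambda)/2)
                  np.sort(np.tan(np.angle(np.linalg.eigvals(U)) / 2))))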
With the aid of the Cayley transformation we can order the eigenvalues con-
veniently. Let be the eigenvalues of U numbered starting at ae moving
counterclockwise along the unit circle. Let 1 be the eigenvalues of
For simplicity assume that ae = \Gamma1. Then
Cnf0g the argument of z, arg(z) 2 (\Gamma; ] is defined
by
arctan( Im(z)
arctan( Im(z)
The Cayley transformation of k is the tangent of the angle '
is formed by the real axis and the straight line through k and \Gamma1:
l k
Hence it is reasonable to define
This also gives a complete ordering of the points on the unit circle with respect to
the cutting point \Gamma1. Note that the complete ordering excludes the cutting point \Gamma1.
For a different cutting point the orders of the eigenvalues are only changed cyclically.
are complex unimodular numbers such that
denote the open arc from the point i 1 to the point i 2 on the unit circle (moving
counterclockwise).
The Courant-Fischer theorem (see, e.g., [17, Theorem 8.1.2]) characterizes the
eigenvalues of Hermitian matrices by Rayleigh quotients. A similar characterization
can be given for the eigenvalues of unitary matrices. Let U 2 C n\Thetan be a unitary
matrix with eigenvalues . Assume that \Gamma1 is not an eigenvalue of U and
number the eigenvalues starting at \Gamma1 moving counterclockwise along the unit circle.
vn be an orthonormal basis in C n\Thetan of eigenvectors of U . Let z 2 C n with
1. Then we can expand z as
From
Because of
we see that the Rayleigh quotient z H Uz lies in the convex polygon which is spanned
by the eigenvalues of U .
l 1
l 2
l 3
l 4
l 5
z H Uz
Theorem 2.1. With the notation given above we obtain for
min
denotes the set of all k-dimensional subspaces of C n .
In particular,
Proof. Let be an orthonormal basis of eigenvectors of U . Let V 2
Vn\Gammak+1 . Then z be a vector in this intersection, jjzjj
Then
Hence, the Rayleigh quotient z H Uz lies in the convex polygon spanned by the eigen-values
Therefore
and
min
Now consider the subspace of dimension which is spanned by v
vector z; jjzjj this subspace can be written as
Hence, the Rayleigh quotient z H Uz lies in the convex polygon spanned by the eigen-values
Therefore
and
min
The second equation can be shown analogously.
Corollary 2.2. With the notation given above we define for z 2 C n with
Then
and for
min
Proof. A simple calculation yields (2.2). The rest of the corollary follows from
Theorem 2.1 and the monotonicity of the function tan in (\Gamma
The corollary shows that the angles ' k can be characterized by usual Rayleigh
quotients. R(z) can be interpreted as the Rayleigh quotient corresponding to the
generalized eigenvalue problem
Since U is unitary, -(I n +U H )(I n \Gamma U) is Hermitian and (I n +U H )(I n +U) is Hermitian
and positive definite. (2.3) is equivalent to the eigenvalue problem
for the Cayley transformation of U .
Remark 2.3. For ease of notation the above theorem and corollary are formulated
for the case that ae = \Gamma1 is not an eigenvalue of U . This restriction is not necessary,
one can prove the corresponding statements for any cutting point ρ ∈ C, |ρ| = 1, that is not
an eigenvalue of U .
In [15], the Cauchy interlacing theorem for Hermitian matrices is generalized to
the unitary case. The Cauchy interlace theorem shows that the eigenvalues of the k \Theta k
leading principal submatrix of a Hermitian matrix X interlace the eigenvalues of X .
Adapting this theorem to the unitary case, one has to deal with the problem that
leading principal submatrices of unitary matrices are in general not unitary and that
their eigenvalues lie inside the unit circle. In [15] it is shown that certain modified
leading principal submatrices of a unitary matrix U have the property that their
eigenvalues interlace with those of U .
Theorem 2.4. [15, Theorem 5.2 and 5.3] Let
U = [ U_11  U_12 ]
    [ U_21  U_22 ]                                                (2.4)
be an n × n unitary matrix, U_11 the k × k leading principal submatrix of U , and
U_k = U_11 - U_12 (U_22 - ρ I_{n-k})^{-1} U_21 ,                   (2.5)
where ρ, |ρ| = 1, is not an eigenvalue of U . Then U_k is unitary. Let
λ_1(ρ), ..., λ_n(ρ) be the eigenvalues of U and μ_1(ρ), ..., μ_k(ρ)
be those of U_k, ordered with respect to ρ. Then the eigenvalues of U_k interlace those
of U on the unit circle with respect to ρ.
U k is called the modified kth leading principal submatrix of U . Furthermore,
analogues of the Hoffman-Wielandt theorem and a Weyl-type theorem are derived in
[15].
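The modified leading principal submatrix of Theorem 2.4 can be formed and tested numerically; the sketch below uses formula (2.5) as reconstructed above and prints the eigenvalue angles of U_k and U relative to the cutting point ρ so that the interlacing can be inspected. The helper names are ours.

import numpy as np

def random_unitary(n, seed=1):
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

def modified_leading_submatrix(u, k, rho):
    # U_k = U11 - U12 (U22 - rho*I)^{-1} U21, i.e. rho*I_k plus the Schur complement
    # of (U22 - rho*I) in (U - rho*I); rho must not be an eigenvalue of U.
    u11, u12 = u[:k, :k], u[:k, k:]
    u21, u22 = u[k:, :k], u[k:, k:]
    n_k = u.shape[0] - k
    return u11 - u12 @ np.linalg.solve(u22 - rho * np.eye(n_k), u21)

n, k, rho = 6, 3, np.exp(1j * 0.3)      # any unimodular rho that avoids the spectrum of U
U = random_unitary(n)
Uk = modified_leading_submatrix(U, k, rho)
print(np.allclose(Uk.conj().T @ Uk, np.eye(k)))     # U_k is unitary (Theorem 2.4)
print(np.sort(np.angle(np.linalg.eigvals(Uk) / rho)))   # angles of U_k relative to rho
print(np.sort(np.angle(np.linalg.eigvals(U) / rho)))    # angles of U relative to rho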
With the help of Theorem 2.4 one can specify for each eigenvalue of a modified
leading principal submatrix U k an arc on the unit circle which contains that eigenvalue.
These bounds are fairly rough, especially if k is much smaller than n. This result is of
theoretical nature, because in practice we are more interested in the question, whether
the arc contains an eigenvalue of U or not.
The same problem arises in the Hermitian case. Lehmann and Kahan derived
inclusion theorems which consider this problem (see, e.g., [28] and the references
therein). A special case of their results is
Theorem 2.5. [28, Theorem on page 196] Let X 2 C n\Thetan be a Hermitian matrix.
Partition X as
Let i be the eigenvalues of X
then each interval [ contains an eigenvalue of X, where
The following theorem states the analogous result for unitary matrices.
Theorem 2.6. Let U 2 C n\Thetan be a unitary matrix and let U be partitioned as
in (2.4). Let ae 2 C; be not an eigenvalue of U . Define a unitary matrix
U k as in (2.5) with eigenvalues . The eigenvalues are numbered starting
at ae moving counterclockwise along the unit circle. If rank(U 21 then each arc
on the unit circle, contains at least one eigenvalue of U , where
Proof. Since ae is not an eigenvalue of U , it is not an eigenvalue of U 22 : Assume
ae is an eigenvalue of U 22 . Then there is a normalized eigenvector x 2 C n\Gammak such that
U 22
U
x
U 22 x
aex
As and U is unitary, U 12 x has to be zero. But this would imply that
ae is an eigenvalue of U in contradiction to our general assumption. Hence ae is not an
eigenvalue of U 22 .
U k is defined as
can be interpreted as the Schur complement of U 22 \Gamma aeI n\Gammak in U \Gamma aeI n .
We can make use of this fact to construct (U \Gamma aeI using the following result of
Duncan [13] (see Corollary 2.4 in [27])
Let A 2 C n\Thetan be partioned as
Let A and H be nonsingular, then is the
Schur complement of H in A, T is nonsingular and
\GammaH
We obtain
In particular, (U is the k \Theta k leading principal submatrix of (U \Gamma aeI
Now we consider the Cayley transformation with respect to ae of U
This yields
We partition X as we did U :
is the k \Theta k leading principal submatrix of X . From (2.6) it follows that
Therefore, X k is the Cayley transformation of U k . Further we obtain from (2.6):
If rank(U 21 as the other two matrices in the product have
full rank.
Now we can use Theorem 2.5 to obtain that each interval formed by two eigen-values
of X k contains at least one eigenvalue of X . We have seen in (2.1) that the
eigenvalues of X and X k can be obtained from those of U and U k via the Cayley trans-
formation. As the Cayley transformation is monotone, this yields: each arc
on the unit circle, contains at least one eigenvalue of U . For the two
outer arcs (ae; 1 ae) the statement follows directly from Theorem 2.4.
The last result we mention in this section clarifies the question of how the eigen-values
of a unitary matrix change if the matrix is modified by a unitary differing from
I only by rank one. For the Hermitian case, the answer is given, e.g., in [17, chapter
12.5.3]. For the unitary case we obtain
Theorem 2.7. Let U, S ∈ C^{n×n} be unitary matrices with S such that rank(I_n - S) = 1.
Then the eigenvalues of U and US interlace on the unit circle.
Proof. See [4, section 6].
3. Unitary upper Hessenberg Matrices. It is well known that any (unitary)
n \Theta n matrix can be transformed to an upper Hessenberg matrix H by a unitary
similarity transformation Q. If the first column of Q is fixed and H is an unreduced
upper Hessenberg matrix with positive subdiagonal elements (that is h i+1;i ? 0),
then the transformation is unique. Any n \Theta n unitary upper Hessenberg matrix
with nonnegative subdiagonal elements can be uniquely parameterized by
parameters. This compact form is used in [1, 3, 9, 11, 14, 19, 20, 21, 22, 23, 32] to
develop fast algorithms for solving the unitary eigenvalue problem.
Let
with
e
with
The product
oe
is a unitary upper Hessenberg matrix with positive subdiagonal elements. Conversely,
n\Thetan is a unitary upper Hessenberg matrix with positive subdiagonal elements,
then it follows from elementary numerical linear algebra that one can determine matrices
Gn such that e
I . Since H as a unitary
matrix has a unique inverse, this has to be e
1 . Thus H has a unique
factorization of the form
Gn (fl n
The Schur parameters ffl k g n
k=1 and the complementary Schur parameters foe k g n
can
be computed from the elements of H by a stable O(n 2 ) algorithm [19]. In statistics
the Schur parameters are referred to as partial correlation coefficients and in signal
processing as reflection coefficients [2, 11, 12, 25, 29, 30, 33].
If oe we have the direct sum decomposition
Hence, in general oe 1 oe 2 :::oe assumed if the factorization (3.1) is used to
solve a unitary eigenvalue problem. Such a unitary upper Hessenberg matrix is called
unreduced. If is an eigenvalue of an unreduced Hessenberg matrix, then its geometric
multiplicity is one [17, Theorem 7.4.4]. Since unitary matrices are diagonalizable, no
eigenvalue of an unreduced unitary upper Hessenberg matrix is defective, that is, the
eigenvalues of an unreduced unitary upper Hessenberg matrix are distinct.
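The Schur parameterization can be made concrete as follows; the sketch uses one common convention (Gragg's) for the factors G_k(γ_k) and the trailing factor, which may differ from the paper's convention in signs, and the parameter values are arbitrary.

import numpy as np

def reflector_factor(n, k, gamma):
    # G_k(gamma_k): identity except for a 2x2 block in rows/columns k, k+1,
    # with complementary parameter sigma_k = sqrt(1 - |gamma_k|^2) > 0.
    sigma = np.sqrt(1.0 - abs(gamma) ** 2)
    g = np.eye(n, dtype=complex)
    g[k:k + 2, k:k + 2] = [[-gamma, sigma], [sigma, np.conj(gamma)]]
    return g

def unitary_hessenberg(gammas):
    # H = G_1(gamma_1) ... G_{n-1}(gamma_{n-1}) Gtilde_n(gamma_n), with |gamma_k| < 1
    # for k < n and |gamma_n| = 1; the subdiagonal of H carries the positive sigma_k.
    n = len(gammas)
    h = np.eye(n, dtype=complex)
    for k, gamma in enumerate(gammas[:-1]):
        h = h @ reflector_factor(n, k, gamma)
    h[:, -1] *= -gammas[-1]              # Gtilde_n(gamma_n) = diag(I_{n-1}, -gamma_n)
    return h

gam = [0.3 + 0.2j, -0.5j, 0.1 + 0.0j, 0.4 + 0.1j, np.exp(0.7j)]   # |gamma_5| = 1
H = unitary_hessenberg(gam)
print(np.allclose(H.conj().T @ H, np.eye(5)))     # H is unitary
print(np.allclose(np.tril(H, -2), 0))             # H is upper Hessenberg
print(np.all(np.diag(H, -1).real > 0))            # positive subdiagonal entries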
We will adapt the general theorems given in the last section to the more specific
case of unitary upper Hessenberg matrices. Let H 2 C n\Thetan be a unitary upper Hessenberg
matrix with positive subdiagonal elements,
as
where H 11 is the k \Theta k leading principal submatrix. From Theorem 2.4 we obtain that
the modified kth leading principal submatrix of H , H
is unitary (if ae 2 C; not an eigenvalue of H). As H k is a unitary upper
Hessenberg matrix, we can factor H
Taking
a closer look at H k reveals that H k differs from H 11 only in the last column. Hence the
modification of H 11 to H k is equivalent to a modification of the reflection coefficient
The following theorem by Bunse-Gerstner and He characterizes the correct choice of
the parameter
Theorem 3.1. Let n\Thetan be a unitary upper Hessenberg
matrix with positive subdiagonal elements. For
as in (3.2). Let ae 2 C; not be an eigenvalue of H. Define parameters i l (ae);
(3.
Then
be the eigenvalues of H k (fl
respectively, where the eigenvalues are numbered starting at ae moving counterclockwise
along the unit circle. Then for each the eigenvalue i lies on the arc
Proof. The interlace property follows directly from Theorem 2.4. For the rest of
the proof see [10].
Using Theorem 2.6 and 3.1 we obtain
Theorem 3.2. Let n\Thetan be a unitary upper Hessenberg
matrix with positive subdiagonal elements. Let ae 2 C; not be an eigenvalue
of H. For ng let H is defined as
in (3.3). Let be the eigenvalues of H k . Then for each arc
on the unit circle contains at least one eigenvalue of H, where
ae.
Moreover,we obtain
Theorem 3.3. Let n\Thetan be a unitary upper Hessenberg
matrix with positive subdiagonal elements. Then for any i 2 C; there exists a
cutting point ae 2 C; such that the eigenvalues of H
H have the interlace properties with respect to ae on the unit circle given by Theorem
3.1 and 3.2.
Proof. We will show that ae 7! i k (ae) is an automorphism on the unit circle, this
proves the theorem. Note that for unitary upper Hessenberg matrices with positive
subdiagonal elements we have |γ_k| < 1 for 1 ≤ k ≤ n - 1 and |γ_n| = 1.
Obviously
ae 7! i
is bijective on the unit circle. The same is true for the mapping
2:
Hence ae 7! i k (ae) is a one-to-one mapping of the unit circle onto itself.
The statement of the above theorem can be summarized as follows: Any leading
principal submatrix of a unitary upper Hessenberg matrix with positive lower
subdiagonal elements can be modified to be unitary by replacing the last reflection
coefficient with a parameter on the unit circle. No matter how this parameter is
chosen, there is always a cutting point ae on the unit circle such that the eigenvalues
of the modified leading principal submatrix and those of the entire matrix satisfy the
interlace properties given by Theorem 3.1 and 3.2.
Disregarding the cutting point ae and the two arcs formed with it, Theorem 3.3 implies
the following corollary.
Corollary 3.4. Let n\Thetan be a unitary upper Hessenberg
matrix with positive subdiagonal elements. For
ng. Then every arc on the unit circle formed
by two eigenvalues of H k contains an eigenvalue of H.
In particular the above theorems show that the eigenvalues of two consecutive
modified leading principal submatrices H k and H k+1 of a unitary upper Hessenberg
matrix with positive subdiagonal elements interlace on the unit circle. More specifi-
cally, consider the modified leading principal submatrices H
and H n. The eigenvalues
of H k and H k+1 interlace with respect to the cutting point ae on the unit circle
where ae is given by
The remaining question is: how strongly do the eigenvalues of H k (fl
depend of the choice of i? We present some results on this dependence on the last
reflection parameter.
Theorem 3.5. Let H a
upper Hessenberg matrices with positive subdiagonal elements, ji a
1. The eigenvalues of H a and H b interlace on the unit circle.
2. (H a where the eigenvalue variation (U; B) is defined by
i2f1;:::;ng
permutation of
the i 's being the eigenvalues of U and the i 's those of B.
3. Let a
n and b
n be the eigenvalues of H a and H b . Let
be the Schur decomposition of H a , S
i;j=1 . Then for
min
a
ji a
Proof.
1. We have
S is unitary and rank(In \Gamma 1. According to Theorem 2.7, the eigenvalues
of H a and H b interlace on the unit circle.
2. As the matrices H a and H b differ only in the last column we have
Gn (i a
Since H a and H b are unitary, the statement 2: follows from the following
theorem of Bhatia/Davis [5]:
For all constant multiplies of two unitary
matrices Q and V we have
(For a completely different proof and extension of the result to multiples of
unitaries see [6]. When U and B are Hermitian, the above inequality is a
classical result of Weyl).
3. H b is unitary and therefore unitarily diagonalizable. The first inequality
follows directly from the following easy to prove result [8, Satz 1.8.14]:
Let A 2 C n\Thetan be diagonalizable,
and
Then
min
i2f1;:::;ng
Furthermore we obtain
Gn (i a
Hence, eigenvalues of a unitary upper Hessenberg matrix, whose eigenvectors have
a small last component, are not sensitive to changes in the last reflection parameter.
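This sensitivity statement is easy to observe numerically. The sketch below builds two unitary upper Hessenberg matrices that differ only in the last parameter and compares, for every eigenvalue, its minimal movement with the residual bound |s_{n,i}| · |γ_n^a - γ_n^b| (a standard bound for normal matrices, used here in place of the garbled inequality of Theorem 3.5); the parameterization follows the convention sketched in Section 3.

import numpy as np

def unitary_hessenberg(gammas):
    # Product of 2x2 reflector factors (same convention as the earlier sketch);
    # the last parameter gamma_n only rescales the last column.
    n = len(gammas)
    h = np.eye(n, dtype=complex)
    for k, g in enumerate(gammas[:-1]):
        s = np.sqrt(1.0 - abs(g) ** 2)
        f = np.eye(n, dtype=complex)
        f[k:k + 2, k:k + 2] = [[-g, s], [s, np.conj(g)]]
        h = h @ f
    h[:, -1] *= -gammas[-1]
    return h

interior = [0.2 + 0.1j, -0.4j, 0.3 + 0.0j, 0.5 + 0.2j, -0.1 + 0.3j]
ga, gb = np.exp(0.4j), np.exp(0.9j)          # two choices of the last reflection parameter
Ha = unitary_hessenberg(interior + [ga])
Hb = unitary_hessenberg(interior + [gb])

wa, Sa = np.linalg.eig(Ha)                   # unit-norm eigenvectors of H^a in the columns of Sa
wb = np.linalg.eigvals(Hb)
for i in range(len(wa)):
    move = np.min(np.abs(wb - wa[i]))
    bound = abs(Sa[-1, i]) * abs(ga - gb)    # residual bound for normal matrices
    print(f"eigenvalue {i}: moved {move:.2e}  (bound {bound:.2e})")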
4. Numerical Examples. Numerical experiments are presented to elucidate
the statements of Section 3. The eigenvalues of a unitary upper Hessenberg matrix
H are compared with the eigenvalues of modified kth leading principal submatrices
H k for different dimensions k. The essential statements of Section 3 can be observed
clearly:
ffl Between two eigenvalues of H k on the unit circle there lies an eigenvalue of
(Corollary 3.4).
ffl The eigenvalues of unitary upper Hessenberg matrices, whose corresponding
eigenvectors have a small last component, are not sensitive against changes
of the last reflection coefficient. (Theorem 3.5).
All computations were done using MATLAB 1 on a SUN SparcStation 10.
A unitary upper Hessenberg matrix constructed
from 20 randomly chosen reflection coefficients C. The eigenvalues
j of H lie randomly on the unit circle. The eigenvalue j of the modified kth
leading principal submatrices H were computed for different
dimensions
For the first example
was chosen. The eigenvalues of H and H k are
plotted for in the following typical figure. The eigenvalues of H
are marked by 'o', the eigenvalues of H k by '*'.
1 MATLAB is a trademark of The MathWorks, Inc.
For the second example a random complex number chosen. The
following figure displays the same information as before.
Corollary 3.4 states that every arc on the unit circle formed by two eigenvalues of
contains an eigenvalue of H . This can be seen in the above figures. Comparing the
results of the two examples presented, one observes that independent of the choice
of i k the same eigenvalues of H are approximated. In Theorem 3.5 it was proven
that if the last component of an eigenvector of a unitary upper Hessenberg matrix is
small, then the corresponding eigenvalue is not sensitive against changes in the last
reflection coefficient. Individual bounds for the minimal distance of each eigenvalue
a
of H to the eigenvalues b
are given
min
j2f1;:::;ng
a
is the 'th component of the eigenvector for the ith eigenvalue
of H a . This means that if there is an eigenvalue a
of H a such that the last component
of the corresponding eigenvector is small, then any unitary upper Hessenberg matrix
of the form H b will have an eigenvalue b
j that is close to a
The following table reports the minimal distance between each eigenvalue fl k
i of
and the eigenvalues rand
(where randomly chosen as above) as well as the error bounds for
10. The absolute difference between i a and i b was ji a
min
j2f1;:::;ng
6 2.4628e-03 7.1265e-02
9 8.4488e-03 6.0709e-01
Comparing the actual minimal distance with the error bound one observes that the
approximations are much better than the error bound predicts.
The same results can be observed for larger unitary upper Hessenberg matrices
H . Moreover, one can observe that the eigenvalues of the modified leading principal
submatrices H 0
k interlace on the unit circle with respect to a cutting point
ae.
5. Concluding Remarks. In this paper, we have proved that the angles ' k
associated with the eigenvalues j of a unitary matrix U can be characterized by
Rayleigh quotients. An inclusion theorem for the eigenvalues of symmetric matrices
given by Kahan was adapted to the unitary case. We discussed the special case of
unitary Hessenberg matrices, which is important for certain applications. We proved
that every arc on the unit circle formed by two eigenvalues of a modified kth leading
principal submatrix of a unitary upper Hessenberg matrix contains an eigenvalue of
the complete matrix. Results on the dependence of the eigenvalues of unitary upper
Hessenberg matrices on the last reflection coefficient are given.
Parts of this paper (Section 2 and most of Section 3) first appeared in [7]. Bohnhorst
analyses the connection between a unitary matrix U and its Cayley transformation
more closely with the help of structure ranks.
Acknowledgments
. All three authors owe a special thanks to Ludwig Elsner
for stimulating discussions and many helpful suggestions.
--R
On the Eigenproblem for Orthogonal Matrices
An Implementation of a Divide and Conquer Algorithm for the Unitary Eigenproblem
On the spectral decomposition of Hermitian matrices modified by low rank perturbations with applications
A. bound for the
Beitrage zur numerischen Behandlung des unitaren Eigenwertproblems
Schur Parameter Pencils for the Solution of the Unitary Eigenproblem
On a Sturm sequence of polynomials for unitary Hessenberg matrices
Computing Pisarenko Frequency Estimates
Speech modelling and the trigonometric moment problem
Some devices for the solution of large sets of simultaneous linear equations (with an appendix on the reciprocation of partioned matrices)
Global convergence of the QR algorithm for unitary matrices with some results for normal matrices
Perturbation and Interlace Theorems for the Unitary Eigenvalue Prob- lem
Matrix Computation
Positive Definite Toeplitz Matrices
A Divide and Conquer Algorithm for the Unitary Eigenproblem
Convergence of the Shifted QR Algorithm for Unitary Hessenberg Matrices
Matrix Analysis
A Tutorial Review
A Survey of Matrix Theory and Matrix Inequalities
Schur complements and statistics
The Symmetric Eigenvalue Problem
The Retrieval of Harmonics from a Covariance Function
Fast Approximation of Dominant Harmonics by Solving an Orthogonal Eigenvalue Problem
Discrete Least Squares Approximation by Trigonometric Polynomials
Bestimmung der Eigenwerte orthogonaler Matrizen
Duality theory of composite sinusoidal modelling and linear prediction
The algebraic eigenvalue problem
--TR | unitary eigenvalue problem;perturbation theory |
347479 | Speed is as powerful as clairvoyance. | We introduce resource augmentation as a method for analyzing online scheduling problems. In resource augmentation analysis the on-line scheduler is given more resources, say faster processors or more processors, than the adversary. We apply this analysis to two well-known on-line scheduling problems, the classic uniprocessor CPU scheduling problem 1 |ri, pmtn|&Sgr; Fi, and the best-effort firm real-time scheduling problem 1|ri, pmtn| &Sgr; wi( 1- Ui). It is known that there are no constant competitive nonclairvoyant on-line algorithms for these problems. We show that there are simple on-line scheduling algorithms for these problems that are constant competitive if the online scheduler is equipped with a slightly faster processor than the adversary. Thus, a moderate increase in processor speed effectively gives the on-line scheduler the power of clairvoyance. Furthermore, the on-line scheduler can be constant competitive on all inputs that are not closely correlated with processor speed. We also show that the performance of an on-line scheduler is best-effort real time scheduling can be significantly improved if the system is designed in such a way that the laxity of every job is proportional to its length. | Introduction
We consider several well known nonclairvoyant
scheduling problems, including the problem of minimizing
the average response time [13, 15], and best-effort
firm real-time scheduling [1, 2, 3, 4, 8, 11, 12, 18].
(We postpone formally defining these problems until
the next section.) In nonclairvoyant scheduling some
relevant information, e.g. when jobs will arrive in the
future, is not available to the scheduling algorithm A.
The standard way to measure the adverse effect of this
lack of knowledge is the competitive ratio:
max_I A(I) / Opt(I),
where A(I) denotes the cost of the schedule produced
by the online algorithm A on input I, and Opt(I) denotes
the cost of the optimal schedule. The competi-
Supported in part by NSF under grant CCR-9202158.
kalyan@cs.pitt.edu, http://www.cs.pitt.edu/-kalyan
y Supported in part by NSF under grant CCR-9209283.
tive ratio for a problem is then
min_A max_I A(I) / Opt(I),
where the min is over all online algorithms. The standard
way to interpret the competitive ratio is as the
payoff to a game played between an online algorithm
and an all-powerful malevolent adversary that specifies
the input I.
One of the primary goals of any analysis is to identify
what works well in practice. Competitive analysis
has been criticized because it often yields ratios that
are unrealistically high for "normal" inputs and as a
result it can fail to identify the class of online algorithms
that work well. The scheduling problems that
we consider are good examples of this phenomenon
in that their competitive ratios are unbounded, while
there are simple nonclairvoyant algorithms that perform
reasonably well in practice. We explain this phenomenon
by adopting what we call the weak adversary
model, which assumes that the speed of the processor
used by the nonclairvoyant scheduler is (1 + ε) times
the speed of the processor used by the clairvoyant ad-
versary, where ε > 0. We define the ε-weak competitive
ratio of a problem to be:
min_A max_I A_{1+ε}(I) / Opt_1(I),
where the subscripts denote the speed of the processor
used by the corresponding algorithm.
The original motivation for the standard competitive
ratio was to use the divergence of the online al-
gorithm's output from optimal as a measure of the
adverse effect of nonclairvoyance. Not only does the
ffl-weak competitive ratio give us another measure, but
also suggests a practical way to combat the adverse
effect of nonclairvoyance. If a problem has a small ffl-
competitive ratio for some moderate ffl then this
means that a moderate increase in processor speed will
effectively buy the power of clairvoyance. Therefore,
the weak adversary model gives the system designer
a practical way, increasing the speed of the processor,
to improve the performance of the system.
On "normal" inputs, one would intuitively expect
that the offline performance of the system would not
degrade drastically if the speed of the processor is increased
slightly. If an algorithm has a bounded ffl-
competitive ratio then it has a bounded competitive
ratio for all inputs I where Opt 1 (I)=Opt1+ffl (I) is
bounded. Thus an algorithm with a bounded ffl-weak
competitive ratio has a bounded competitive ratio for
all inputs that fall under this formulation of "normal".
We give algorithms for these scheduling problems
that have ffl-weak competitive ratios that are solely a
function of ffl, and not the input I. Furthermore, as
ffl increases the ffl-weak competitive ratios quickly approach
one.
Previous and Current Results
Our generic scheduling problem consists of a collection
Jng of independent jobs to be run
on a single processor. (While these results extend to
the multiprocessor setting, we restrict our attention
to a single processor for simplicity. ) Each job J i has
a release time r i and a length x i . J i can not be run
before time r i . The time required to complete J i is x i
divided by the speed of the processor. We assume that
the online/nonclairvoyant scheduler is not aware of J i
. We consider only preemptive scheduling,
that is, a job can always be restarted from the point of
last execution. We assume that such context switches
require no time.
The problem of minimizing the average response
time of the jobs is a well known and widely studied
problem in operating system scheduling (see for example
[7, 14]). We assume that the nonclairvoyant
scheduler does not learn x i at time r i , and more gen-
erally, can not deduce x i until it has run J i to com-
pletion. The completion time c i of a job J i is the time
at which J i has been allocated enough time to finish
execution. Similarly, the response time is w
and the idle time for a speed s processor is w
For the problem of minimizing the average response
time, the deterministic competitive ratio
is Ω(n^{1/3}), and the randomized competitive ratio is
Ω(log n) [15]. It can easily be shown that any algorithm
that doesn't unnecessarily idle the processors
has a competitive ratio of O(n). Surprisingly, this is
the best known upper bound on the competitive ra-
tio, even allowing randomization. The competitive ratio
for the commonly used Round Robin algorithm is
In section 3, we first consider the queue size as a
function of time. Define QA (t; s) as the set of jobs that
have been released before time t, but have not been
finished by algorithm A by time t assuming that A is
using a speed s processor. We show that for every non-
clairvoyant scheduling algorithm A there is an input
I and a time t such that jQA (t; 1)j=jQ Opt (t;
set cardinality. We then give an on-line
algorithm Balance, B for short, that guarantees
that at all times |Q_B(t, 1+ε)| ≤ (1 + 1/ε) |Q_Opt(t, 1)|.
This implies that Balance has an ε-weak competitive
ratio of 1 + 1/ε for the problem of minimizing the
average response time. In contrast, we show that the
ε-weak competitive ratio of Round Robin is Ω(n^{1-ε}),
for ε < 1. We then assume that the nonclairvoy-
ant scheduler is equipped with a unit speed processor
and an ffl speed processor, instead of a (1+ffl) speed pro-
cessor. (Here we are assuming ffl ! 1.) In this case we
give a nonclairvoyant scheduling algorithm Balance2
with average response time at most 1 + 1/ε times the average
response time of the adversary. This means that
a nonclairvoyant scheduler with a supercomputer and
an old 386 PC can be constant competitive against a
clairvoyant scheduler with only a supercomputer. Fi-
nally, we demonstrate that Balance is fair to every job
it sees by proving that the maximum idle time of Balance
is quite comparable to that of offline.
In best-effort firm real-time scheduling each job J i
now has a deadline d i , and a benefit b i , in addition
to a release time and an execution time. It is also
useful to define the value density of a job J i to be b_i / x_i,
and the laxity of J i for a speed s processor
to be d_i - r_i - x_i / s, which is the maximum amount
J i can be delayed if it is to be completed. Since real-time
systems are embedded systems, the scheduler is
generally aware in advance of the jobs that it may re-
ceive. Thus, the standard assumption is that at time
r i the scheduler learns of x i , d i , and b i . If J i is finished
by time d i then the algorithm receives a benefit
of b i , otherwise no benefit is gained from this job. The
goal of the scheduler is to maximize the total benefit
of the jobs that it completes. Since this is a maximization
problem, the competitive ratio definitions in
the introduction have to be modified by inverting the
ratios. So for example, the competitive ratio for this
problem is then
min A
I
Opt(I)
The deterministic competitive ratio for this problem
is \Theta(\Phi) [3, 4, 11, 18], and the randomized competitive
ratio is \Theta(min(log \Phi; log \Delta)) [8, 12], where the
importance ratio \Phi is the ratio of the maximum value
density of a job to the minimum value density of a
job, and \Delta is the ratio of the length of the longest
job to the length of the shortest job. The competitive
ratio is unbounded even in the special case that each
In section 4, we first assume that both the nonclair-
voyant scheduler and the adversary have unit speed
processors, and that the laxity of each job J i is at
least fflx i . An upper bound on the standard competitive
ratio, under this laxity assumption, will also upper
bound the ffl-weak competitive ratio since any job
J i that doesn't have laxity at least fflx i for a (1
speed processor can't be finished by a unit speed pro-
cessor. This formulation also has the added advantage
of showing the effect of laxity. Under these laxity as-
sumptions, we give an algorithm Slacker that has
a competitive ratio that is only a function of ffl, and
approaches three as ffl increases. These results show
that if a real-time system is designed so that every job
has laxity that is a reasonable fraction of the execution
time of that job, then the resulting competitive ratio is
reasonably small. The effect of laxity on the competitive
ratio in the special case of
in [1, 6]. We then show that the ffl-weak competitive
ratio for Slacker approaches one as ffl increases.
The weak adversary model, comparing an online algorithm
against a less powerful but more knowledgeable
adversary, has been considered before in query-response
problems such as the k-server problem and
its special cases (e.g. [17, 19]), and online weighted
matching [9]. In each case the adversary is handicapped
by having fewer servers. One can argue that
the weak competitive ratio is essentially what is called
the comparative ratio in [10]. However, the results
in [10] are really of a different flavor in that they are
primarily concerned with the effect of partial clairvoy-
ance. Other methods have been suggested to address
the limitations of competitive analysis. These methods
include restricting the input distribution to satisfy
some special properties (e.g. [10, 16]), and comparing
the cost of a solution produced by an online algorithm
on input I to the worst-case optimal cost of any input
of the same size as I [5].
3 Average Response Time
The following well known lemma explains why we
first consider the queue size.
Lemma 3.1 For any scheduling algorithm A with
a speed s processor, the total response time is ∫_0^∞ |Q_A(t, s)| dt.
Lemma 3.2 For every nonclairvoyant scheduling algorithm
A there is an input I and a time t such that |Q_A(t, 1)| / |Q_Opt(t, 1)| = Ω(n).
Proof
jobs arrive at time t 0 . One job arrives
at time t 1. The adversary sets the jobs
lengths so that online hasn't finished any jobs by time
t . One can show that if the adversary always runs
the shorter job, then it will always have at most two
active jobs. The key point to note is that at time t i ,
the sum S of the remaining unfinished lengths
of the two jobs that the adversary has in its queue
satisfies
We now give an online algorithm Balance that
guarantees that its queue size is not too much more
than optimal under the weak adversary scenario.
Algorithm Balance : For any job J i and time t we
define ‖J_i‖_t to be the amount of time that Balance
has executed J i before time t. At all times t, Balance
splits the processing time equally among all jobs J i
that have minimum ‖J_i‖_t .
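A small time-stepped simulation may help to make Balance concrete: the processor is shared equally among the jobs with the least attained service, and the clairvoyant benchmark is approximated by shortest-remaining-processing-time (which is optimal for total response time on one processor). The discretization and names below are ours.

def total_flow_time(jobs, speed, policy, dt=0.001):
    # jobs: list of (release_time, length). Returns sum of (completion - release).
    remaining = [None] * len(jobs)          # None until released
    attained = [0.0] * len(jobs)
    done = [None] * len(jobs)
    t = 0.0
    while any(d is None for d in done):
        for i, (r, x) in enumerate(jobs):
            if remaining[i] is None and r <= t:
                remaining[i] = x
        active = [i for i in range(len(jobs)) if remaining[i] is not None and done[i] is None]
        if active:
            if policy == "balance":         # share the speed among jobs with least attained service
                least = min(attained[i] for i in active)
                share = [i for i in active if attained[i] <= least + 1e-12]
            else:                           # "srpt": run the job with least remaining work
                share = [min(active, key=lambda i: remaining[i])]
            for i in share:
                work = speed * dt / len(share)
                remaining[i] -= work
                attained[i] += work
                if remaining[i] <= 0:
                    done[i] = t + dt
        t += dt
    return sum(done[i] - jobs[i][0] for i in range(len(jobs)))

jobs = [(0.0, 3.0), (0.0, 1.0), (0.5, 0.5), (1.0, 2.0)]
eps = 0.5
print(total_flow_time(jobs, 1.0 + eps, "balance"))   # nonclairvoyant Balance, (1+eps)-speed
print(total_flow_time(jobs, 1.0, "srpt"))            # clairvoyant benchmark, unit speed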
Our analysis of Balance is based on the following
lemma.
Lemma 3.3 Let B be the algorithm Balance. At any
point t in time, |Q_B(t, 1+ε)| ≤ (1 + 1/ε) |Q_Opt(t, 1)|.
The proof of this lemma follows from the ensuing
chain of reasoning. Let UB be the set of jobs
unfinished by Balance, and UA be the set of jobs
unfinished by the adversary at time t mentioned in
Lemma 3.3. Intuitively, the adversary can use time
that Balance spent on jobs in UA to finish jobs in
UB \GammaU A . We need to show that the weak adversary assumption
means that in order to borrow enough time
to finish jobs in UB \Gamma UA it must be the case that
UA is reasonably large. We say that a job J i can
immediately borrow from another job J j , denoted by
Balance ran J j at some time t 0 satisfying
then at time t 0 the adversary could have been executing
Balance was executing J j . The borrow
relation, denoted J i
is the transitive closure
of the immediately borrow relation. We define
g. Intuitively, if the adversary
transfers some time from a J j 2 UA to a J
then
Lemma 3.4 Let J i be a job that Balance saw but
did not complete by time t. For any job J
Proof Suppose there is a job J
that . If there are many such jobs J j ,
then select the one that can be reached from J i by
a shortest path P in the directed unweighted graph
induced by the relation immediately borrow. Let J k
be the job in P immediately before J j . So J k /- J j .
By the definition of P ,
x be the last time that J j was run before time t. Then
notice that it must be the case c k ? x or J k would not
be J j 's predecessor in P . Hence,
By the definition of Balance, if J k /- J j , then k
. Hence, we
deduce We reach a contradiction since
pg. Since the adversary
completes jobs in UB \GammaU A , the total time spent by
Balance on jobs in UA must be at least
We partition the time that Balance spent on jobs in
such that the cumulative
time in the ith class is at least ffl . Note that T i
could be a collection of time intervals. We call a partition
good if there is no job X ∈ U_A such that some portion of the time spent by Balance on X is included in T_i and ||X||_t > ||J_i||_t.
Lemma 3.5 There always exists a good partition.
Proof be the
earliest release time of a job in B(X). Let UB \Gamma
where the indexing is such that
j. By the definition of
the relation immediately borrow, it must be the case
that any job executed by Balance during the time
interval [t(J i ); t] must be a member of B(J i ). Also,
observe that for each i in the range 1
must run and finish jobs in B(J i during
the interval [t(J i ); t]. Therefore, for each 1 - i - p, it
must be the case that the cumulative amount of time
spent by Balance to jobs in B(J must be at
least
Observe that, consequently, by induction on i, the time spent on jobs in U_A by Balance can be distributed to the jobs in U_B − U_A in such a way that each job J_i gets ε||J_i||_t units of time from jobs in B(J_i) ∩ U_A. The result follows since, by Lemma 3.4, ||X||_t ≤ ||J_i||_t for any X ∈ B(J_i) ∩ U_A.
The proof of Lemma 3.5 intuitively suggests that, if the adversary is going to finish a job J_i ∈ U_B − U_A, then it needs to raise ε||J_i||_t units of time from jobs J_j ∈ B(J_i) ∩ U_A, and by Lemma 3.4 we know that ||J_j||_t ≤ ||J_i||_t. This suggests that we analyze the following problem to bound the number of jobs in U_B.
The Politician Problem: There are n politicians trying to raise money from m contributors. The ith politician must raise εS_i dollars, and the jth contributor has C_j dollars to contribute. The election rule says that the jth contributor can contribute to the ith politician only if C_j ≤ S_i. A politician can raise money from many contributors and a contributor can give money to several politicians.
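To make the counting argument concrete, here is a small checker (our own construction; the instance at the end is hypothetical) that verifies a proposed contribution matrix against the election rule and confirms the bound εn ≤ m of Lemma 3.6.

```python
# Toy verification (ours) of the double-counting bound: if every politician
# raises eps*S[i] under the election rule C[j] <= S[i], then eps*n <= m.
def check_politicians(S, C, x, eps):
    n, m = len(S), len(C)
    for i in range(n):
        assert sum(x[i]) >= eps * S[i] - 1e-9          # politician i succeeds
        for j in range(m):
            assert x[i][j] == 0 or C[j] <= S[i]        # election rule
    for j in range(m):
        assert sum(x[i][j] for i in range(n)) <= C[j] + 1e-9  # contributor budget
    return eps * n <= m + 1e-9

# hypothetical instance: n = 2 politicians, m = 2 contributors, eps = 1/2
S, C, eps = [4.0, 2.0], [2.0, 1.0], 0.5
x = [[2.0, 0.0], [0.0, 1.0]]
print(check_politicians(S, C, x, eps))   # True, and indeed eps*n = 1 <= m = 2
```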
Lemma 3.6 If there is a solution for the above Politician Problem then εn ≤ m.
Proof. Let x_{ij} be the contribution from the jth contributor to the ith politician. Since x_{ij}/(εS_i) is the fraction of εS_i that the ith politician got from the jth contributor, and since every politician is successful, we have Σ_j x_{ij}/(εS_i) ≥ 1 for every i. Now by the election rule, x_{ij}/S_i ≤ x_{ij}/C_j, where x_{ij}/C_j is the fraction of C_j that the jth contributor gave to the ith politician. Summing over i and j therefore gives εn ≤ Σ_{i,j} x_{ij}/S_i ≤ Σ_{i,j} x_{ij}/C_j ≤ m.
Proof (of Lemma 3.3). Applying Lemma 3.5 we know that for each job J_i ∈ U_B − U_A the adversary must find ε||J_i||_t units of time from jobs J_j ∈ B(J_i) ∩ U_A. By Lemma 3.4, ||J_j||_t ≤ ||J_i||_t. Now applying Lemma 3.6 to this case, we get |U_A| ≥ ε|U_B − U_A|, and hence |U_B| ≤ (1 + 1/ε)|U_A|.
The following theorem then follows by Lemma 3.1.
Theorem 3.1 The ε-weak competitive ratio of Balance with respect to average response time is at most 1 + 1/ε.
We now show that the commonly used algorithm Round Robin [7, 14] does not have a constant ε-weak competitive ratio for small ε. Round Robin splits the processing time evenly among all unfinished jobs.
Lemma 3.7 For the problem of minimizing the average
response time, the ffl-weak competitive ratio of Round
Robin is
Proof We divide time into stages. Let the ith
stage, start at time t i . We let t
ffl. There are two jobs of length (1+ ffl) released
at time t 0 , and one job is released at each time t i ,
length s(i) that is exactly the same length
as Round Robin has left on each of the previous jobs.
To guarantee that the adversary can finish the job
released at time t i by time t
We then get the recurrence
Expanding this we get The total response
time for the adversary is then \Theta(
which is \Theta(1). The total response time for Round
Robin is then \Theta(
The following theorem shows that Balance does
not overly delay any job to improve the performance
of average response time.
Theorem 3.2 The ε-weak competitive ratio of Balance with respect to maximum idle time is 1 + 1/ε.
Proof t be the time at which Balance
completed a job J i for which the idle time is maxi-
mum. By shortening other unfinished jobs, let us assume
that Balance completed all jobs by time t. Let
be the idle time experienced by
using Balance. By lemma 3.4, the amount
of time spent by Balance on any job during (r
is at most . Due to the difference in speed, it
must be the case that the adversary finished a job J j
at time (1 ffl)t. If Balance did not run J j during
d. On the other hand if Balance
ran J j during (r
Therefore, t is at most t \Gamma d. Notice that J j must
have arrived on or before t\Gamma . Therefore, the idle
time incurred by the adversary for J j must be at least
We now assume that the nonclairvoyant scheduler is equipped with a unit speed processor and an ε speed processor, and show that it can be almost as competitive as if it were equipped with a (1 + ε) speed processor. Here we are assuming ε < 1. We further assume for simplicity that 1/ε is an integer.
Algorithm Balance2: Run the job J_i that has been run the least on the unit speed processor. Run the job, other than J_i, that has been run the least on the ε speed processor.
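A minimal sketch (ours) of one simulation step of this rule follows; the data layout and the step size dt are our own assumptions.

```python
# Sketch (ours) of Balance2: the least-executed active job gets the unit-speed
# processor, the least-executed among the remaining jobs gets the eps-speed one.
def balance2_step(executed, active, eps, dt):
    """Advance both processors by one step of length dt (updates 'executed' in place)."""
    order = sorted(active, key=lambda i: executed[i])
    if order:
        executed[order[0]] += 1.0 * dt        # unit speed processor
    if len(order) > 1:
        executed[order[1]] += eps * dt        # eps speed processor
```

Calling this repeatedly while maintaining the set of released, unfinished jobs reproduces the behaviour analyzed below.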
The analysis of Balance2 follows the same line as the analysis of Balance. We modify the definition of immediately borrow in the following way. A job J_i can immediately borrow time from another job J_j if at some time t' with r_i ≤ t' ≤ t Balance2 was running J_j while either Balance2 was not running J_i, or it was running J_i on a processor that is slower than the processor that J_j was being run on. We define U_A, U_B, borrow, and B(J_i) as before. We define ||J_i||_t to be the amount of the length of J_i that has been executed before time t by Balance2.
Lemma 3.8 Let J_i be a job that Balance2 saw but did not complete by time t. For any job J_j ∈ B(J_i), ||J_j||_t ≤ ||J_i||_t.
The Modified Politicians Problem: There are n politicians and m original contributors. Let S_1 ≥ S_2 ≥ ... ≥ S_n. The ith politician must refund at least εS_{i+1} dollars to the contributors. The jth contributor requires C_j dollars in refunds. The election rule says that the ith politician can refund money to the jth contributor only if C_j ≤ S_i.
Lemma 3.9 If there is a solution for this modified politicians problem then m ≥ ε(n − 1).
Proof Assume without loss of generality that
Assume that S i 1
refunds to C j1 and
refunds to C j2 , with
such a situation a swap. By transitivity we can have
refund to C j2 and S i 2
refund to C j1 . It is easy to
see that we can assume without loss of generality that
there is a solution to the modified politician problem
without any swaps. Let us multiply the refund of each
politician by a factor of 1=ffl. Simultaneously, we also
increase the pool of potential contributors by a factor
of 1=ffl by replacing each original contributor by
1=ffl identical contributors. By repeating the previous
assignment 1=ffl times, the politicians can still be successful
in refunding the money.
We prove by induction that for any i, 1
there are at least i contributors among the m=ffl contributors
that get refunds from politicians 1 through
i, and those i contributors will not get refunds from
other politicians. Assume
gets all of its refunds from the first politician by
the no swapping assumption. Otherwise, if
then C 1 cannot accept refunds from politicians
Now assuming that the hypothesis holds for
show that the hypothesis also holds for i. By the induction
hypothesis, contributors 1 through
get a refund from the ith politician. Let C ff(i) be
the highest contributor that got a refund from the ith
politician. If C ff(i) ? S i+1 , then C ff(i) cannot get a
refund from politicians On the other hand,
if C ff(i) - S i+1 then the i contributor gets all of its
refund from the i politician by the no swapping assumption
Lemma 3.10 Let B be the algorithm Balance2. At any point t in time, |Q_B(t, 1 + ε)| ≤ (1 + 1/ε) |Q_Opt(t, 1)|.
Proof Once again we are going to reduce to
the modified politicians problem, where the members
of UA are the contributors, and the members of UB \Gamma
UA are the politicians. Let UB \Gamma
j. So the jobs are ordered
by increasing order of release times. Since at least two
jobs are unfinished during the time interval (r
it must be the case that Balance2 was running both
processors throughout this period.
assume that Balance2 was running
two jobs J a and J b at a time t 0 between time r ff(i) and
t. Then we claim J a g. If
neither J a or J b is J ff(i\Gamma1) then both are in B(J ff(i\Gamma1) )
by definition. Otherwise if say J
are all being executed in round robin
fashion, and the claim once again follows. Notice that
Assume that the length of each job J ff(i)
is exactly . This is the best case for the adver-
sary. Then Balance2 and the adversary executed the
same length of each job in (UB \Gamma UA
Hence, the extra ffl(t \Gamma r ff(2) ) work (the refunds) done
by Balance2 must go to jobs in UA . Consider how
this might be distributed. For each
must be the case that the extra work must during the
period (r must be distributed to jobs in
We think of J ff(i\Gamma1) as giving this
refund to jobs in B(J ff(i\Gamma1) . The election rule is
satisfied by lemma 3.8. We now apply the modified
politicians lemma, and the rest of the argument is as
before.
Theorem 3.3 The average response time for Balance2, with a unit speed processor and an ε < 1 speed processor, is at most 1 + 1/ε times the average response time of an adversary given only a unit speed processor.
Proof. This theorem follows by applying Lemma 3.10 and noting that the adversary must be running some jobs for a duration of εℓ even after Balance2 has completed all jobs at time ℓ.
4 Real-time Scheduling
Before describing the algorithm Slacker we need
to introduce some definitions and notation. Recall
that we first assume that both the nonclairvoyant
scheduler and the clairvoyant adversary have unit
speed processors, and that the laxity of each job J_i is at least εx_i. For notational convenience, let Φ denote the ratio of the largest value density to the smallest.
A job J i is viable at time t for a scheduling algorithm
A if A can finish J i before d i , that is, if A has run J i
for at least x units of time. Define the slack
of a job J i at time t by s i t. A job is
fresh at time t if s Otherwise, we say
the job is stale at time t. Let c > 1 be a constant that we define later. Define the density class of a job J_i to be ⌊log_c v_i⌋. Assuming that we normalize so that the smallest value density is one, the density classes then range from 0 to ⌊log_c Φ⌋. If X is a set of jobs then let ||X|| denote the total benefit of the jobs in X, that is, the sum of their benefits. Let Opt be the set of jobs finished by the adversary.
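The quantities above translate directly into code; the following helpers are our own sketch, and the freshness threshold is deliberately left as a parameter because the exact constant is garbled in our copy of the text.

```python
import math

# Helpers (ours) for the quantities used by Slacker.
def density_class(v, c):
    """Index floor(log_c v) of the density class of a job with value density v."""
    return math.floor(math.log(v, c))

def slack(deadline, remaining, t):
    """Time to spare if a job (deadline, remaining work) is run from t onward."""
    return deadline - t - remaining

def is_fresh(deadline, remaining, t, thresh):
    """Fresh = slack still above a threshold; 'thresh' is a placeholder parameter."""
    return slack(deadline, remaining, t) >= thresh
```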
Algorithm Slacker: At time r_i Slacker switches to J_i if and only if J_i is in a higher density class than the job J_j that Slacker is currently running. If this happens, J_j is saved as the representative job for density class ⌊log_c v_j⌋. If Slacker finishes a job J_i at time t, then let α be the largest integer such that there is currently a fresh job in density class α or a viable representative job in class α. If there is a viable representative job J_i in density class α then Slacker resumes execution of J_i. Otherwise, Slacker starts executing an arbitrary fresh job in density class α.
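The selection rule can be phrased compactly in code. The sketch below is our own simplification: jobs are plain dictionaries with precomputed cls (density class), fresh, and viable flags, and bookkeeping such as updating these flags over time is left out.

```python
# Simplified sketch (ours) of Slacker's job-selection rule.
def on_arrival(current, new_job, reps):
    """Preempt only if the arriving job is in a strictly higher density class."""
    if current is None or new_job["cls"] > current["cls"]:
        if current is not None:
            reps[current["cls"]] = current   # save preempted job as representative
        return new_job
    return current

def on_completion(pending, reps):
    """Resume the highest class holding a viable representative or a fresh job."""
    classes = {j["cls"] for j in pending if j["fresh"]}
    classes |= {c for c, j in reps.items() if j["viable"]}
    if not classes:
        return None
    a = max(classes)
    if a in reps and reps[a]["viable"]:
        return reps.pop(a)
    return next(j for j in pending if j["fresh"] and j["cls"] == a)
```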
Let S be the set of jobs completed by Slacker,
and R the set of jobs run by Slacker. S may be
a proper subset of R since Slacker may not finish
every job that it started.
Lemma 4.1 Let J_i be an arbitrary representative job in density class α that Slacker did not complete. Then for a period of at least δx_i units of time between r_i and d_i, Slacker was running a job in a density class strictly higher than α.
Lemma 4.2 Assuming the density of each job is an integral
power of c,
Proof We imagine that each job J i 2 R has
an account associated with it. The account for a job
initially starts with b i points. All other accounts
start with zero points. We redistribute points
from accounts of jobs in S to accounts of jobs in R \Gamma S.
The argument is by reverse induction on the density
classes. First note that Slacker finishes every job in
density class blog c \Phic that it begins. Assume we now
are considering jobs in density class ff. Let J i
be a job that was the representative job for density
class ff between time t 1 and t 2 . If Slacker ran a
job J j in density class fi ? ff for t units of time between
are transferred
from J j 's account to J i 's account. By lemma 4.1, J i
borrowed for a total time of at least ffi x i , and hence
borrowed at least c ff x i points. Thus the account for
now contains at least b i points.
We now need to examine how much the account of
a job J i in density class ff can be depleted by jobs in
lower density classes. The representative jobs in density
class ff take at most c fi x i =ffi points from the
account of J i . Hence, the number of points remaining
in the account for J i is at least
Lemma 4.3 Assuming the density of each job is an integral
power of c, kOptk- 1+3ffi
Proof Assume that if the adversary ran a job
with density c ff for t units of time, we credit the adversary
with points regardless of whether it finished
the job or not. Define a job to be dense if it has density
c ff or greater. We then show that the total amount of
time the adversary spent running dense jobs is at most
times the time that Slacker was running
dense jobs. We divide time up in the following way. Let σ_0 be the first point of time where Slacker starts running a dense job. Let τ_i, i ≥ 0, be the first point of time after σ_i where Slacker is not running a dense job. Let σ_i, i ≥ 1, be the first point in time after τ_{i−1} at which Slacker begins running a dense job. Note that no dense job can arrive between any τ_i and σ_{i+1}. Consider the longest dense job J_j that arrived between a σ_i and a τ_i, and that Slacker did not run. Then J_j had to have been stale at time τ_i. Hence,
. This means
that the time that the adversary is running dense jobs
that Slacker didn't run is at most 1+2ffi
times the
time that Slacker was running dense jobs. We must
add one to this ratio for the jobs that both the adversary
and Slacker ran. The lemma then follows by
reverse induction on ff.
Theorem 4.1 Under the assumption that every job J i
has laxity at least fflx the competitive ratio of
Slacker is at most
Proof applying lemma 4.2
and lemma 4.3, and by removing the condition that
the density of a job is an integral power of c.
One can verify that, by a suitable choice of the constants c and δ as functions of ε, we get a bounded competitive ratio for all δ > 0, and that the competitive ratio goes to three as δ increases. If we go back to assuming that Slacker has a (1 + ε) speed processor, we can show that the ε-weak competitive ratio approaches one by modifying Lemma 4.3 as follows. The proof is very similar to the proof of Lemma 4.3.
Lemma 4.4 Assuming the density of each job is an integral
power of c, and that Slacker has a (1 speed
processor,
5 Conclusion
We believe that the weak adversary model will be
useful in identifying online algorithms that work well
in practice for other types of problems. It is important
that the weakening of the adversary should be
done in such a way that the corresponding strengthening
of online algorithm can be achieved in practice.
It is worth mentioning that for the problems considered
in this paper, increasing the speed of the online
processor is not the only way to weaken the adver-
sary. For example, in the case of real-time scheduling,
it suffices to design the real-time system in such a way
that the laxity condition is satisfied.
Finally, we would like to mention that the weak
adversary model has been used recently to show that
the natural greedy algorithm, which works reasonably
well in practice, is almost optimal for online weighted
matching [9]. Traditional competitive analysis shows a bound of Θ(2^m) whereas the weak adversary analysis yields a bound of Θ(log m), where m is the size of the graph.
Acknowledgements
: The second author would like
to thank Daniel Mosse, Rege Colwell, Richard Su-
choza, and Dimitri Zorine for many helpful discussions
on real-time scheduling.
--R
"On improved performance guarantees through the use of slack times,"
"On-line scheduling to maximize task completions"
"On the competitiveness of on-line real-time task scheduling"
"On-line scheduling in the presence of overload"
"A new measure for the study of on-line algorithms"
"On-line real-time scheduling with laxities,"
Operating System Concepts
"Fault- tolerant real-time scheduling"
"The on-line transportation problem"
"Beyond competitive analysis,"
"D over :An optimal on-line scheduling algorithm for overloaded real-time systems"
"MOCA: a multiprocessor on-line competitive algorithm for real-time system scheduling,"
Operating Systems: Concepts and Designs
"Non- clairvoyant scheduling"
"A statistical adversary for on-line algorithms"
"Amortized efficiency of list update and paging rules"
"On-line scheduling of jobs with fixed start and end time"
"The k-server dual and loose competitiveness for paging,"
Roughgarden , va Tardos, How bad is selfish routing?, Journal of the ACM (JACM), v.49 n.2, p.236-259, March 2002 | resource augmentation;multi-level feedback scheduling;scheduling |
347482 | Balanced sequences and optimal routing. | The objective pursued in this paper is two-fold. The first part addresses the following combinatorial problem: is it possible to construct an infinite sequence over n letters where each letter is distributed as evenly as possible and appears with a given rate? The second objective of the paper is to use this construction in the framework of optimal routing in queuing networks. We show under rather general assumptions that the optimal deterministic routing in stochastic event graphs is such a sequence. | Introduction
It is a rather general problem to consider a system with multiple resources and tasks. Tasks
can be performed by any resource and arrive in the system sequentially. The problem is to
construct a routing of the tasks to the resources to minimize a given cost function. Such
models are common in multiprocessor systems and communication networks, where the
cost function may be the combined load in the resources.
Here, we show that under rather general assumptions, the optimal routing sequence in
terms of expected average workload in each resource is given by a balanced sequence, that
is a sequence in which the option to route towards a given resource, is taken in an evenly
distributed fashion.
INRIA, BP 93, 2004 Route des Lucioles, 06902 Sophia Antipolis Cedex, France. E-mail: alt-
man@sophia.inria.fr. URL:http://www.inria.fr:80/mistral/personnel/Eitan.Altman/me.html
y INRIA/CNRS/INRIA, BP 93, 2004 Route des Lucioles, 06902 Sophia Antipolis Cedex, France. E-mail:
gaujal@sophia.inria.fr.
z Dept. of Mathematics and Computer Science, Leiden University, P.O.Box 9512, 2300RA Leiden, The
Netherlands. E-mail: hordijk@wi.leidenuniv.nl. The research of Arie Hordijk was done while he was on
sabbatical leave at INRIA, Sophia-Antipolis; it has been partially supported by the Minist#re Fran#ais de
l'#ducation Nationale et de l'Enseignement Sup#rieur et de la Recherche.
This motivates the -rst part of the paper which is essentially based on word combinatorics
and uses its speci-c vocabulary. This part relies heavily on results in [9, 17].
The problem of balanced sequences and exactly covering sequences has been studied using
combinatorial as well as arithmetic techniques [20, 23, 18, 7, 16].
However, in these studies, the analysis of balanced sequences was done per se and did
not present any motivations or applications. In particular, the use of these constructions
in discrete event dynamic systems may not have been discovered before the seminal work
of Hajek, [10] in 1985. This work solves the one dimensional case problem, further generalized
in [1]. The goal of the present paper is to extend the same type of results to a
multidimensional case, which is surprisingly more diOEcult and of dioeerent nature as the
one dimensional case.
This is done in the second part of this paper, where we will show an application of balanced
sequences for routing problems. This part uses results from convex analysis, mainly
from [10, 2, 1]. The main results that are used here are of two dioeerent kinds. First, we
use the fact that the workload as well as the waiting time of customers entering a (max,+)
linear system are multimodular functions, under fairly general assumption ( stationarity of
the arrival process and of the service times)([1]). Then, we use general theorems from [2]
that prove that multimodular functions are minimized by regular sequences. The superposition
of several such sequences being a balanced sequence, this is the basis of the main
result of this paper.
It is interesting to exhibit this link between balanced sequences and scheduling prob-
lems, such as routing among several systems.
More precisely, the paper is structured as follows. In the second section, we introduce
a formal de-nition of balanced sequence and we present an overview on their properties.
The section 3 makes the link between the notion of balanced sequences and the optimal
scheduling in networks and is also devoted to prove the optimality of balanced sequences
for routing customers is a multiple queue system. Section 4 presents special cases for which
the optimal rates can be computed.
Balanced sequences
In this section, we will present the notion of balanced sequences, which is closely related
to the notion of Sturmian sequences [14] as well as exactly covering sequences. This
presentation is not exhaustive and many other related articles can be consulted for further
investigation on this topic [22, 11, 3, 5, 20, 7]. Although the section is self-contained and
presents several result which are of interest by their own, we mostly focus the rate problem
(see Problem 2), which will be used in the application section (# 3).
2.1 Preliminaries
Let A be a -nite alphabet and A Z the set of sequences de-ned on A.
If u 2 A Z , then a word W of u is a -nite subsequence of consecutive letters in u:
. The integer k is the length of W and will be denoted jW j.
If a 2 A, jW j a is the number of a's in the word W .
De-nition 2.1. The sequence u 2 A Z is balanced if for any two words W and W 0 in u of
same length, and any a 2 A,
If a 2 A, we also de-ne the indicator in u of the letter a as the function ffi a
otherwise. The support in u of the letter a is the
set S a
For any real number x, bxc will denote the largest integer smaller or equal to x and
dxe will denote that smallest integer larger or equal to x.
Lemma 2.2. If a sequence u 2 A Z is balanced, then for any a 2 A, there exists a real
number
lim
p a is called the rate of ffi a (u).
Proof. Let us de-ne s
The rest of the proof is classical by sub-additivity arguments. The proof for fffi a (u) n g n60
is similar. The fact that both limits coincide is obvious.
Note that the sum of the rates for all letters in a sequence u is one.
a2A
Now, we can present the main result which founds the theory of balanced sequences.
Theorem 2.3 (Morse and Hedlund). Suppose a sequence d 2 f0; 1g Z is balanced with
asymptotic rate letter 1. The support S 1 of d satis-es one of the following
cases.
(a) (irrational case) p is irrational and that exists OE 2 R such that
(b) (periodic case) p 2 Q and there exists OE 2 Q such that
(c) (skew case) Z such that
or
Sequences for which the support is of the form S are called bracket
sequences. This theorem shows the relation that exists between balanced sequences and
bracket sequences. The irrational case is the easiest case and can be characterized (see
Theorem 2.17). The two rational cases are more difficult to study. Our main objective for
studying balanced sequences is their application in routing control (see sections 3 and 4).
In our analysis, only the right tail of a sequence, that is {u_i}_{i > i_0} for some i_0 in N, matters.
Also, a sequence u and a shifted version, u 0 will have the same asymptotic performance
(see equation (7)). Hence, we consider such two sequences as being equivalent. Note that
this allows us to consider that the skew case and the rational case are equivalent.
Definition 2.4. A sequence d ∈ {0, 1}^Z is (ultimately) regular if there exist two real numbers φ and p (and an integer k) such that for all n (respectively, for all n > k), d_n = ⌊np + φ⌋ − ⌊(n−1)p + φ⌋. Note that the support of such a d is then a bracket set of the form appearing in Theorem 2.3.
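The following short sketch (ours) generates such a regular sequence and checks the balance property by brute force on a finite window; the particular values of p, φ and the window length are arbitrary.

```python
from math import floor

# Sketch (ours): a regular {0,1}-valued bracket sequence with rate p and phase
# phi, together with a brute-force balance check on a finite window.
def regular(p, phi, n):
    return [floor(k * p + phi) - floor((k - 1) * p + phi) for k in range(1, n + 1)]

def is_balanced(seq):
    for m in range(1, len(seq)):
        counts = {sum(seq[i:i + m]) for i in range(len(seq) - m + 1)}
        if max(counts) - min(counts) > 1:
            return False
    return True

d = regular(0.4142, 0.3, 200)
print(sum(d) / len(d), is_balanced(d))   # empirical rate close to p; window is balanced
```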
Theorem 2.5. Let u ∈ A^Z.
(i) If δ_a(u) is regular for all a ∈ A, then u is balanced.
(ii) If u is balanced, then δ_a(u) is ultimately regular for all a ∈ A.
Proof. (i) is straightforward.
(ii) is a direct consequence of Theorem 2.3, since in all three cases the sequence δ_a(u) is ultimately regular. An elementary proof of (ii) which does not use Theorem 2.3 can be found in [21].
In [13], the sequence u is said to have the reduction property if δ_a(u) is regular for all a ∈ A. Theorem 2.5 shows that u has the reduction property (ultimately) if and only if it is balanced.
2.2 Constant gap sequences
Constant gap sequences are strongly balanced sequences, in the following sense.
Definition 2.6. A sequence G is constant gap if for any letter a, δ_a(G) is periodic, with a period of the form 0...0 1, that is, with exactly one occurrence of 1 per period.
Note that this explains the fact that G is said to have constant gaps for the letter a, since each a is separated from the next a in G by a constant number of letters.
Proposition 2.7. Constant gap sequences are balanced.
Proof. For each letter a, δ_a(G) is periodic with exactly one 1 per period of some length q_a. Therefore, δ_a(G) is regular with rate 1/q_a. Using the characterization of balanced sequences given in Theorem 2.5, this shows that G is balanced.
Proposition 2.8. Constant gap sequences are periodic.
Proof. For each letter a, δ_a(G) is periodic with some period q_a. The period of G is lcm(q_a, a ∈ A).
In the next lemma, we give a characterization of constant gap sequences that stresses the fact that constant gap is some kind of strong balance.
Proposition 2.9. G is constant gap if and only if, for any two finite words W and W' included in G with ||W| − |W'|| ≤ 1, for each letter a, ||W|_a − |W'|_a| ≤ 1.
Proof. Let a be a letter in the alphabet.
First, assume that G is constant gap. If |W|_a − |W'|_a ≥ 2, then, necessarily, |W| ≥ |W'| + 2.
Conversely, let aUa and aU'a be any two words in G with no a in the subwords U and U'. If |U| > |U'|, then U is a word with no a while aU'a, whose length is at most |U| + 1, contains two a's. This is a contradiction. Therefore |U| = |U'| and G is constant gap.
Since a constant gap sequence is balanced, each letter appears with a given rate in the sequence. Note however that since a constant gap sequence is necessarily periodic, the rate of each letter is rational.
sequence. note however that since a constant gap sequence is necessarily periodic, the rate
of each letter is rational.
As we will do in section 2.4 for the case of balanced sequences, we now address the
following question:
Problem 1: Given a set (p possible to
construct a constant gap sequence on N letters with rates (p
We will not solve this problem for a general N . We will only give some properties of
the set (p which will be useful in the following. A non-eoeective characterization
of such (p in [23], under the name exact covering sequences.
De-nition 2.10. The set of couples f(' called an exact covering sequence
if for every nonnegative integer n, there exists one and only one 1 6 i 6 N such
that
As a general remark, note that (p are rational numbers of the form p
with d the smallest period of G. Therefore, we have,
for each i, k i divides
d. By de-nition of the rates, we also have p a = 1=q a for all letters.
We have the following result.
Proposition 2.11. The rates (p are constant gap if there exists N numbers
called such that the couples f(' an exact covering
sequence.
Proof. This property is a simple rewriting of the fact that each letter a i in a constant gap
sequence appears every f' i Ng.
Now, suppose that f(' is an exact covering sequence. Then in the
series
the coeOEcient of x n in this series is equal to Therefore, we have
Using this characterization we have the following interesting property.
Lemma 2.12 ([18]). Assume f(' is an exact covering sequence and that
appears at least twice in the set q
Proof. The proof given here is similar to the discussion in [23] on exact covering sequences.
e 2i-=r for some integer r ? 1. By de-nition, w is a primitive r-th root of one. We
. The set is exactly the set g. Equation (1)
speci-ed for can be written
This implies that the set cannot be reduced to a single point since w is not
zero.
To give some concrete examples, we consider the cases where N is small. First, note
that in the case where the k i are not all equal, (assume k 1 is the largest of all), we have
where l 1 is the gap between two letters a 1 . This implies,
2: (2)
Proposition 2.13. There exists a constant gap sequence G with rates only
Proof. Let a be a letter in G with gap l. Since the alphabet contains only two letters,
1. This means
Proposition 2.14. There exists a constant gap sequence G with rates (p
only if the following holds: (p or (1=2; 1=4; 1=4) (up to a permu-
tation).
Proof. Assume that (p 1
Using Equation (2), l Therefore, the sequence obtained from G when
removing all the letters a 1 is constant gap. Applying Lemma 2.13 shows that . The
only solution is 1=4. The associated constant gap sequence is
(a 1 a 2 a 1 a 3
Proposition 2.15. There exists a constant gap sequence G with rates (p
only if fp belongs to the set (up to a permutation),
Proof. We give a sketch of an elementary proof of this fact. If the rates are all equal, then
note that Equation 2 implies that if the rates
are not all equal 2. We consider any letter a assume that the number of
a 1 's in between two a i 's is not constant and takes values m and then we have
on the other hand. This is impossible
since l 1 6 2. Therefore, the number of a 1 's in between two a i 's is constant. This is true
for all i. The sequence formed by removing all a 1 's is still constant gap. It has rates of
the form (1=3; 1=3; 1=3) or (1=2; 1=4; 1=4). From this point a case analysis shows that the
original sequence has rates (1=2; 1=4; 1=8; 1=8); (1=2; 1=6; 1=6; 1=6)or(1=3; 1=3; 1=6; 1=6)g by
inserting the letter a 1 in a constant gap sequence over the letters a 2 ; a 3 ; a 4 .
These few examples of constant gap sequences illustrate the fact that the rates in these
sequences have very strong constraints.
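These constraints are easy to test mechanically. The following checker is our own illustration: it verifies directly on a periodic word whether each letter occurs with a constant gap.

```python
# Check (ours) that a periodic word is constant gap: within the infinite
# repetition, consecutive occurrences of each letter are equally spaced.
def is_constant_gap(period):
    w = period * 3                       # a few repetitions are enough
    for a in set(period):
        pos = [i for i, x in enumerate(w) if x == a]
        gaps = {j - i for i, j in zip(pos, pos[1:])}
        if len(gaps) > 1:
            return False
    return True

print(is_constant_gap("abac"))    # True : rates (1/2, 1/4, 1/4)
print(is_constant_gap("aabb"))    # False: gaps for 'a' are 1 and 3
```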
2.3 Characterization of balanced sequences
Several studies have been recently done on balanced (or bracket) sequences [13, 20, 21, 16,
15]. In [9], a characterization involving constant gap sequences is given.
Proposition 2.16 (Graham). Let U be a balanced sequence on the alphabet f0; 1g. Construct
a new sequence S by replacing in U , the subsequence of zeros by a constant gap
sequence G on an alphabet A 1 , and the subsequence of ones by a constant gap sequence H
on a disjoint alphabet A 2 . Then S is balanced on the alphabet A 1 [ A 2 .
Proof. We give a proof similar to Hubert's proof ([11]). Let a be a letter in A 1 (the proof
is similar for a letter in A 2 . Let W and W 0 be two words of S of the same length. Then,
the corresponding words X and X 0 in U verify jjW since U is balanced. If
we keep only the 0's in X and X 0 , then the corresponding Z and Z 0 words in G satisfy
G is constant gap, and using Lemma 2.9, jjZj a \Gamma jZ 0 j a j 6 1.
We end the proof noting that the construction of Z and Z 0 implies jZj a = jW j a and
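The composition of Proposition 2.16 is easy to carry out explicitly; the sketch below is our own illustration, with a small hand-picked balanced binary word and two constant gap words.

```python
from itertools import cycle

# Sketch (ours) of Graham's composition: the 0's of a balanced binary word are
# replaced, in order, by the letters of a constant gap word G, and the 1's by
# the letters of a constant gap word H on a disjoint alphabet.
def compose(u, g_period, h_period):
    g, h = cycle(g_period), cycle(h_period)
    return [next(g) if bit == 0 else next(h) for bit in u]

u = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]       # a balanced word with rate 2/5 of 1's
print(compose(u, "xy", "z"))             # ['x','z','y','x','z','y','z','x','y','z']
```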
Conversely, we have the following theorem.
Theorem 2.17 (Graham). Let u 2 A Z be balanced and non periodic. Then there exists
a partition of A into two sets A 1 and A 2 such that the sequence v de-ned by:
is regular. Furthermore, the sequences z 1 and z 2 constructed from u by keeping only the
letters from A 1 and A 2 respectively have constant gaps.
The proof of this theorem was given by Graham [9] for bracket sequences. An independent
later proof can be found in [11] for balanced sequences. The relation between balanced
and bracket sequences given in Theorem 2.5 makes both proofs more or less equivalent.
2.4 Rates in balanced sequences
also note that a balanced sequence has several asymptotic properties, such as the
following lemma.
Let us formulate precisely the problem which we will study in this section.
Problem 2: Given a set (p possible to construct a balanced
sequence on N letters with rates (p
We will see in the following that this construction is not possible for all the values of
the rates (p makes the construction possible, such a
tuple is said to be balanceable. A similar problem has been addressed in [9, 15, 7], Where
relations between the rates in balanced sequences are studied.
2.5 The case
This case is well known and balanced sequences with two letters have been extensively
studied (see for example [6, 14]). The following result is known even if it is often given
under dioeerent forms.
Theorem 2.18. For all p with 0 ≤ p ≤ 1, the set of rates (p, 1 − p) is balanceable.
Proof. The proof is similar to the proof of the first part of Theorem 2.5. We construct a sequence S as the support of the function n ↦ ⌊np⌋ − ⌊(n−1)p⌋. It is a balanced sequence because the interval ]k, k + m] contains exactly ⌊(k+m)p⌋ − ⌊kp⌋ points of S, and this value can differ by at most one when k varies, so S is a balanced sequence. If S' is the complementary set of S, then it should be clear that S' has asymptotic rate 1 − p and that the interval ]k, k + m] contains m − (⌊(k+m)p⌋ − ⌊kp⌋) points of S'.
Note that the proof of Theorem 2.18 also gives a construction of a balanced sequence with the given rates.
2.6 The case
The case essentially dioeerent from the case 2. In the case
possible rates are balanceable while when there is essentially only one set of rates
which is balanceable. This result, when formulated under this form, was partly proved
and conjectured in [13] and proved in [20]. In earlier papers by Morikawa, [16], a similar
result is proved for bracket sequences. If Theorem 2.5 is used, then the result of Morikawa
can be used directly to prove the following theorem. Therefore, we attribute this result to
Morikawa, even if the result was stated dioeerently.
Theorem 2.19 (Morikawa). A set of rates (p_1, p_2, p_3) is balanceable if and only if (p_1, p_2, p_3) = (4/7, 2/7, 1/7) up to a permutation, or two rates are equal.
Proof. The proof of Morikawa is very technical since it does not use the balanced property
for bracket sequences. If the balanced property is used, then the proof becomes very easy.
We give a proof slightly simpler than the proof in [20]. First, assume that
let S be a balanced sequence with two letters fa; bg constructed with the rates (p 1
In S, replace alternatively the iais by the letters a 1 ; a 2 , we get a sequence S 0 on alphabet
us show that S 0 is balanced. Since S is balanced, the
number of iajs in an interval of length m is k or k +1, for some k. Now, for S 0 , the number
of ia 1 js (resp. ia 2 js) in such an interval is either is odd and k=2
or is even. This proves that S 0 is balanced.
Now, assume that (p 1 are three dioeerent numbers. We assume that
We will try to construct a sequence W with these respective rates on the alphabet fa; b; cg.
step 1: the sequence iacaj must appear in W .
There exists a pair of consecutive iaj with no ibj in between since This means that
the sequences iaaj or iacaj appear. If iaaj appears, then a icj is necessarily surrounded
by two iajs.
step 2: the sequence ibaabj must appear in W .
There exist a pair of consecutive ibj with no icj in between. This sequence is of the form
"ba n bj. Now, n 6 1 is not possible because of the presence of iacaj and b-regularity.
implies the existence of ia by a-regularity which is incompatible with
iba n bj because of b-regularity. Therefore 2. Note that this also implies the existence
of iaaj and of iabaabaj.
step 3: the sequence iabacabaj appears in W .
the sequence W must contain a icj. This icj is necessarily surrounded by two iajs since
iaaj exists by a-regularity. This group is necessarily surrounded by two ibjs since ibaabj
exists, and consequently, necessarily surrounded by two iajs, since iabaabaj exists. We get
the sequence iabacabaj.
Last step:
letter around this word can be a icj because ibaabj exists. None can be a ibj since
iacaj exists. Therefore, they have to be two iajs. Then note that the following two letters
cannot be a icj (because of the existence of iabaabaj), nor an iaj (because of the existence
of ibacj) so it is a ibj, then followed necessarily by an iaj (because iaaj exists). At this
point, we have the sequence i?abaabacabaaba?j. Both ?s are necessarily icjs.
To end the proof, note that a icj necessarily has the word iabaabaj on its right and
the same word on its left. Finally note that the word iabaabaj is necessarily surrounded
by two icjs. Therefore the sequence W is periodic of the form (abacaba) ! .
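The final claim is easy to confirm by brute force; the check below is our own illustration that the periodic word of the proof is indeed balanced with rates (4/7, 2/7, 1/7).

```python
# Brute-force confirmation (ours) that (abacaba)^w is balanced.
def is_balanced_word(w):
    return all(max(w[i:i + m].count(a) for i in range(len(w) - m + 1)) -
               min(w[i:i + m].count(a) for i in range(len(w) - m + 1)) <= 1
               for a in set(w) for m in range(1, len(w)))

w = "abacaba" * 4
print(is_balanced_word(w), {a: w.count(a) / len(w) for a in "abc"})
```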
2.7 The case
The case very similar to the case rates. However when two
rates are equal, this case is more complicated.
Theorem 2.20. A rate tuple (p_1, p_2, p_3, p_4) with distinct rates is balanceable if and only if (p_1, p_2, p_3, p_4) = (8/15, 4/15, 2/15, 1/15) up to a permutation.
Proof. Again, it seems Morikawa has proved a similar result for bracket sequences. How-
ever, using the balanced property, the proof becomes much simpler. We suppose that
we show that there is only one balanced sequence with frequencies
those frequencies are (8=15; 4=15; 2=15; 1=15).
As a preliminary remark, note that if there exists at least one word
that does not contain any a j . This fact will be used several times in the following
arguments.
The proof involves dioeerent steps.
Step 1: W contains the words iacaj or iadaj or iacdaj or iadcaj.
There exists two consecutive iajs with no ibj in between because Therefore,
either iaaj or iacaj or iadaj or iacdaj or iadcaj exist. If iaaj exists, then, a icj is
surrounded by two iajs.
Step 2: W contains the word ibaabj
First, we show that if a word iba n bj exists, then 2. Indeed, the fact that iadaj
or iacaj or iacdaj or iadcaj exist makes ibbj and ibabj impossible. On the other hand, if
the existence of ia necessary by a-regularity and is incompatible with
the existence of iba n bj because of b-regularity.
Now, if no word of the form iba n bj exists, then there exist two consecutive ibjs with
one idj and no icj in between. This word (s 1 ) is of the form: ia i ba j da k ba l j. Note that the
numbers l may be equal to zero but b-regularity.
There also exist two consecutive icjs with no idj in between. This word (s 2 ), is of
the cj. Note that are integers that can dioeer by at most one,
the length of s 1 , js j. This is impossible by
c-regularity.
step 3: the word iabacabaabacabaj exists in W . There exists two consecutive icjs
with no idj in between. From step 3 in the proof of Lemma 2.19, we know that a icj is
necessarily surrounded by the word iabaj. Moreover, from step 4 in the proof of Lemma
2.19, we have: iabacabaabaUcabaj, where U is a word that contains no idj and no icj. U
cannot start with an iaj (because of ibacabj) cannot start with a ibj (because of iacaj)
cannot start with a icj and cannot start with a idj by construction. Therefore U has to
be empty.
Step 4: W is uniquely de-ned and is periodic of period iabacabadabacabaj.
Somewhere, W contains a idj. From this point on, we can extend the word uniquely as:
iabacabadabacabaj around this idj, and the word iabacabaabacabaj has to be surrounded
by two idjs. This ends the proof.
To complete the picture, it is not diOEcult to see that,
Proposition 2.21. if the tuple (p is made of less than two distinct numbers,
then it is balanceable.
Proof. First, if the rates are all equal, they are obviously balanceable. If three of them are
we can construct a balanced sequence with rates (3p 1
and we construct a balanced sequence with rates (p using Proposition 2.16
(where we take G the constant gap sequence (a 1 a 2 a 3
two pairs of
rates are equal, say then we construct a balanced sequence with rates
apply Proposition 2.16.
If the tuple (p is made of exactly three distinct numbers, then this is a more
complex case which is not studied here.
2.8 The general case
In this section, we are interested in the case of arbitrary N . First, note that Proposition
2.21 easily generalizes to any dimension.
Proposition 2.22. If the tuple (p is made of less than two distinct num-
bers, then it is balanceable.
Proof. The proof is similar to the proof of Proposition 2.21.
Proposition 2.23. If (p
balanceable.
Proof. The proof is very similar to that of Proposition 2.16. If W is a balanced sequence
with letters consider the sequence W 0 constructed starting from W and
replacing each ia 1 j by an element of (b 1 j, ib in a cyclic way. Note that W 0 has
the following set of rates, (p 1 =k;
Next, we show that W 0 is balanced. Since W is balanced, for an arbitrary integer m, the
number of ia 1 js in an interval of length m is n or n + 1, for some n. Now, for W 0 , the
number of ib i js in such an interval is either b(n \Gamma 1)=kc or b(n + 1)=kc. This proves that
W 0 is balanced.
For the general case and distinct rates , it is natural to give the following conjecture
(due to Fraenkel for bracket sequences):
Conjecture 2.24. A set of N distinct rates {p_1, ..., p_N} is balanceable if and only if p_i = 2^{N−i}/(2^N − 1) for all i.
We have not been able to prove this fact. Morikawa has also given some insight in this
problem. It is not clear whether it has been completely proven. Here, we only have partial
results given in the following lemmas.
Lemma 2.25. The rates (2^{−1}, 2^{−2}, ..., 2^{−(N−1)}, 2^{−(N−1)}) are balanceable, for all N ∈ N.
This lemma is the "if" direction of the conjecture.
Proof. We construct a balanced sequence WN in the following inductive way.
First note that WN has rates
we show that WN is balanced by induc-
tion. In the sequence WN , any letter (say letter appears 2 N \Gammaj times in one period and
is of the form of 2 N \Gammaj \Gamma 1 intervals of the same length (2 j ) and one of length
By construction of W_{N+1}, these properties still hold and therefore W_{N+1} is balanced.
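Our reading of this inductive construction, consistent with the words V_k used in the proof of Lemma 2.27, is that the period of W_N is V_{N−1} followed by the letter a_N, where V_1 = a_1 and V_k = V_{k−1} a_k V_{k−1}. The sketch below (ours) builds this candidate period and checks its rates; it is an illustration of one construction achieving the stated rates, not necessarily the exact one intended by the authors.

```python
# Sketch (ours): candidate period of W_N realizing rates (1/2, ..., 1/2^{N-1}, 1/2^{N-1}).
def period_W(N, letters="abcdefgh"):
    V = letters[0]                       # V_1 = a_1
    for k in range(1, N - 1):
        V = V + letters[k] + V           # V_k = V_{k-1} a_k V_{k-1}
    return V + letters[N - 1]            # period of W_N = V_{N-1} a_N

w = period_W(4)
print(w)                                          # abacabad
print({a: w.count(a) / len(w) for a in set(w)})   # a:0.5, b:0.25, c:0.125, d:0.125
```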
Lemma 2.26. Let W be balanced with rates In
particular, this means that
Proof. If W is not periodic, Theorem 2.17 says that W is composed of two constant gap
sequences. At least one of these sequences has at least two letters, and therefore two letters
have rates which are equal by Lemma 2.12. Therefore, the rates in W of these two letters
are also equal.
Lemma 2.27. Let W be balanced with rates with the following property:
for any 1 6 i 6 N , there exists two consecutive letters ia i j with no a j in between, with
This lemma is a partial ionly ifj result for the conjecture.
Proof. The proof holds by induction. Let V k denote the period of the balanced sequence
with rates (2 2.25. We
recall that according to the construction in the proof of lemma 2.25, We
will prove by induction that W is periodic with period VN .
We -rst prove by induction on k that W contains the word V
that each letter in W , ia j
Now, for the -rst step of the induction note that according to the property on
W , W contains the word ia 1 a 1 j which is the same as iV 1 V 1 j. Therefore any other letter
is surrounded by two ia 1 js. This also implies the existence of ia 1 a 2 a 1 a 1 a 2 a 1 j by using a
similar argument as step 2 in the proof of Theorem 2.20, which ends the case
For the general case, by the induction assumption, a k is surrounded by V k\Gamma2 , and we
have the word iV k\Gamma2 a k V k\Gamma2 j. The existence of iV k\Gamma2 a proves that this
word is surrounded with two ia js. Therefore, two consecutive a k form the word
where U does not contain any letter ia j j, j ? k. If U contains ia then U is
iV k\Gamma2 a j. This is impossible because of the existence of ia
U contains any other letter ia i j, then U is reduced to this letter, and by construction of
the presence of ia contradicts the existence of ia Therefore, U is
empty and we have the second part of the proof.
Now, we -nish the proof by noticing that the letter aN is surrounded by VN \Gamma1 and by
noting that VN \Gamma1 is necessarily surrounded by ia N j.
Lemma 2.28. Assume that W is balanced. Assume that p a ? 0:5. Then the projection
W 0 of W over the alphabet A \Gamma fag is also balanced.
Proof. Choose two words
2 of length n in W 0 . Let V 1 and V 2 be any two words in
W whose projections over the alphabet A \Gamma fag are V 0
furthermore, that the -rst and last letters in V 1 and V 2 are not a. Let
denote the number of appearances of the letter a in V 1 and V 2 , resp.
then the dioeerence in the number of occurrences of any letter b in V 0and in V 0
2 is at most 1, since W is most regular, and since the number of b's in V 1 (resp.
is the same as its number in V 0
Step 2: Assume that l ? k + 1.
2 be the word obtained from V 2 by truncating the -rst and last letter. Then
and the number of a's in -
1 be the word obtained from V 1 by adding to it the next letters that
appears after V 1 in the sequence W . Then j -
and the number of a's
in -
1 is not larger than 2. This is a contradiction with the fact that W is
balanced.
Step 3: It remains to check l 1. Add to V 1 the next letter that occurs in W to its
right, to form the new word V 1 . If it is not a then we have two successive letters that are
not a, which contradicts the fact that a has an asymptotic frequency of at least 1/2. If it
is a, then V 1 and V 2 have the same number of a's. We can now apply the same argument
as in step 1 and conclude that the number number of occurrences of any letter b in V 0
1 and
in V 0
2 is at most 1.
Combining the above steps, we conclude that W 0 is balanced.
2.9 Extensions of the original problem
So far we have only analyzed the case where all the rates add up to one. The dioeerent
results tend to prove that very few rates are balanced.
Now let us look at a generalization when all the rates do not add up to one. Assume
that S is a sequence on the alphabet \Lambdag. We only require that S is balanced
for the letters a but not for the special letter .
On a more practical point of view, the question can be viewed as whether this allows
more possibilities for rates to be balanced when ilossesj are allowed (represented by the
letter ). Then again, in general, the rates are not balanced, even if the total sum is very
small as illustrated by the following lemma.
Lemma 2.29. For an arbitrary " ? 0, there exists two real numbers p 1 and p 2 such that
no sequence S on the alphabet fa; b; \Lambdag with asymptotic rate p 1 for letter a
and p 2 for letter b which is balanced for a and b.
Proof. Choose two irrational numbers p 1 and p 2 with
are not linearly dependent on Z. Now assume that there exists a sequence S on fa; b; \Lambdag
with asymptotic rate p 1 for letter a and p 2 for letter b which is balanced for a and b. By
Theorem 2.5, then there exists two real numbers x; y such that ffi a
otherwise. In the cube [0; , the set of points (x; dense (see for
example, Weyl's ergodic theorem [19]) and therefore hits the rectangle [(1\Gammap
This is not possible.
More on this kind of problems can be found in [21].
To end this short overview on balanced sequences, we must mention on the positive
side that iusualj rates, such as (1=k; 1=k; are often balanceable. In Appendix 4.3,
some examples of balanced sequences and their rates are given.
3 Routing of customers in multiple queues
The notion of balanced sequences gives birth to an large set of elegant properties in word
combinatorics. However, they are rarely used in other domains. The balanced sequences
in dimension used for discrete line drawing [14] and also for scheduling
optimization in [10, 8].
However, the case dioeerently than for higher dimension (as illustrated
by theorem 2.18) and does not really grasp all the complexity of the model.
Here, we present an application of balanced sequences in arbitrary dimensions to
scheduling optimization.
We consider a system where a sequence of tasks have to be executed by several processing
units. The tasks arrive sequentially and each task can be processed by any server. The
routing control consists in assigning to each task a server on which it will be processed.
The routing is optimal it it minimizes some cost function that measures the performance
of the system.
These kinds of models have used to study load balancing within several processors in
parallel processing problems as well as for eOEcient network utilization in telecommunication
systems.
3.1 Presentation of the model
In this section we consider a more precise queueing model of the system that we described.
Customers enter a multiple queue system composed of K nodes. Each node is made of
several queues which form an event graph. Event graphs are a subset of Petri nets with no
more than one input transition and one output transition per place. More details on the
dynamics of event graphs can be found in [4]. In particular, their dynamical behavior is
linear in the so-called (max,+) algebra ([4]). Many queueing networks can be represented
as event graphs, as long as there is no inside choice for the route followed by the customers.
(see [1] for a more complete discussion on this issue).
The routing of customers to the different nodes is controlled by a sequence of vectors {a_n}, with a_n ∈ {0, 1}^K, where a_n^i = 1 means that the n-th customer is routed to node i. Note that a is a feasible admission sequence as long as, for all n, Σ_i a_n^i = 1.
The link between a feasible routing policy and an infinite sequence on a finite alphabet comes from choosing the alphabet A composed of the letters e_1, ..., e_K, where e_i is the routing vector sending a customer to node i. Using this alphabet on K letters, a feasible routing policy can be viewed as an infinite sequence on A.
Figure 1 shows an illustration of the system we are considering.
We denote by T_n the epoch when the n-th customer enters the system. The inter-arrival time sequence is {δ_n}, with δ_n = T_{n+1} − T_n. Finally, σ_n^{i,j} will denote the service time of the n-th customer entering the j-th queue in node i.
The sequences {δ_n} and {σ_n^{i,j}} will be considered as random processes. We also make stochastic assumptions on these sequences. The inter-arrival times of the customers and the service times form stationary processes, and we assume that the inter-arrival times are independent of the service times.
(Figure 1: Illustration of the routing of customers in a K-node system.)
3.2 Optimal admission sequence
In each node i we pick an arbitrary server s_i (which may be the last server in the node, for example). The performance criterion for node i will be the traveling time to server s_i
of a virtual customer that would enter node i at time T n . Under the routing policy a,
this quantity only depends on the values of the n first routing choices. From the routing
sequence a, we can isolate the routing decision for node i: if a_n^i = 1, then the customer is
admitted in node i, and if a_n^i = 0, then the customer is rejected (for node i). We denote
the traveling time at time T_n by W_n^i. We will be more particularly interested
in the expected value of the traveling time, with respect to the service times in all the
servers contained in node i and with respect to the inter-arrival times, composed with a
convex increasing function.
If we focus on a single node i, the function W_n^i(a_1^i, …, a_n^i) has been studied in [1]. Its
most remarkable property is the fact that this function is multimodular. See [10, 2] for a
precise definition and several properties of multimodular functions. Here, we merely point
out that multimodularity is closely related to convexity (see [2]).
Proposition 3.1 ([1]). Under the foregoing assumptions, the function W_n^i
is multimodular and increasing in a^i.
Proof. These properties are shown in [1].
Using these properties, one can derive, as in [2], a lower bound B_i(α, p) (which is increasing
in α and p, and continuous) on the α-discounted cost of any routing a^i.
Also, for a given p, we define the regular sequence with rate p and arbitrary phase φ,
a^p(φ) (see Definition 2.4). One can show, as in [2], that a^p(φ) achieves this lower bound
in the Cesàro limit as m → ∞ (Equation (5)).
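For concreteness, here is a small sketch of how such a regular (bracket) sequence can be generated. It assumes the common convention a_n = ⌊np + φ⌋ − ⌊(n−1)p + φ⌋, which may differ in minor details from the paper's Definition 2.4.

```python
from math import floor

def regular_sequence(p: float, phase: float, n_terms: int):
    """Bracket (regular) 0/1 sequence with rate p and the given phase.

    Uses the convention a_n = floor(n*p + phase) - floor((n-1)*p + phase);
    the paper's Definition 2.4 may use a slightly different (e.g. ceiling) convention.
    """
    return [floor(n * p + phase) - floor((n - 1) * p + phase) for n in range(1, n_terms + 1)]

# Example: rate 2/5 -- every window of 5 consecutive slots contains exactly two 1s.
print(regular_sequence(2 / 5, 0.0, 15))  # [0, 0, 1, 0, 1, 0, 0, 1, 0, 1, ...]
```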
Here, however, we are interested in the performance of all nodes together. Therefore,
we choose as a cost function, the undiscounted average on n of some linear combination of
the expected traveling time in all nodes.
Let h be any increasing linear function. We consider the undiscounted
average cost of a feasible routing sequence a, denoted g(a).
From this point on, we mimic the general method developed in [2] for our case.
Our objective is to minimize g(a).
Theorem 3.2. The following lower bound holds for all feasible policies a:
g(a) ≥ inf_p h(B_1(p_1), …, B_K(p_K)),
where the infimum is taken over rate vectors p with p_i ≥ 0 and Σ_i p_i = 1.
Proof. By Littlewood's and Jensen's inequalities, together with Equation (5), the α-discounted
cost of any feasible policy a is bounded from below by h(B_1(α, p_1^α), …, B_K(α, p_K^α)),
for suitable rates p^α that depend on the discount factor α.
We note that 0 ≤ p_i^α ≤ 1. Hence, one may choose a sequence of discount factors α → 1 such that the
following limits exist:
p_i = lim_{α→1} p_i^α,
with 0 ≤ p_i ≤ 1. From the continuity of B_i(α, p) in p and α, and letting α → 1, we get from (8)
the stated lower bound.
Note that there exists some p* that achieves the infimum,
since h(B_1(p_1), …, B_K(p_K)) is continuous in p.
Consider the routing policy a^{p*}(φ) given, for each i, by the regular sequence
a^{i,p*}(φ) with rate p_i* and phase φ_i.
There are some p*'s for which the condition of feasibility of the policy a^{p*}(φ) is satisfied,
that is, there exists some phase φ such that the regular policy a^{p*}(φ) is feasible.
Using the correspondence between a routing policy and a sequence on the alphabet A,
these p*'s correspond precisely to balanceable rates.
Theorem 3.3. Assume that p* is balanceable. Then a^{p*}(φ) is optimal for the average cost,
i.e., it minimizes g(a) over all feasible policies.
Proof. The proof follows directly from Theorem 3.2 and Equation (6).
4 Study of some special cases
The problem which remains to be addressed is to find in which cases the rate vector p* is
balanceable. We will present several simple examples for which we can make sure that the
optimal rate p* is balanceable.
4.1 The case K = 2
If the optimal rate vector is of the form p* = (p, 1 − p), then Theorem 2.18
says that p* is always balanceable and therefore the optimal routing sequence is given by
an associated balanced sequence. Note that this approach does not give any direct way to
compute the value of p*; however, it gives the structure of the optimal policy.
Figure 2: Routing in homogeneous queues.
4.2 The homogeneous case
Now let K be arbitrary and each node is made of a single server, all servers being identical.
This model is displayed in Figure 2.
Also assume that the function h is symmetric in all coordinates (for example, simply
the sum of all waiting times). By symmetry and convexity in (p_1, …, p_K) of the function
h(B_1(p_1), …, B_K(p_K)), the optimal rate vector is p* = (1/K, …, 1/K), which is balanceable. The associated
balanced sequence is the round robin routing scheme. Applying Theorem 3.3 yields the
following result, which is new (to the best of the authors' knowledge).
Theorem 4.1. The round robin routing to K identical ·/G/1 queues minimizes the total
average expected workload of all the queues over all admission sequences with no information
on the state of the system.
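As an illustration of Theorem 4.1 (not a substitute for its proof), the following sketch compares round robin with i.i.d. Bernoulli routing to K identical single-server queues. The exponential inter-arrival and service distributions and the roughly 70% load are arbitrary choices made only for this example.

```python
import random

def mean_wait(route, K, n, seed=0):
    """Mean waiting time of n customers routed to K identical FIFO single-server queues.

    Inter-arrival times are exponential with mean 1, service times exponential with
    mean 0.7*K (so each of the K queues is loaded at roughly 70%); route(t) gives the
    queue index for customer t.  The per-queue workload V[i] drains at unit rate
    between arrivals, which is all we need to read off the waiting times.
    """
    rng = random.Random(seed)
    V = [0.0] * K                                 # remaining workload in each queue
    total = 0.0
    for t in range(n):
        dt = rng.expovariate(1.0)
        V = [max(0.0, v - dt) for v in V]         # workloads drain until the next arrival
        i = route(t)
        total += V[i]                             # this customer waits V[i]
        V[i] += rng.expovariate(1.0 / (0.7 * K))  # then adds its own service time
    return total / n

K = 4
rr = lambda t: t % K                                   # round robin (balanced word)
bern = lambda t, r=random.Random(1): r.randrange(K)    # i.i.d. uniform Bernoulli routing
print(mean_wait(rr, K, 200_000), mean_wait(bern, K, 200_000))  # round robin should be smaller
```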
In [12], the round robin routing is proved to be optimal in separable-convex increasing
order for K identical ./GI/1 queues. Their method uses an intricate coupling argument,
whereas our proof is a simple corollary of the general theory on multimodular functions.
To illustrate the advantage of our approach, we further generalize the result to a system
composed of K identical (max,+) linear systems with a single entry. In this case, the
symmetry argument used in the case of simple queues still holds. Then again, the round
robin routing policy minimizes the traveling time in each system. This case includes models
such as routing among several identical systems composed of queues in tandem, for example
(see Figure 3).
4.3 Two sets of identical servers
As a consequence of the two previous cases, we can consider a system composed of K_1
identical queues of type 1 and K_2 queues of type 2. Again, assume that h is symmetric in
the K_1 nodes of type 1 and symmetric in the K_2 nodes of type 2. Then, by symmetry
arguments, the optimal rate vector p* is of the form (p, …, p, q, …, q), with a common rate p
for the type-1 nodes and a common rate q for the type-2 nodes.
Figure 3: Routing in queues in tandem
This rate vector is indeed balanceable. This implies that for the weighted total average
expected workload, the optimal routing is of balanced type, if nodes of the same type have
the same weight.
Many other examples of this kind can be derived from these examples through similar
constructions.
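The construction behind this two-type case (detailed again in the Appendix) can be sketched as follows; the node numbering 1..K_1 and K_1+1..K_1+K_2 is only a convention chosen for the example.

```python
def two_type_routing(S, K1, K2):
    """Routing word for K1 type-1 and K2 type-2 nodes, following the Section 4.3 recipe.

    S is a balanced word on the letters 'A' (type 1) and 'B' (type 2); every 'A' is
    replaced by the type-1 nodes 1..K1 in round-robin order and every 'B' by the
    type-2 nodes K1+1..K1+K2 in round-robin order.
    """
    i = j = 0
    out = []
    for letter in S:
        if letter == "A":
            out.append(1 + (i % K1)); i += 1
        else:
            out.append(K1 + 1 + (j % K2)); j += 1
    return out

# Rates (1/2, 1/2) on (A, B) with K1 = 1, K2 = 2 give rates (1/2, 1/4, 1/4):
print(two_type_routing("AB" * 6, K1=1, K2=2))  # [1, 2, 1, 3, 1, 2, 1, 3, ...]
```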
Appendix
Here is a collection of balanceable rates. We also give a corresponding balanced sequence.
• (1/7, 2/7, 4/7) is balanceable.
• (1/11, 2/11, 4/11, 4/11) is balanceable.
• (1/11, 2/11, 2/11, 6/11) is balanceable.
• (1/11, 1/11, 3/11, 6/11) is balanceable.
• (1/14, 1/14, 4/14, 8/14) is balanceable.
• For all real numbers p in [0, 1], the rates (p/2, p/4, p/4, 1 − p) are balanceable, with
a corresponding balanced sequence constructed from a regular sequence with rate p
where the 1s are replaced in turn by the letters of (abac)^ω and each 0 by the letter d.
• For all N, … are balanceable. The
associated balanced sequence is constructed recursively as in Lemma 2.25.
• A balanced sequence with rates (α/K_1, …, α/K_1, β/K_2, …, β/K_2) is constructed
in the following way: choose a balanced sequence S on two letters (A, B) with
rate (α, β). In S, replace all the A's (resp. B's) by a_1, …, a_{K_1} (resp. b_1, …, b_{K_2})
in a round robin fashion to get a balanced sequence with the required
rates.
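A direct way to check the balance property of the (periodic) words listed above is sketched below. The cyclic-window test is a generic check, not a procedure taken from the paper.

```python
def is_balanced(word, max_window=None):
    """Check the balance property on a finite word, viewed as one period of a cyclic sequence.

    A sequence is balanced if, for every letter and every pair of factors of equal
    length, the numbers of occurrences of that letter differ by at most 1.  For a
    periodic sequence it suffices to test windows up to one period length.
    """
    n = len(word)
    max_window = max_window or n
    doubled = word + word                     # emulates the cyclic (periodic) sequence
    for w in range(1, max_window + 1):
        for a in set(word):
            counts = {doubled[i:i + w].count(a) for i in range(n)}
            if max(counts) - min(counts) > 1:
                return False
    return True

# The round-robin word for three identical nodes is balanced, with rates (1/3, 1/3, 1/3):
print(is_balanced("abc"))   # True
# This word is not: 'a' occurs 0 and 2 times in factors of length 2.
print(is_balanced("aabb"))  # False
```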
Acknowledgment
The authors would like to thank Alain Jean-Marie, who pointed out the reference [23] and
gave them a first proof of Lemma 2.12.
--R
Admission control in stochastic event graphs.
Multimodularity, convexity and optimization properties.
Complexity of sequences defined by billiards in the cube.
Complexity of trajectories in rectangular billiards.
Journal of Combinatorial Theory
Optimal allocation sequences of two processes sharing a resource.
Covering the positive integers by disjoint sets of the form f
Extremal splittings of point processes.
Optimal load balancing on distributed homogeneous unreliable processors.
Markov Decision Chains with Partial Information.
Mots, chapter Trac
Disjoint sequences generated by the bracket function i-vi
On eventually covering families generated by the bracket function i-v
Symbolic dynamics II: Sturmian trajectories.
Roots of unity and covering sets.
Introduction to Ergodic Theory.
On complementary triples of sturmian sequences.
On disjoint pairs of sturmian bisequences.
Combinatoire des motifs d'une suite sturmienne bidimensionnelle.
Academic Press
--TR
Generating functionology
Optimization of static traffic allocation policies
Combinatorial properties of sequences defined by the billiard in paved triangles
On the pathwise optimal Bernoulli routing policy for homogeneous parallel servers
Combinatorics of patterns of a bidimensional Sturmian sequence.
Minimizing service and operation costs of periodic scheduling
Optimal Allocation Sequences of Two Processes Sharing a Resource
Optimal Load Balancing on Distributed Homogeneous Unreliable Processors
--CTR
N. Brauner , Y. Crama, The maximum deviation just-in-time scheduling problem, Discrete Applied Mathematics, v.134 n.1-3, p.25-50, 05 January 2004
Berth , Robert Tijdeman, Balance properties of multi-dimensional words, Theoretical Computer Science, v.273 n.1-2, p.197-224, February 2002
Berth , Robert Tijdeman, Lattices and multi-dimensional words, Theoretical Computer Science, v.319 n.1-3, p.177-202, June 10, 2004
Bruno Gaujal , Emmanuel Hyon , Alain Jean-Marie, Optimal Routing in Two Parallel Queues with Exponential Service Times, Discrete Event Dynamic Systems, v.16 n.1, p.71-107, January 2006
Shinya Sano , Naoto Miyoshi , Ryohei Kataoka, m-Balanced words: a generalization of balanced words, Theoretical Computer Science, v.314 n.1, p.97-120, 25 February 2004
Arie Hordijk , Dinard Van Der Laan, NOTE ON THE CONVEXITY OF THE STATIONARY WAITING TIME AS A FUNCTION OF THE DENSITY, Probability in the Engineering and Informational Sciences, v.17 n.4, p.503-508, October
Arie Hordijk, COMPARISON OF QUEUES WITH DIFFERENT DISCRETE-TIME ARRIVAL PROCESSES, Probability in the Engineering and Informational Sciences, v.15 n.1, p.1-14, January 2001
Eitan Altman , Bruno Gaujal , Arie Hordijk, Optimal Open-Loop Control of Vacations, Polling and Service Assignment, Queueing Systems: Theory and Applications, v.36 n.4, p.303-325, December 2000
Boris Adamczewski, Balances for fixed points of primitive substitutions, Theoretical Computer Science, v.307 n.1, p.47-75, 26 September
Raphael Rom , Moshe Sidi , Hwee Pink Tan, Design and analysis of a class-aware recursive loop scheduler for class-based scheduling, Performance Evaluation, v.63 n.9, p.839-863, October 2006 | optimal control;stochastic event graphs;multimodularity;balanced sequences |
347550 | New Methods for Estimating the Distance to Uncontrollability. | Controllability is a fundamental concept in control theory. Given a linear control system, we present new algorithms for estimating its distance to uncontrollability, i.e., the norm of the normwise smallest perturbation that makes the given system uncontrollable. Many algorithms have been previously proposed to estimate this distance. Our new algorithms are the first that correctly estimate this distance at a cost polynomial in a dimension of the given system. We report results from some numerical experiments that demonstrate the reliability and effectiveness of these new algorithms. | Introduction
One of the most fundamental concepts in control theory is that of controllability. A matrix pair
n\Thetan \Theta C n\Thetam is controllable (see Kailath [20, pages 85-90]) if the state function
in the linear control system
can be directed from any given state to a desired state in finite time by an input
could signal fundamental trouble with the control model or the underlying physical
system itself (Byers [11]).
A large number of algebraic and dynamic characterizations of controllability have been given
(Laub [21], for example). But each and every one of these has difficulties when implemented in
finite precision (Patel, Laub, and Van Dooren [27, page 15]). For instance, it is well known that
(A; B) is controllable if and only if
where C is the set of complex numbers. However, it is not clear how to numerically verify whether a
system is controllable through (1.2). More critically, equation (1.2) does not provide any means to
detect systems that are "nearly" uncontrollable, systems that could be equally troublesome. From
these considerations, it became apparent (see Laub [21] and Paige [26]) that a more meaningful
Department of Mathematics, University of California, Los Angeles, CA 90095-1555. This research was supported
in part by NSF Career Award CCR-9702866 and by Applied Mathematical Sciences Subprogram of the Office of
Energy Research, U.S. Department of Energy under Contract DE-AC03-76SF00098.
measure is the distance to uncontrollability, the norm distance of the pair (A; B) from the set of all
uncontrollable pairs:
It was later shown by Eising [15, 16] that
where oe n (G) denotes the n-th singular value of G 2 C n\Theta(n+m) . Demmel [12] relates ae(A; B) to the
sensitivity of the pole-assignment problem.
Many algorithms have been designed to compute ae(A; B). However, the function to be minimized
in (1.4) is not convex and may have as many as n or more local minima. It is not clear
just how many local minima there are for any given problem (Byers [11]). Methods that search
for a local minimum tend to be efficient but have no guarantee of finding ae(A; B) with any accu-
racy, since ae(A; B) is the global minimum (Boley [4, 6], Boley and Golub [5], Boley and Lu [7],
Byers [11], Elsner and He [17], Miminis [24], and Wicks and DeCarlo [31]); and methods that
search for the global minimum (Byers [11], Gao and Neumann [18], and He [19]) sometimes do have
this guarantee, but require computing time that is inverse proportional to ae 2 (A; B), prohibitively
expensive for nearly uncontrollable systems, the kind of systems for which computing ae(A; B) is
important. While the backward stable algorithms of Beelen and Van Dooren [2, 3, 30] and Demmel
and K-agstr-om [13, 14] are efficient and very useful for detecting uncontrollability, they often fail to
detect near-uncontrollability.
In this paper, we propose new methods to correctly estimate ae(A; B) to within a factor of 2.
They are based on the following bisection method:
Algorithm 1.1 Bisection Method.
while
endwhile
The bisection idea was used to compute the distance of a stable matrix to the unstable matrices
(Byers [10]). It was then used to compute the L1 norm of a transfer matrix (Boyd, Balakrishnan
and Kabamba [9]); a quadratically convergent version of this later method was developed by Boyd
and Balakrishnan [8].
There were past attempts to use Algorithm 1.1 to estimate ae(A; B) as well [11, 18]; but they
have resulted in potentially prohibitively expensive algorithms. The critical difference between our
new approach and earlier attempts lies in how to numerically verify whether ffi - ae(A; B). Our new
approach is based on a novel verifying scheme (see Section 3.2). Paralleling the development of
Boyd and Balakrishnan [8], we have also developed a generally quadratically convergent version of
Algorithm 1.1. With very little modification, our new methods can be used to detect the uncontrollable
modes for any given tolerance. The knowledge of such modes is essential if one wishes to
remove them from the system.
Complexity-wise, these new algorithms differ from previous algorithms in that they are the first
algorithms that correctly estimate the distance at a cost polynomial in the matrix size. In fact,
they require O(n 6 ) floating pointing operations. The main cost of these new algorithms is the
computation of some eigenvalues of certain sparse generalized eigenvalue problems of size O(n 2 ).
In x2 we review methods of Byers and Gao and Neumann to minimize the function in (1.4)
when - is restricted to a straight line on the complex plane. In x3 we present our new methods
to minimize the function in (1.4) over the entire complex plane. In x4 we present some numerical
results, and in §5 we draw conclusions and discuss open questions.
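As a rough illustration of how Algorithm 1.1 is organized, the following sketch wraps the bisection loop around a verification predicate. Here certify(delta) is a hypothetical placeholder for the test of Section 3.2 (deciding whether δ ≥ ρ(A, B)); it is not implemented in this sketch.

```python
import numpy as np

def distance_bisection(A, B, certify, tol=1e-12):
    """Sketch of the bisection loop of Algorithm 1.1.

    certify(delta) is a placeholder for the verification test of Section 3.2 and is
    assumed to return True iff delta >= rho(A, B).  The starting value sigma_n([A, B])
    is a valid upper bound on the distance, since lambda = 0 is admissible in (1.4).
    """
    n = A.shape[0]
    delta = np.linalg.svd(np.hstack([A, B]), compute_uv=False)[n - 1]  # sigma_n([A, B])
    while delta > tol and certify(delta):   # still at or above the distance ...
        delta /= 2.0                        # ... so halve the estimate
    return delta                            # rho(A, B)/2 <= delta <= rho(A, B), up to the tol safeguard
```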
2 Minimization Methods over a Straight Line
ih
where - is a real variable. To motivate our new methods to minimize the function in (1.4) over
the entire complex plane, in this section we present Algorithm 2.1 below to estimate the global
minimum of g(-) to within a factor of 2 for a given complex number - 0 and a real angle '. This
algorithm is a variation of the bisection schemes of [11, 18], which can actually compute the global
minimum. Let - be a global minimizer.
Algorithm 2.1 Bisection Method over a Straight Line
while
endwhile
We will also discuss a quadratically convergent version of Algorithm 2.1 in x2.2.
2.1 The Bisection Method over a Straight Line
Let g(- What is missing in Algorithm 2.1 is a scheme to numerically verify whether
We discuss such a scheme below. Different versions of it were developed
in Byers [11] and Gao and Neumann [18], and were based on earlier work of Byers [10].
We assume that ffi - g(- ). Since g(-) is a continuous function with
lim
it follows that there exist at least two solutions 1 to the equation By the definition of
singular values, this implies that there exist non-zero vectors
x
y
and z such that
I B
x
y
A \Gamma
I
x
y
has a double root at - .
These equations can be rewritten asB B @
I B
A \Gamma
I \Gammaffi I 0
z
x
To simplify (2.2), we QR-factorize
\Gammaffi I
!/
and define /
z
y
These relations and equation (2.2) imply that R z must be non-singular. It
follows that z Hence equation (2.2) is reduced to@ A \Gamma
I
\Gammaffi I
A \Gamma
x
which can be further rewritten as
\Gammaffi I
A
!/
x
e i' I 0
!/
x
Since Q 12 is part of the Q factor in the QR factorization (2.3), it follows that
22
which imply
Hence Q 12 is non-singular and the pencil in (2.4) is regular. It is now easy to show that condition
only if the matrix pencil in (2.4) has a real eigenvalue - .
To verify whether ffi - g(- ) in Algorithm 2.1, we compute the eigenvalues of the pencil in (2.4).
If this pencil has real eigenvalues, then
guarantees that 2ffi - g(- ) from the previous bisection step, the value of ffi after Algorithm 2.1 exits
from the while loop must satisfy
We note that equation (2.2) was not reduced in [11], making it more time consuming to verify
whether It was reduced to a regular eigenvalue problem by solving for y in [18], but the
reduction appears to be less numerically reliable than our reduction to (2.4).
2.2 A Quadratically Convergent Variation
Boyd and Balakrishnan [8] note that in the context of computing the L1 -norm of a transfer matrix,
the function to be minimized is in general approximately quadratic near the maximum, and they
used this fact to design a quadratically convergent variation of a bisection method for computing
the L1 -norm.
Their idea applies equally well in minimizing (2.1). If enough and if - 1 and - 2
are two roots of closest to - , then arguments similar to those of [8] show that (- 1
is in general a much better approximation to - . We summarize this algorithm below.
Algorithm 2.2 Quadratically Convergent Variation of Algorithm 2.1
while
Choose two real eigenvalues - 1 and - 2 of the pencil (2.4).
ae
/2 .
endwhile
With arguments similar to those of [8], it is easy to show that if - 1 and - 2 are chosen correctly, then
This estimate holds even if g(-) is not approximately quadratic near - . We caution that strictly
speaking Algorithm 2.2 is not even asymptotically quadratically convergent, since it terminates
as soon as it has found a ffi that satisfies (2.5). Nevertheless, relation (2.6) does indicate rapid
convergence of Algorithm 2.2 when g(- ) is tiny.
3 Minimization Methods over the Complex Plane
Now we discuss methods to minimize the function in (1.4) over the entire complex plane. Let
One such method is Algorithm 1.1 discussed in x1. As in Algorithm 2.1, we need to develop a
scheme to verify whether ffi ? ae(A; B) in order to complete Algorithm 1.1. To do so, we first prove
a fundamental theorem in x3.1; we then provide such a scheme in x3.2; and finally we develop a
generally quadratically convergent version of Algorithm 1.1 in x3.3.
3.1 A Fundamental Theorem
Our scheme to verify whether ffi ? ae(A; B) is based on Theorem 3.1 below.
Theorem 3.1 Assume that ffi ? ae(A; B). Then there are at least two pairs of real numbers ff and
fi such that
denotes a singular value of G.
Proof: From standard perturbation theory we have
Hence f(ff; fi) goes to infinity if jff does. It is well-known that f(ff; fi) is a continuous function
of ff and fi. Consequently the fact that ffi ? ae(A; B) immediately implies that there exists a pair of
From the definition of singular values, ff and fi satisfy
if and only if they satisfy the algebraic equation
det
It follows from f(ff 1 that this algebraic equation has at least one solution; and it follows
from (3.3) that all its solutions are finite. Consequently, these solutions form a finite number of
closed (continuous) algebraic curves on the ff-fi plane.
Now we claim that the point (ff ; fi ) must be in the interior of one of these closed curves. In
fact, if this is not the case, then there exists a continuous curve - 1 (- 2 (-)) on the ff-fi
plane that does not intersect with any of these algebraic curves but "connects" (ff ; fi ) and infinity:
In other words,
It follows from the continuity argument that there exists a - 1 such that f (- 1 (- 1
this contradicts the assumption that the curve -) does not intersect with any of the algebraic
curves. Consequently, the point (ff ; fi ) must be in the interior of one of these closed curves. Among
all closed curves that have (ff ; fi ) in their interior, let G denote the one that covers the smallest
area.
It follows from the same continuity argument that there exist two points
on G with In other words,
This fact has been shown in Byers [11] and Gao and Neumann [18].
For simplicity, we assume that P 1 and P 2 are chosen so that j 1 and j 2 are the smallest positive
numbers.
Since the point (ff ; fi ) is in the interior of G and also lies strictly inside the line segment between
that any point that lies strictly inside this line segment is in the interior of
curve G. Combining (3.4) with relation (3.3), we get
Now we shift all the points on G horizontally by the same amount \Gammaj to get a closed curve
is a point on G.g
Since P 2 is a point on G, b
a point on b
G. Assume that
Then relation (3.5) implies that b
is a point that lies strictly inside the line segment between P 1
and P 2 . Hence b
is in the interior of curve G. Let P be the leftmost point on G. Then
is the leftmost point on b
G. we have that b
P 3 is in the exterior of G.
In other words, we have found a point b
that is on b
G and in the interior of G, and another
point b
P 3 on b
G that is in the exterior of G. Since G and b
G are continuous closed curves, we conclude
that these two curves intersect.
intersecting point. It follows that both (ff 4 must be
points on G. Hence ff 4 and fi 4 are a solution to (3.2). Therefore equations (3.2) have at least one
solution.
In the following argument we assume that (ff 4 ; fi 4 ) is the only intersecting point of G and b
G. Let
G 1 denote the set of points on b
G that are either on G or in the interior of G and let G 1 denote the
corresponding set of points on G. If b
G 1 is not a closed curve itself, then b
must be an open curve
with one end point on G and the other in the interior of G. It follows that b
must
have positive arclength. Hence the portion of G without G 1 is a closed curve. But this contradicts
the way G is constructed. This contradiction implies that b
must be closed curves
themselves. Let b
denote the set of points on b
G that are either on G or in the exterior of G and
let G 2 denote the corresponding set of points on G. A similar argument shows that G 2 must be a
closed curve as well.
By construction, b
do not share any common region with positive
area. Hence (ff ; fi ) can only be in the interior of one of these closed curves, this implies that G
is not a closed curve that has (ff ; fi ) in its interior and covers the smallest area, a contradiction
to the way G was constructed. This contradiction is the result of the assumption that G and b
G
intersect only once. Hence G and b
G must intersect at least twice, so equations (3.2) must have at
least two real solutions. By a continuity argument, equations (3.2) have two, possibly identical, real
solutions, even if
3.2 A New Verifying Scheme
In the following we consider how to numerically verify whether equations (3.2)
have a real solution. By the definition of singular values, equations (3.2) imply that there exist
non-zero vectors
x
y
x
y
, and b
z such that
x
y
x
y
x
x
y
These equations can be rewritten asB @
A \Gamma ffI \Gammaffi I 0
z
x
z
x
and 0
z
x
z
x
In the QR factorization (2.3), define
z
y
and
b z
These relations and equations (3.6) and (3.7) imply that R z
non-singular, it follows that z Hence equations (3.6) and (3.7) are reduced to
\Gammaffi I
!/
x
I 0
!/
x
and /
\Gammaffi I
!/
x
I 0
!/
x
As shown in x2.1, Q 12 is always non-singular for Hence the matrix on the right hand sides of
both (3.8) and (3.9) is non-singular. In order for the two pencils defined in (3.8) and (3.9) to share
a common pure imaginary eigenvalue fii, the following matrix equation for X 2 R 2n\Theta2n
\Gammaffi I
I 0
I 0
\Gammaffi I
must have a non-zero solution. Partition
, this matrix equation becomes
12\Omega I 0
vec (X 22 )
vec (X 21 )C C C A ;
12\Omega
)\Omega I 0
I\Omega
\Omega I
12\Omega
12\Omega I \Gammaffi
In these
equations,\Omega is the Kronecker product and vec(G) is a vector formed by stacking the
column vectors of G.
To reduce (3.11) to a standard generalized eigenvalue problem, let
(R
be the RQ factorization of H,
; and define
Then the first equation in (3.11) reduces to setting the second equation in (3.11)
becomes
where
\GammaQ
12\Omega I 0I\Omega Q 12
Equation (3.13) is now a 2n 2 -by-2n 2 generalized eigenvalue problem. Hence we have reduced the
problem of finding a non-zero solution to (3.10) to the generalized eigenvalue problem (3.13).
To summarize, we have shown that in order for (3.2) to have at least one real solution (ff; fi),
both matrix pencils in (3.8) and (3.9) must share a common pure imaginary eigenvalue fii. This
requires that the matrix equation (3.10) must have a non-zero solution, which, in turn, is equivalent
to requiring that the generalized eigenvalue problem (3.13) have a real eigenvalue ff.
In order to verify whether ffi ? ae(A; B) in any bisection step of Algorithm 1.1, we set
in (3.2) and check whether the generalized eigenvalue problem (3.13) has any real eigenvalues ff.
If it does, we then check for each real ff whether the two matrix pencils in (3.8) and (3.9) share a
common pure imaginary eigenvalue fii. If they do for at least one ff, then we have found a pair of
ff and fi such that
On the other hand, if (3.13) does not have a real eigenvalue, or if the matrix pencils in (3.8) and (3.9)
do not share a common pure imaginary eigenvalue for any real eigenvalue of (3.13), then we conclude
by Theorem 3.1 that
On the other hand, Algorithm 1.1 guarantees that 2ffi - ae(A; B) from the previous bisection step.
Thus the value of ffi after Algorithm 1.1 exits from the while loop must satisfy
ae(A; B)
3.3 A Generally Quadratically Convergent Variation
Algorithm 1.1 converges linearly. Following Boyd and Balakrishnan [8] (see x2.2), we develop
a generally quadratically convergent version of Algorithm 1.1 in this section. In the following
development, we assume that f(ff; fi) is analytic in both ff and fi in a small neighborhood of
(ff ; fi ) so that f(ff; fi) permits the following expansions
@
@ff
where fl, -, and - are the relevent second-order partial derivatives at (ff ; fi ). We further assume
that the matrix \Gamma j
is positive definite. These expansions with a positive definite \Gamma
imply that (ff ; fi ) is at least a local minimum.
Now assume that ff and fi are such that f(ff It follows from (3.15) that
@
@ff
Expanding the partial derivative at (ff ; fi ) to get
where we have used the fact that @
@ff
be the two solutions to (3.2)
that are near (ff ; fi ) (see Theorem 3.1). It follows that ff i and fi i satisfy both (3.17) and (3.16)
2. Now let i 1;i and i 2;i denote the error terms in (3.17) and (3.16) for (ff
Consequently,
It follows from the first equation that
Plugging this into the second equation and simplifying:
where
O
(j
On the other hand, since \Gamma is assumed to be positive definite, equation (3.16) implies that
O(-). Furthermore, the choice of Algorithm 1.1 ensures that
The result can be rewritten as 3
+O(-) and (fi
Combining these equations,
Plugging this relation into (3.18) and combining the equations for
2 and fi new
It follows that
and that
f (ff new ; fi new
We note that this relation is very similar to (2.6). Now we modify Algorithm 1.1 to getThese relations hold as long as flj 2
=4 -. It is likely that under certain conditions, Theorem 3.1 holds for much
larger values of j as well.
Algorithm 3.1 Quadratically Convergent Variation of Algorithm 2.1
while
Choose two real solutions (ff
ae
/2 .
endwhile
In our implementation, we computed f
among adjacent pairs of real solutions
and chose the pair with smallest f value. We note that in both Algorithms 1.1 and 3.1, we
can compute a better initial guess ffi by using Algorithm 2.2 with some values of - 0 and ', such as
Algorithm 3.1 was derived under the assumptions at the beginning of x3.3, which need not
hold for all linear control systems of the form (1.1). Hence estimate (3.19) may not hold for
some linear control systems. However, it is clear that Algorithm 3.1 converges at least linearly to
arrive at an estimate ffi that satisfies (3.14). Similar to Algorithm 2.2, Algorithm 3.1 is not strictly
speaking quadratically convergent since it terminates as soon as it has found a ffi that satisfies (3.14).
Nevertheless, Algorithm 3.1 does converge much more rapidly than Algorithm 1.1 when ae(A; B) is
tiny. We discuss this point further in x4.
3.4 Further Considerations
Sometimes it may be more important to find the uncontrollable modes of (1.1) for a given tolerance
". In this case, we solve equations (3.2) with ". If there are no solutions to (3.2), then the
system (1.1) is controllable; otherwise, each solution to (3.2) corresponds to an uncontrollable mode.
Conversely, it is easy to see from the proof of Theorem 3.1 that any uncontrollable mode will result
in at least two solutions to (3.2). Hence the set of all solutions to (3.2) provide approximations
to the uncontrollable modes of (3.2). The formulas for ff new and fi new provide more accurate
approximations to these modes.
If ae(A; B) is very small, then small during the execution of Algorithms
1.1 and 3.1. In fact, for small enough j, the two different points ff + fii and ff
look identical. Hence the solutions to (3.2) are potentially ill-conditioned. See x4 for more details.
Like many other algorithms in engineering computations, such as those for semi-definite programming
[1, 25, 29], both Algorithms 1.1 and 3.1 are expensive for large problems, since both the
reduction to and the solution of the pencil (3.13) require O(n 6 ) floating point operations. However,
the eigenvalue problem (3.11) is highly sparse as a 4n 2 \Theta 4n 2 problem. It is likely that sparse matrix
computation technologies, such as the implicitly restarted Arnoldi iteration [22, 23, 28], can be used
to compute the real eigenvalues of (3.13) quickly. The effectiveness of this approach is currently
under thorough investigation.
4 Numerical Experiments
We have done some elementary numerical experiments with Algorithms 1.1 and 3.1. In this section
we report some of the results obtained from these experiments. The experiments were done in
matlab in double precision.
Matrices in Examples 2 through 5 were taken from Gao and Neumann [18]. These are systems
with small ae(A; B). Global optimization methods (Byers [11], Gao and Neumann [18], and He [19])
could require prohibitively expensive computation time to correctly estimate ae(A; B) in these cases.
On the other hand, both Algorithms 1.1 and 3.1 worked well on them, with Algorithm 3.1 converging
much faster than Algorithm 1.1 as expected.
Example 1. In this example we took A 2 R 5\Theta5 and B 2 R 5 to be random matrices. This is a
matrix pair with fairly large ae(A; B). Both Algorithms 1.1 and 3.1 took 2 iterations to terminate.
This example illustrates that for linear systems (1.1) that are far away from the set of uncontrollable
systems, both Algorithms 1.1 and 3.1 take very few iterations.
Example 2. In this example we took
This pair is uncontrollable since the smallest singular value of [A \Gamma (1 \Sigma 2i) I B] is zero. Algorithms
1.1 and 3.1 took 42 and 5 iterations, respectively, to find ae(A;
Example 3. In this example we took
\Gamma0:32616458 \Gamma0:09430266 0:05207847 \Gamma:08481401 0:05829280
0:01158922 \Gamma:39787419 \Gamma:14901699 \Gamma:01394125 \Gamma:10626942
0:05623810 \Gamma:03153954 \Gamma:50160557 \Gamma:05748511 \Gamma:00552321
iterations to return returned six distinct
solutions to (3.2): (ff;
and
On the other hand, Algorithm 3.1 took 4 iterations to return
returned one distinct solution (ff;
Example 4. In this example we took
\Gamma:22907968 0:08886286 \Gamma:18085425 \Gamma:03469234 \Gamma:32819211
\Gamma:02507663 :30736050 \Gamma:24819024 :21852948 \Gamma:06260819
iterations
to return returned four distinct
solutions
On the other hand, Algorithm 3.1 took 3 iterations to return
returned two distinct solutions (ff;
Example 5. In this example we took
\Gamma:27422658 \Gamma:21968089 \Gamma:21065336 \Gamma:22134064 0:19235875
\Gamma:07210867 :18848014 \Gamma:29068998 :28936270 0:10007703
\Gamma:03547166 :17931676 :14590007 :00556579 :38838791
\Gamma:07780546 \Gamma:29477373 :01366200 :32749991 \Gamma:0131683C C C C C A
iterations
to return returned 14 distinct solutions
to (3.2). On the other hand, Algorithm 3.1 took 2 iterations to return
for returned one distinct solution (ff;
Example 6. In this example we took the matrix pair (A; B) from Example 2, and set
where Q is a random orthogonal matrix. This new matrix pair is still uncontrollable. But Algorithm
1.1 took 28 iterations to return iterations
to return This example illustrates that both algorithms can have numerical
difficulties in correctly estimating ae(A; B) if it is very tiny.
5 Conclusions and Extensions
In this paper, we have presented the first algorithms that require a cost polynomial in the matrix
size to correctly estimate the controllability distance ae(A; B) for a given linear control system. And
we have demonstrated their effectiveness and reliability through some numerical experiments.
The biggest open question is how to further reduce the cost. At the core of these algorithms
is the computation of all real eigenvalues of a sparse 4n 2 \Theta 4n 2 eigenvalue problem. Currently, we
find these eigenvalues by treating the eigenvalue problem as a dense one, resulting in algorithms
that are too expensive for large problems. In the future, we plan to exploit the possibility of finding
these real eigenvalues via sparse matrix computation technologies, such as the implicitly restarted
Arnoldi iteration [22, 23, 28], to significantly reduce the computation cost;
Another open question is to better understand the effects of finite precision arithmetic on the
estimated distance ae(A; B). As we observed in x4, if ae(A; B) is very tiny, then the distance estimated
by the new algorithms in finite precision could be much larger than the exact distance.
Finally, the perturbation [\DeltaA; \DeltaB] in (1.3) can be complex even if both A and B are real. It is
known (Byers [11]) that the norm-wise smallest real perturbation can be much larger than ae(A; B).
Whether our new algorithms shed new light on the computation of the norm-wise smallest real
perturbation remains to be seen.
--R
An improved algorithm for the computation of Kronecker's canonical form of a singular pencil.
A class of staircase algorithms for generalized state space systems.
Computing the controllability/observability decomposition of a linear time-invariant dynamic system: a numerical approach
The Lanczos-Arnoldi algorithm and controllability
Computing rank-deficiency of rectangular matrix pencils
Measuring how far a controllable system is from uncontrollable one.
A regularity result for the singular values of a transfer matrix and a quadratically convergent algorithm for computing its L1
A bisection method for computing the H1 norm of a transfer matrix and related problems.
A bisection method for measuring the distance of a stable matrix to the unstable matrices.
Detecting nearly uncontrollable pairs.
On condition numbers and the distance to the nearest ill-posed problem
The distance between a system and the set of uncontrollable systems.
Between controllable and uncontrollable.
An algorithm for computing the distance to uncontrollability.
A global minimum search algorithm for estimating the distance to uncontrollability.
Estimating the distance to uncontrollability: A fast method and a slow one.
Linear Systems.
Survey of computational methods in control theory.
Analysis and Implementation of an Implicitly Restarted Arnoldi Iteration.
Deflation techniques for an implicitly restarted Arnoldi iteration.
Numerical algorithms for controllability and eigenvalue location
Properties of numerical algorithms related to computing controllability.
Numerical Linear Algebra Techniques For Systems and Control.
Implicit application of polynomial filters in a k-step Arnoldi method
The computation of Kronecker's canonical form of a singular pencil.
Computing the distance to an uncontrollable system.
--TR | QR factorization;controllability;complexity;numerical stability |
347551 | On the propagation of long-range dependence in the Internet. | This paper analyzes how TCP congestion control can propagate self-similarity between distant areas of the Internet. This property of TCP is due to its congestion control algorithm, which adapts to self-similar fluctuations on several timescales. The mechanisms and limitations of this propagation are investigated, and it is demonstrated that if a TCP connection shares a bottleneck link with a self-similar background traffic flow, it propagates the correlation structure of the background traffic flow above a characteristic timescale. The cut-off timescale depends on the end-to-end path properties, e.g., round-trip time and average window size. It is also demonstrated that even short TCP connections can propagate long-range correlations effectively. Our analysis reveals that if congestion periods in a connection's hops are long-range dependent, then the end-user perceived end-to-end traffic is also long-range dependent and it is characterized by the largest Hurst exponent. Furthermore, it is shown that self-similarity of one TCP stream can be passed on to other TCP streams that it is multiplexed with. These mechanisms complement the widespread scaling phenomena reported in a number of recent papers. Our arguments are supported with a combination of analytic techniques, simulations and statistical analyses of real Internet traffic measurements. | INTRODUCTION
Statistical self-similarity and long-range dependence are important
topics of recent research studies. Both phenomena
are related to certain scale-independent statistical proper-
ties. Statistical self-similarity can be detected when trafc
rate
uctuates on several timescales and its distribution
scales with the level of aggregation. Long-range dependence
means that the correlation decays slower than in traditional
tra-c models (e.g., Markovian), i.e., it decays hyperboli-
cally. A number of authors have argued that self-similarity
in data networks can be induced by higher layer protocols
[4] [5] [19] [21] [23] [24]. In this paper we do not discuss
the roots of self-similarity, instead, we demonstrate how the
induced self-similarity is propagated and spread in the net-work
by lower layer adaptive protocols, in particular, by
TCP, which represents the dominant transport protocol of
the Internet.
The phenomenon of self-similarity was observed in data networks
in [11] [12], followed by several experimental papers
showing fractal characteristics in other types of networks
and tra-c, e.g., in video tra-c [2] [9] or in ATM networks
[15]. A comprehensive bibliographical guide is presented in
[25]. These observations have seriously questioned the validity
of previous short memory models when applied to net-work
performance analysis [19]. The impact of self-similar
models on queuing performance has been investigated in a
number of papers [3] [6] [16].
Considerable eort has been made to explore the causes
of this phenomenon. In [4] the authors argue that self-similarity
is induced by the heavy-tailed distribution of le
sizes found in Web tra-c. In [24] Ethernet LAN tra-c was
modeled as a superposition of independent On/O processes
with On and O periods having heavy-tailed distributions.
An important related theoretical result [21] proves that the
superposition of a large number of such independent alternating
On/O processes converges to Fractional Gaussian
Noise.
To prove the validity of this model in TCP/IP networks,
several papers have investigated the connection between application
level le sizes, user think-times, and the On/O
model. As there are several layers between the application
and the link layer, it is of primary importance to investigate
how protocols convert and transfer heavy-tails through
the protocol stack down to lower layers. The eect of TCP
and UDP transport protocols are investigated in [7] [17] [18]
and it is found that TCP preserves long-range dependence
(LRD) from application to link-layer.
Based on this result, the authors of [7] and [8] argue that
transport mechanisms aect strongly the short timescale behavior
of tra-c, but they have no impact in large timescales.
In this paper we demonstrate that this statement is valid
only for the local behavior of TCP when only the tra-c of a
single link is investigated. In contrast, in the network case
a surprisingly complex mechanism is present.
TCP uses an end-to-end congestion control algorithm to
continuously adapt its rate to actual network conditions.
If network conditions are governed by large timescale
uc-
tuations, then TCP will \sense" this and react accordingly.
This paper shows that TCP adapts to tra-c rate
uctua-
tions on several timescales e-ciently. Moreover, we demonstrate
that TCP can be modeled as a linear system above
a characteristic timescale of a few round-trip times, which
implies that the correlation structure of a background tra-c
stream is taken over faithfully by an adaptive TCP
ow. In
particular, it is shown that TCP can inherit self-similarity
from a self-similar background tra-c stream. Since TCP has
an end-to-end control, while adapting to these
uctuations,
it propagates self-similarity encountered on its path all along
from the source to the destination host.
We also demonstrate that if a TCP stream is multiplexed
with another one, it can pass on self-similar scaling to the
other TCP stream, depending on network conditions. In
our model the network is regarded as a mesh of end-to-end
adaptive streams. Intertwined TCP streams can spread self-similarity
throughout the network contributing to global
scaling. By analyzing the eects from a network point of
view we argue that, on one hand, TCP plays an important
role in balancing and propagating global scaling. On
the other hand, it keeps local scaling intact where it is already
strong. This way we complement results reported in
[7]. The main purpose of this paper is to analyze the basic
mechanisms behind these phenomena.
To clarify our terminology, we brie
y summarize the definition
of a few basic concepts.
be a weakly stationary process representing the amount of
data transmitted in consecutive short time periods. Let
aggregated
process. X is called exactly self-similar with self-similarity
parameter H if Xk d
k and the equality
is in the sense of nite-dimensional distributions. In the
case of second-order self-similarity, X and m 1 H X (m) have
the same variance and autocorrelation. Second-order self-similarity
manifests itself in several equivalent ways, one of
them is that the spectral density of the process decays as
1 2H at the origin as f ! 0.
Throughout the paper we use the term \self-similarity" to
refer to scaling of second-order properties over some specic
timescales or asymptotically in large timescales, which is
equivalent to long-range dependence if H > 0:5 [14] [22].
We note that certain statements of the paper are also valid
in the sense of exact statistical self-similarity.
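For reference, a variance-time estimate of H of the kind used in the scaling tests below can be sketched as follows. The block sizes and the least-squares fit are generic choices, not the authors' exact fitting procedure.

```python
import numpy as np

def hurst_variance_time(x, m_values=None):
    """Estimate H with the variance-time (aggregated variance) method.

    The series is aggregated over non-overlapping blocks of size m; for a
    second-order self-similar process Var(X^(m)) ~ m^(2H-2), so H is read off
    from the slope of log Var(X^(m)) versus log m.
    """
    x = np.asarray(x, dtype=float)
    if m_values is None:
        m_values = np.unique(np.logspace(0, np.log10(len(x) // 10), 20).astype(int))
    variances = []
    for m in m_values:
        n_blocks = len(x) // m
        blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        variances.append(blocks.var())
    slope, _ = np.polyfit(np.log(m_values), np.log(variances), 1)  # slope = 2H - 2
    return 1.0 + slope / 2.0
```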
The ns-2 simulator 1 was used for the network simulations.
Several variants of TCP were investigated (Tahoe, Reno,
however, we found that the conclusions are invariant
to the TCP version.
The paper is organized as follows. A TCP measurement
is analyzed showing self-similar scaling for the tra-c of a
single long TCP connection, and a possible explanation is
presented based on a few simple assumptions in Section 2.
Section 3 investigates how TCP adapts to
uctuations on
dierent timescales, and it is shown that TCP in a bottleneck
buer can be modeled as a linear system above a
characteristic timescale of a few round-trip times. In Section
4 we investigate how an aggregate of TCP sessions with
durations of heavy-tailed and light-tailed distributions propagates
self-similarity of a background tra-c stream. Finally,
in Section 5, we present results about the spreading of self-similarity
in the network case when TCP has to pass multiple
hops and compete for resources with other TCP streams.
2. ADAPTIVITY OF TCP: A POSSIBLE CAUSE
OF WIDESPREAD SELF-SIMILARITY
We carried out the following experiment. A large le was
downloaded (a tra-c trace le from the Internet Tra-c
Archive) from an FTP server (ita.ee.lbl.gov) to a client host
hops away in Hungary (serv1.ericsson.co.hu), passing
several backbone providers and even a trans-Atlantic link.
At the client side there was no other tra-c present. The
client was directly connected to an ISP by a 128 kbps leased
line. All packets were captured at the client side with the
utility 2 . The amount of bytes received was 50 Mbyte
and it was logged with a resolution of 50 ms during the le
transfer for 6900 s. The average throughput, which takes
into account the retransmissions and the TCP/IP overhead,
was about 58 kbps, i.e., some congestion were experienced
in the network. The average round-trip delay between the
server and the client was 208 ms. From the packet trace we
concluded that the version of the TCP was Reno.
Tests were performed for the presence of self-similarity. Here
we present three tests, the rst and second ones are based
on the scaling of the absolute moments (also called absolute
mean and variance-time plots [20]), and the third one is a
wavelet-based analysis [1], see Figure 1. The result of the
tests suggests asymptotic self-similarity with Hurst parameter
around 0:75.
During the experiment, there was only one connection active
on the link, so explanations based on the superposition
of heavy-tailed On/O processes or chaotic behavior [23]
are not applicable. However, the investigated TCP connection
traversed several backbone links where, due to the large
tra-c aggregations, self-similarity could arise either because
of heavy-tails or chaotic competition. Presumably, whatever
the reason for self-similarity was, the TCP connection
ns (version 2)
http://www-mash.CS.Berkeley.EDU/ns
2 Tcpdump is available at http://www-nrg.ee.lbl.gov/
log10(m)4.55.5
log10(Absval)
log10(m)9.011.0log10(Var)
Octave
Logscale Diagram
Figure
1: Scaling analysis of the tra-c generated by
a le transfer logged at the client side. a) Absolute
mean method H 0:76. b) Variance-time plot H
0:77. c) Wavelet analysis H 0:74 [0.738, 0.749].
Bottleneck buffer
Host A
LRD traffic
Host B
bottleneck
Network before
bottleneck
Network after
Router R
LRD
LRD
Figure
2: Network model
adapted to the background tra-c stream at the bottleneck
link, and the eect of the adaptation was that self-similarity
was propagated to the measurement point. Next a simple
analytic model is introduced supporting this argument.
All relevant components of the simplied network model are
depicted in Figure 2. A single greedy TCP connection sends
data between host A and host B. The path of the connection
consists of three parts: a network cloud before and after
router R and a bottleneck buer in router R, where the connection
has to share service capacity and buer space with
a self-similar background tra-c
ow. Self-similarity of the
background tra-c can be induced, for example, by large aggregations
of innite variance On/O streams as suggested
in [4]. In the analytic model it is assumed that TCP can
adapt ideally to a background tra-c stream in a bottleneck
buer. Under \ideal adaptivity" we mean that the TCP
connection is able to consume all remaining capacity unused
by the background tra-c stream. It is also assumed
that the TCP connection does not have any eect on the
background tra-c. The generality of this assumption covers
several practical cases, for example, if the background
ow is a large aggregate consisting of a large number of con-
nections. The limits of these assumptions are analyzed later
in the paper.
Denote the background tra-c rate by B(t), 0 B(t) C,
where C is the service rate of the bottleneck buer in bit
per seconds. If TCP congestion control is \ideal" and its
eect on the background tra-c is neglected, then the TCP
connection will utilize all unused service in the bottleneck.
The rate of the \ideal" TCP
ow is denoted by A(t):
The resulting process is simply a shifted and inverted version
of B(t), which implies that the correlation structure of
processes A(t) and B(t) are the same. In other words, TCP
\inherits" the statistical properties of the background pro-
cess. In particular, let us model the background tra-c rate
as Fractional Gaussian Noise
aNH (t) (1)
where m is the average rate in bit per seconds [bps], a is the
variance, and NH (t) is a normalized FGN process with Hurst
parameter H. Note that FGN is a discrete time process, so
the rate at time t is approximated by the amount of bytes
sent during su-ciently small constant duration time periods.
Based on the arguments above, the adapting TCP will also
be an FGN with the same statistical self-similarity exponent
H. As TCP congestion control works end-to-end, the same
tra-c rate can be measured along the path before and after
router R as well. This implies that TCP propagates self-similarity
or LRD to parts of the network where otherwise
it would not be present.
The result above is based on a simple scenario using a few as-
sumptions, such as ideal TCP adaptivity, single bottleneck,
and assuming that the TCP
ow does not modify the background
tra-c characteristics. However, if the implications
of this simple scenario are valid in real TCP/IP networks,
the consequences for tra-c engineering are far reaching. Regarding
this, we are going to address the following important
questions:
1. What are the limitations of TCP adaptation, i.e., how
\ideal" is TCP congestion control when propagating
self-similarity or other statistical properties?
2. A single long-living connection was used in the simple
network model and in the measurement. Can self-similarity
be propagated by short duration TCP connections
3. The background LRD tra-c
ow used was non-adaptive.
Is self-similarity still propagated if the background traf-
ow is an aggregate of adaptive
4. We considered a single bottleneck on the TCP path.
On the other hand, in most cases TCP connections
traverse multiple routers and buers multiplexing with
multiple self-similar inputs. What are the characteristics
of the end-to-end TCP
ow in this case?
5. Is self-similarity propagated between adaptive connec-
tions, i.e., can self-similarity be inherited from one
TCP to another one that has no direct contact with
the source of self-similarity?
3. TCP AS A LINEAR SYSTEM
In the previous section it was assumed that TCP congestion
control is \ideal", which, as a matter of course, cannot be
the case in real networks. The consequence of self-similarity
is that
uctuations are not limited to a certain timescale.
When analyzing how \real" TCPs propagate self-similarity,
the adaptation of TCP to
uctuations on several timescales
should be investigated. In this section it is shown that TCP
in a bottleneck buer can be modeled as a linear system, i.e.,
takes over the correlation structure of the background
tra-c through a linear function.
TCP is an adaptive mechanism, which tries to utilize all free
resources on its path. Adaptation is performed as a complex
control loop called the congestion control algorithm.
Of course, full adaptation is not possible, as the network
does not provide prompt and explicit information about the
amount of free resources. TCP itself must test the path
continuously by increasing its sending rate gradually until
congestion is detected, signaled by a packet loss, and then it
adjusts its internal state variables accordingly. Using this al-
gorithm, TCP congestion control is able to roughly estimate
the optimal load in a few round trip times. Since congestion
control was introduced in the Internet [10], it has proved its
e-ciency in keeping network-wide congestion under control
in a wide range of tra-c scenarios.
background stream
Measurement point
Figure
3: Simulation model for the test of
TCP adaptivity to a self-similar background tra-c
stream. The two buers are identical: service rates
propagation delays
buer sizes
In this section we analyze the adaptivity of TCP, and conclude
that a simple network conguration, which consists of
a single bottleneck buer shared by a \generator"
ow and
a \response" TCP
ow, can be well modeled as a linear system
above a characteristic timescale. The cut-o timescale
depends on the path properties of the connection. The linear
system transforms certain statistical properties, e.g., au-
tocovariance, between the \generator" stream and the \re-
sponse" tra-c stream through a transform function, which
is characteristic of the network conguration.
3.1 Measuring the Adaptivity of TCP on Several
Timescales
In the rst analysis a single, long, greedy TCP stream is
mixed with random background tra-c streams. See Figure
3 for the conguration. The background streams are
constructed in a way, such that they
uctuate on a limited,
narrow timescale. To limit the timescale under investiga-
tion, the background tra-c approximates a constant amplitude
sine wave of a given frequency f : Abackground (f;
a sin(2ft +)+m where is a uniformly distributed random
variable between [0; 2]. The process Abackground (f; t)
is a stationary ergodic stochastic process with correlation
The power spectrum of this process
consists of a single frequency component at f . In the
simulation the background process had to be approximated
by a packet stream (packet size of 1000 bytes), with the result
that the spectrum is not an impulse but a narrow spike,
see
Figure
4.
If TCP is able to adapt to the
uctuations of the background
tra-c
ow, the same frequency f should appear as
a signicant spike in the power spectrum of the TCP tra-c
rate process as well. The ratio of the amplitudes of this frequency
component in the spectra is a measure of the success
of TCP adaptation on this timescale. Denote the measure
of adaptivity at frequency f by D(f)
where Sbackground (f) is the spectral density of the background
tra-c rate process at frequency f and Stcp(f) is
the spectral density of the adapting TCP rate process at
the same frequency.
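A periodogram-based way to estimate D(f) from the two measured rate series is sketched below. Since the exact formula for D(f) did not survive extraction, the sketch uses the amplitude ratio sqrt(S_tcp(f)/S_background(f)), consistent with the description of D(f) as a ratio of amplitudes at frequency f.

```python
import numpy as np

def adaptivity_measure(background_rate, tcp_rate, dt, f):
    """Estimate D(f) from two equally spaced rate series with spacing dt seconds.

    Assumed estimator (not the paper's exact procedure): raw periodograms of both
    series, evaluated at the frequency bin nearest to f, then the amplitude ratio.
    """
    def periodogram(x):
        x = np.asarray(x, dtype=float) - np.mean(x)
        spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        freqs = np.fft.rfftfreq(len(x), d=dt)
        return freqs, spec
    freqs, s_bg = periodogram(background_rate)
    _, s_tcp = periodogram(tcp_rate)
    k = int(np.argmin(np.abs(freqs - f)))       # nearest frequency bin to f
    return float(np.sqrt(s_tcp[k] / s_bg[k]))
```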
Figure
4 depicts an experiment with a background signal of
0:01[1=s]. The top part of the gure shows the spectrum
of the background tra-c approximating a sine wave of
frequency f . The bottom part is the measured spectrum of
the TCP response. The spectrum of the response has a sig-
0Spectral
density
density
Figure
4: Frequency response to a sine wave of
TCP response). In this conguration the measure
of adaptivity is D(0:01) 1.
frequency [1/s]0.20.61measure
of
adaptivity
Tahoe
Reno
New Reno
Reno w delayed Ack
Figure
5: Measure of adaptivity D(f) as a function
of the frequency for several TCP variants.
nicant spike at f , but it also contains a few smaller spikes
at higher frequencies caused by the congestion control.
Conducting the experiment for a wide range of frequencies f, it is possible to plot the adaptivity curve of TCP. Figure 5 shows the result for several versions of TCP. Note that the shape of the function only slightly depends on the TCP version. It can be seen that TCP adapts well to frequencies below f0 ~ 0.15 [1/s], but it cannot adapt efficiently to fluctuations on higher frequencies in this configuration.
At f0 a resonance effect can be observed: at this frequency TCP is more aggressive, and gains even higher throughput than what is left unused by the non-adaptive background flow. This frequency is equal to the dominant frequency of the TCP congestion window process when there is no background traffic present (idle frequency), see Figure 6. In [13] a macroscopic model for TCP connections was published. It is derived that if every p-th packet is lost for a TCP connection, then the congestion window process traverses a periodic sawtooth and the length of the period is approximately T = RTT * W/2, where RTT is the round-trip time of the path in seconds and W is the maximum window size in packets. In our case we can approximate RTT ~ d + B/C, where B is the buffer size in packets, C is the service rate in packets per second, and d is the total round-trip propagation delay in seconds. The maximum window size is W = B + Cd, which is the maximum number of packets in the pipe (buffer and link). This gives an estimate of f0 = 1/T ~ 0.15 [1/s]. The result agrees with the measured resonance frequency f0, and confirms our argument that the resonance effect observed in the measure of adaptivity function D(f) is due to the TCP window cycles (see Figure 4b).

Figure 6: Spectrum of the TCP congestion window process when no background traffic is present.
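A back-of-the-envelope check of this estimate; the specific buffer size, service rate, and propagation delay below are assumed illustrative values (chosen so that RTT matches the roughly 0.33 s mentioned later), not values taken from the paper:

# Sketch: idle-frequency estimate f0 = 1/T with T ~ RTT * W / 2
B = 10           # buffer size [packets]            (assumed value)
C = 125          # service rate [packets/s]          (assumed value)
d = 0.25         # round-trip propagation delay [s]  (assumed value)

RTT = d + B / C          # round-trip time with a full buffer
W = B + C * d            # maximum number of packets in the pipe
T = RTT * W / 2          # period of the congestion-window sawtooth
print(f"f0 estimate: {1 / T:.3f} 1/s")   # ~0.15 1/s with these numbers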
The characteristic timescale of the TCP window cycles varies over a relatively wide range in real networks, and the relation T ~ RTT * W/2 can be used for an approximation. For example, if the round-trip time, which in the previous simulation was approximately 0.33 s, is rather in the range of a few tens of milliseconds, the cut-off timescale drops below one second. Even below this timescale TCP adapts to fluctuations, though the effectiveness is limited, as shown by the transmission curve; f0 approximately separates traffic dynamics into "local" and "global" scales: above f0 it is the background process which shapes the spectrum, below f0 the spectrum is a result of TCP control dynamics and external stochastic processes have less impact on it.
In the next section we analyze the case when the background traffic stream is more complex and contains fluctuations on several timescales.
3.2 Tests for Linearity
In real networks background traffic is not limited to a single timescale. In the following we analyze the case when several frequencies are present and test whether TCP is able to adapt to fluctuations on these timescales or not. The motivation is to prove that TCP can adapt to fluctuations on several timescales independently of each other; more precisely, we want to show that TCP control forms a linear system in this configuration.

Figure 7: TCP frequency response to the superposition of 10 random phase sine waves. top) background traffic, bottom) TCP response.
By linear system we mean that if the background traffic rate is given by B(t), and the adapting TCP traffic rate A(t) is expressed using a function Phi as A(t) = Phi(B)(t), then Phi is a linear function of B, i.e., Phi(a1 B1 + a2 B2) = a1 Phi(B1) + a2 Phi(B2). In case of ideal adaptivity, Phi takes the simple form Phi(B) = C - B and the TCP rate is obtained simply as A(t) = C - B(t) (see Section 2). If the background traffic is a superposition of streams B(t) = sum_i B_i(t), then the rate of TCP is given by A(t) = C - sum_i B_i(t).
This construction provides us with a simple test of linearity: we investigate the response to the superposition of several streams and examine the spectrum of the response. Figure 7 shows the spectral density of the background and the TCP response when the background is a composition of 10 random phase sine waves equidistantly spaced on a logarithmic scale (the nonzero widths of the spikes are due to the fact that the background mix only approximates sine waves with varying packet spacing). It can be observed that TCP was able to adapt to all frequency components in the mix below f0.
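A sketch of this superposition test in the same spirit, reusing numpy (as np) and the helpers from the earlier sketches; the frequencies and amplitudes are illustrative assumptions, and the TCP rate trace itself would come from a simulator:

# Superpose random-phase sine components and check D(f) at each component frequency.
dt, n = 0.1, 200_000
components = np.logspace(-2, 0, num=10)      # 10 frequencies, log-spaced [1/s] (assumed)
background = sum(sine_background(f, a=2e4, m=0.0, dt=dt, n_samples=n,
                                 rng=np.random.default_rng(i))
                 for i, f in enumerate(components)) + 5e5

# tcp_rate is a placeholder name for the measured TCP rate process:
# D_values = [measure_of_adaptivity(tcp_rate, background, dt, f) for f in components]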
To test whether TCP really adapts to fluctuations independently, a wide range of traffic mixes were simulated consisting of two frequencies f1 and f2. A large number of simulations were performed, covering a whole plane with the two frequencies, in the range of [0.05, 500] [1/s]. Then, the adaptivity measure for one of the frequencies, D(f1), was calculated. If the system is linear, the measure of adaptivity function at frequency f1 should be independent of the other frequency f2. The results of the simulations support our conclusions, see Figure 8.

Figure 8: Measure of TCP adaptivity D(f1) when the background process is composed of two frequencies f1 and f2.
3.3 Response to White Noise
In the previous analysis the background processes were limited to superpositions of sine wave processes. In real networks background traffic streams cannot be modeled by just a few frequency components; it is more appropriate to model background traffic streams as "noises".
Two types of special noises are most relevant in traffic modeling: the White Noise (WN) process and the Fractional Gaussian Noise (FGN) process. The White Noise process is the appropriate signal for analyzing the frequency response of a system, and the Fractional Gaussian Noise process frequently appears as the limit process of traffic aggregations [21].
If TCP is a linear system, then it should transform the correlation of any complex stochastic process, e.g., WN or FGN, through the same transform function. In this section the response of TCP to a WN process is analyzed. WN is a special noise as it has constant spectral density. If TCP is linear, then it should respond with the characteristic curve obtained previously. The result is depicted in Figure 9. The similarity of the curve to our previous test-signal based test supports the linearity argument. In addition, the constant flat range, which starts at a characteristic timescale and spans several timescales upwards, provides us with information about the timescale limitation of TCP adaptivity. Note that this mechanism behaves like a low-pass filter.
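A toy illustration of this low-pass behavior (not the paper's simulation): if TCP's adaptation to the free capacity C - B(t) is modeled as a first-order tracking loop with time constant tau, the resulting D(f) is flat near 1 at low frequencies and rolls off above roughly 1/(2*pi*tau). It reuses numpy (as np) and the periodogram helper from the earlier sketch; all parameter values are assumptions.

C, tau, dt = 1e6, 1.0, 0.1
rng = np.random.default_rng(1)
B = 5e5 + 1e5 * rng.standard_normal(200_000)        # white-noise background rate
A = np.empty_like(B)
A[0] = C - B[0]
alpha = dt / (tau + dt)
for k in range(1, len(B)):
    A[k] = A[k-1] + alpha * ((C - B[k]) - A[k-1])   # first-order tracking of free capacity
freqs, S_A = periodogram(A, dt)
_, S_B = periodogram(B, dt)
D = S_A / S_B                                        # ~1 at low f, decays at high f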
4. TCP ADAPTATION TO SELF-SIMILAR BACKGROUND TRAFFIC
Once we have investigated the linearity of TCP and have shown that the transform function is flat below a characteristic frequency, it is quite obvious to expect that TCP, while adapting to signals of complex frequency content, reproduces the same spectral density as the original signal above a timescale which depends on the path properties (round-trip time, size of the pipe, etc.).

Figure 9: a) TCP's frequency response to white noise, spectral density (dots) and its smoothed version (line). b) Measure of adaptivity D(f), see also Figure 5.
If, for example, TCP traverses a link where the traffic shows self-similarity, it will adapt to it with a spectral response equal to the spectrum of the self-similar traffic (asymptotically). As TCP is end-to-end control, this property is "propagated" all along the TCP connection path. A visual test can be seen in Figure 10, where the traffic rates of a self-similar FGN stream and an adapting TCP stream are depicted. The figure shows that on larger timescales the TCP trace mirrors the FGN trace.
Figure 11 shows the power spectrum of the TCP and FGN traces of Figure 10 at an aggregation level of 10 ms. As suggested in the previous section, TCP shows the same spectrum as FGN at timescales above 1-10 s, i.e., TCP traffic shows asymptotically second-order self-similarity with the same scaling parameter H.
4.1 Can Adaptive SRD Traffic Propagate Self-Similarity?
So far we have analyzed cases when long greedy TCP sessions were mixed with background traffic. It has been shown that the distribution of file sizes in Web traffic is heavy-tailed [5]. This increases the probability of the occurrence of such long TCP connections. Nevertheless, we investigate whether short duration TCPs (durations with light-tailed distributions) have the same adaptivity property to LRD traffic or not. A positive answer increases the generality of our argument. Based on previous work [21] we would expect that if On and Off durations are light-tailed, the aggregate traffic is short-range dependent (SRD). This section demonstrates that TCP streams have LRD properties in spite of the short-range dependent result suggested by the On/Off model.
During the simulation we established k parallel sessions. Within each session TCP connections were generated independently and the durations of TCP connections were exponentially distributed (with mean TOn), followed by exponentially distributed silent periods (TOff). The simulation was started from the equilibrium state of the process (see Figure 12). Let us denote the number of active TCPs at time t by N(t), 0 <= N(t) <= k. With this construction N(t) is a stationary Markov process and it is short-range dependent. See the self-similarity tests for N(t) in Figure 13 (H ~ 0.5).

Figure 10: Traces of the FGN and the adapting TCP flows at two aggregation levels. a) 100 ms aggregation, b) 1 s aggregation.
On the other hand, if these sessions are mixed with LRD background traffic, the aggregate TCP traffic, i.e., the amount of bytes transmitted by all TCPs, is LRD (Figure 13). The reason is that the superposition of short duration TCPs can efficiently adapt to a background LRD process just like one long duration TCP connection.
A real network measurement also supports our argument. Short files (90 kbyte) were downloaded using the wget utility from serv1.ericsson.co.hu to locke.comet.columbia.edu (round-trip time RTT ~ 180 ms, average download rate r ~ 160 kbps, SACK TCP)^3. Whenever the download ended, a new download was initiated for the same file. The experiment lasted for an hour, and the file was downloaded about 800 times. The traffic was captured with tcpdump at the client host. The Variance-Time plot shows that the traffic rate dynamics was self-similar, in spite of the short file sizes, see Figure 14. As a new download does not use any memory from a previous TCP connection, long-range correlations can be explained only by the long-memory dynamics of the network. In case of smaller files, TCP's capability to adapt to changing network conditions decreases. Although 90 kbyte is larger than the current average file size in the Internet, it has to be emphasized that a subset of connections is enough to propagate self-similarity. Furthermore, if HTTP 1.1 replaces HTTP 1.0, persistent TCP connections will be able to adapt better to traffic fluctuations, eventually improving the propagation effect; similarly, if a TCP implementation preserves some state from a previous connection, the propagation effect is improved.

3 Note that the access speed at the serv1.ericsson.co.hu side was increased to 256 kbps during this measurement.

Figure 11: a) Power spectrum of the background traffic (FGN), b) power spectrum of the TCP traffic adapting to the FGN, estimated at an aggregation level of 10 ms.
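The variance-time plots used throughout this section can be reproduced with a standard aggregate-variance estimator; a minimal sketch (not the authors' analysis code) is:

import numpy as np

def variance_time_hurst(x, min_m=1, max_m=None, n_points=20):
    """Variance-time estimate of the Hurst exponent: aggregate x over blocks of
    size m, fit log Var(X^(m)) against log m; the slope is ~ 2H - 2 for an
    asymptotically second-order self-similar process."""
    n = len(x)
    max_m = max_m or n // 10
    ms = np.unique(np.logspace(np.log10(min_m), np.log10(max_m), n_points).astype(int))
    variances = []
    for m in ms:
        blocks = x[: (n // m) * m].reshape(-1, m).mean(axis=1)   # aggregated series X^(m)
        variances.append(blocks.var())
    slope, _ = np.polyfit(np.log10(ms), np.log10(variances), 1)
    return 1.0 + slope / 2.0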
4.2 Discussion on SRD TCP Streams
For simplicity, first assume that there is only one session with On/Off TCP connections multiplexed with LRD traffic. In this case N(t) takes the values 0 or 1 for exponentially distributed durations. Assuming ideal adaptivity, when the session is active (a TCP is active) it can grab all capacity left unused by the background LRD traffic. Then the traffic rate during the active periods of the On/Off session can be expressed by A(t) = F(t), where F(t) is the capacity (bit rate) left unused by the self-similar background traffic; F(t) is an FGN process, see Section 2. During inactive periods A(t) = 0. Thus the traffic rate of the TCP controlled On/Off session for all t can be written in explicit form as
A(t) = N(t) F(t).
Assuming that the sessions are independent of the background process (N(t) and F(t) are independent), the autocovariance of A(t) is
cov_A(tau) = E[N(t)N(t+tau)] E[F(t)F(t+tau)] - m_N^2 m_F^2.   (5)

Figure 12: Simulation model of SRD driven TCP traffic multiplexed with self-similar background traffic (Router 1 with parameters C1, B1, d1 = 5 ms); k sessions with exponentially distributed On and Off periods with means TOn and TOff.

Figure 13: a) Absolute mean test for the On/Off process N(t) (H ~ 0.5) and the aggregate TCP traffic (H ~ 0.73). b) Variance-time plots, H ~ 0.5 and H ~ 0.72, respectively.

Figure 14: Variance-time plot of traffic generated by short file transfers from serv1.ericsson.co.hu to locke.comet.columbia.edu, logging resolution 100 ms, H ~ 0.7.
The left hand side of the product is E[N(t)N(t+tau)] = cov_N(tau) + m_N^2. The same holds for F(t), and so the covariance can be written as
cov_A(tau) = (cov_N(tau) + m_N^2)(cov_F(tau) + m_F^2) - m_N^2 m_F^2.
Finally,
cov_A(tau) = cov_N(tau) cov_F(tau) + m_N^2 cov_F(tau) + m_F^2 cov_N(tau).
If F(t) is LRD, its autocovariance decays asymptotically as cov_F(tau) ~ tau^(-beta_F) as tau -> infinity, where 0 < beta_F < 1. On the other hand, if N(t) is SRD, its autocovariance decays asymptotically faster than tau^(-beta_N), where beta_N >= 1. Consequently, the covariance of A(t) decays asymptotically at the lower rate, in this case at the rate of the background LRD process, since beta_F < beta_N.
If the On/Off process is LRD as well, e.g., the On and/or Off times are heavy-tailed, then asymptotically the larger Hurst exponent is measured on the path. In practice, the border of the scaling region depends on the actual shape of the covariances and the means m_A and m_F.
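A quick numerical sanity check of this product formula, valid for any two independent stationary processes; here two synthetic AR(1) series stand in for N(t) and F(t) (an illustration only, not the paper's FGN setup), reusing numpy as np:

rng = np.random.default_rng(3)
def ar1(phi, mean, n):
    x = np.empty(n); x[0] = mean
    for k in range(1, n):
        x[k] = mean + phi * (x[k-1] - mean) + rng.standard_normal()
    return x
def cov_at_lag(x, lag):
    x = x - x.mean()
    return np.mean(x[:-lag] * x[lag:])

N, F = ar1(0.5, 2.0, 200_000), ar1(0.9, 5.0, 200_000)
A = N * F
lag = 10
lhs = cov_at_lag(A, lag)
rhs = (cov_at_lag(N, lag) * cov_at_lag(F, lag)
       + N.mean()**2 * cov_at_lag(F, lag) + F.mean()**2 * cov_at_lag(N, lag))
print(lhs, rhs)   # the two sides agree up to sampling error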
If there is more than one On/Off stream sharing the bottleneck buffer with a self-similar background traffic stream, N(t) takes values higher than 1 as well. However, for the adaptivity of the aggregate it is sufficient to have at least one active connection, as was shown in Section 4.1. The aggregate traffic of multiple On/Off streams adapting to a background stream may be approximated by
A(t) = Theta[N(t)] F(t),
where Theta(.) is the Heaviside function (Theta(x) = 1 if x > 0, and 0 otherwise). Theta[N(t)] itself is also an On/Off process. If the On/Off processes are independent and they are exponentially distributed, then N(t) forms a Markov process (Theta[N(t)] is the indicator of the non-empty states of this Markov chain) and it is SRD.

Figure 15: A TCP connection traversing multiple hops with independent background LRD (H_i) inputs.

The conclusion of this section is that if the end-to-end service uses TCP connections, then the traffic generated by the service is also adaptive, and in this case the adaptivity of the end-to-end service is sufficient to "propagate" LRD to other parts of the network. Moreover, if N(t) is LRD, then the larger Hurst exponent max(H_N, H_F) is propagated.
5. SPREADING OF SELF-SIMILARITY IN NETWORKS
Previously we analyzed the case when a TCP connection shares a single bottleneck buffer with LRD background traffic, and it was only this bottleneck that affected the rate of TCP. In this section the network case is discussed.
Two aspects are analyzed. The first one deals with the case when the path of an adaptive connection passes through several buffers with self-similar inputs. These buffers are candidates to become bottlenecks occasionally during the lifetime of the connection. The second one investigates whether self-similarity can spread from one adaptive connection to the other, causing widespread self-similarity in a network area. The presented results are intended to highlight the basic mechanisms, so the investigated scenarios are simplified for the ease of discussion.
5.1 Discussion of the Multiple Link Case
A wide area TCP connection usually spans 10-15 routers along its path, out of which there are usually several backbone routers with a high level of aggregated traffic, see Figure 15. A TCP connection has to adapt to the whole path. The capacity of the end-to-end path, at time t, depends on which buffer is the bottleneck at this time. Because of traffic fluctuations, the location of the bottleneck moves randomly from one router to the other.
Assuming ideal end-to-end adaptivity, the rate of the adaptive TCP connection is equal to the free capacity of the bottleneck link at time t:
A(t) = min_{i=1,...,N} F_i(t),
where N is the number of links and F_i(t) denotes the free capacity of the i-th link on the path.
For simplicity, assume that the crossing background LRD streams on the links are independent and the link at time t is either empty, F_i(t) = 1, or full, F_i(t) = 0 (normalizing the link capacity to 1). With this simplification the rate of the adaptive connection can be written as
A(t) = prod_{i=1}^{N} F_i(t).   (13)
In the previous section it was shown that the product of independent LRD processes is also LRD and it is asymptotically characterized by the largest exponent, H_A = max_i H_i. Thus, in the multiple link case it is the largest Hurst exponent among the background LRD streams on the links that characterizes the TCP connection.

Figure 16: Variance-Time plots of the F_i FGN processes (identical mean rates and variances; e.g., F3 is FGN with H = 0.6) and of the end-to-end process A(t). The end-to-end path is characterized by H ~ 0.8 asymptotically.
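A sketch of this multiple-link experiment under the stated idealization: take the pointwise minimum of several free-capacity traces and estimate the Hurst exponent of the result with the variance-time estimator sketched earlier. The FGN traces f1..f4 are assumed inputs from any standard FGN generator (not shown), and numpy is assumed imported as np:

def end_to_end_rate(*links):
    """Ideal end-to-end adaptivity: the connection gets the free capacity of the
    momentary bottleneck, A(t) = min_i F_i(t)."""
    return np.minimum.reduce([np.asarray(f) for f in links])

# f1, f2, f3, f4: free-capacity samples, e.g. FGN traces with H = 0.6 ... 0.8
# A = end_to_end_rate(f1, f2, f3, f4)
# print(variance_time_hurst(A))   # expected to approach max_i H_i asymptotically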
For a numerical example using more complex processes, four FGN background samples were generated with equal mean rates, but with different Hurst exponents, to model F_i(t), i = 1, ..., 4; see Figure 16. The end-to-end process A(t), which is the minimum of the FGN processes, is asymptotically second-order self-similar, and it has the same Hurst exponent as the largest Hurst exponent among the F_i(t) processes, i.e., the result is the same as in the simple full/empty case.
Another possible interpretation of (13) is that we consider the F_i(t) not as rate processes, but as indicator processes of congestion. From the end user perspective it is important to analyze whether the network is able to support the expected service level requirements, for example, whether the file transfer rate degrades below an acceptable level or not. Let F_i(t) be the indicator process of link i, indicating whether the link is congested and cannot support the expected service rate for the connection (F_i(t) = 0) or is not congested (F_i(t) = 1). Thus, if the background congestion indicator processes are LRD, then it is the largest Hurst exponent that characterizes the end-to-end service characteristics of the investigated TCP connection.
Figure 17: Network model for the investigation of self-similarity spreading (an FGN stream, a direct adaptive stream sharing a buffer with it, and an indirect adaptive stream sharing a second buffer with the direct stream).
5.2 Spreading of Self-Similarity among Adaptive Connections in Multiple Steps
So far, in all analyzed cases, adaptive traffic was in direct contact with self-similar background traffic. In this section it is investigated whether self-similarity caused by adaptation can be passed on to adaptive traffic streams that have no direct contact with the source of self-similarity. A few simple conditions are given as well. Assuming that our argument is valid, self-similarity can spread out from a localized area; consequently, strong self-similarity is balanced throughout a wider area of the network.
A simple network scenario is used for the investigation. An adaptive traffic stream (direct stream) shares a link with self-similar FGN traffic. The direct stream is mixed with another adaptive stream on a second link, which itself has no direct connection with the FGN traffic (indirect stream), see Figure 17. The data rate of the direct stream is thus affected by two other streams, and also the two adaptive streams have an effect on one another. We are going to investigate the statistical properties of both the direct and the indirect streams.
Assume ideal adaptivity and max-min fairness among the adaptive streams. Also assume that the service rates of both links are equal (C). If the background stream were inactive, the bottleneck would be the first buffer and the adaptive streams would simply share half the service rate, both sending at a rate of C/2.
In the presence of the FGN stream flowing through the second buffer, the rates can still remain C/2, unless it is the second buffer which becomes the bottleneck, i.e., when the capacity left unused by the FGN stream is C - A_FGN(t) < C/2. In this case the direct stream can use at most A_dir(t) = C - A_FGN(t), so the indirect stream can grab all remaining service capacity in the first buffer, A_indir(t) = C - A_dir(t). In short:
A_dir(t) = min(C/2, C - A_FGN(t)),
A_indir(t) = C - A_dir(t) = max(C/2, A_FGN(t)).
Calculation of the autocovariance of A_dir and A_indir is difficult because of the min and max operators. We consider two simple, extreme cases. In the first case, the rate of the background LRD stream is always greater than C/2, simplifying the expressions to A_dir(t) = C - A_FGN(t) and A_indir(t) = A_FGN(t), i.e., spreading of self-similarity is ideal. In the second extreme case the rate of the background process is always smaller than C/2, leading to A_dir(t) = A_indir(t) = C/2, i.e., self-similarity disappears from both adaptive streams. These results have been verified by simulations as well.
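A small sketch of this idealized two-buffer rate allocation (C and the FGN rate trace are assumed inputs; numpy is assumed imported as np):

def two_buffer_rates(a_fgn, C):
    """Idealized max-min allocation from the text:
    A_dir(t) = min(C/2, C - A_FGN(t)), A_indir(t) = C - A_dir(t)."""
    a_fgn = np.asarray(a_fgn)
    a_dir = np.minimum(C / 2.0, C - a_fgn)
    a_indir = C - a_dir
    return a_dir, a_indir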
Figure 18: a) R/S plot of the heavy-tailed stream, H ~ 0.82. b) R/S plot of the indirect stream, H ~ 0.71.
The investigated scenario demonstrates the simplest mechanism of how adaptive connections may have an effect on each other. We simulated a more complex scenario, where the synthetic FGN stream is replaced by an aggregate stream of randomly generated short TCP file transfers. The distribution of the file sizes is heavy-tailed. The direct and indirect TCP streams are also replaced by aggregates, but the file sizes within these aggregates are light-tailed.
The streams consist of n_heavy-tailed, n_direct, and n_indirect parallel sessions, respectively. The file size distributions are Pareto distributions with the following parameters: the average file size is 40 kbyte for all streams, and the average waiting time between files is 20 sec. The shape parameter is a_heavy-tailed = 1.1 for the heavy-tailed stream and a larger (light-tailed) value for the other streams, applied to both the file size and the waiting time distributions. With these parameters only one stream has heavy tails (a_heavy-tailed < 2).
The results of the simulation experiment are depicted in Figure 18. As suggested in [4] the traffic stream consisting of heavy-tailed file downloads is LRD (H ~ 0.82). Furthermore, the indirect traffic stream, although it was created using light-tailed distributions, is LRD as well (H ~ 0.71). The cause is that long-range dependent fluctuations are propagated via the indirect stream.
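A sketch of how such an On/Off workload could be drawn, with the 40 kbyte mean file size and 20 s mean waiting time from the text; the light-tailed shape value is an assumption, and this is not the authors' traffic generator:

import numpy as np

def pareto_sample(mean, shape, size, rng):
    """Classical Pareto(xm, shape) samples with the requested mean (requires shape > 1)."""
    xm = mean * (shape - 1.0) / shape            # scale giving E[X] = mean
    return xm * (1.0 + rng.pareto(shape, size))  # numpy's pareto draws the Lomax form

rng = np.random.default_rng(2)
heavy_file_sizes = pareto_sample(40_000, 1.1, 1000, rng)   # heavy-tailed stream
light_file_sizes = pareto_sample(40_000, 2.5, 1000, rng)   # light-tailed (assumed shape 2.5)
wait_times = pareto_sample(20.0, 2.5, 1000, rng)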
Performing the previous experiment using different parameters, we have found that depending on the traffic mix, the spreading between indirect and direct streams can be strong but it can be weak as well. In certain cases, spreading to an indirect stream does not happen at all, just like in the simple analytic example assuming ideal TCP flows and max-min fairness. The exact requirements for spreading are subjects for further study.
6. CONCLUSIONS
It was demonstrated how a TCP connection, when mixed with self-similar traffic in a bottleneck buffer, takes on its statistical second order self-similarity, propagating scaling phenomena to other parts of the network. It is suggested that the adaptation of TCP to a background traffic stream can be modeled by a linear system, and the validity of our approach is analyzed. It was shown that TCP inherits self-similarity when it is mixed with self-similar background traffic in a bottleneck buffer through the transform function of the linear system. This property was demonstrated for both short and long duration TCP connections. We also investigated TCP behavior in a networking environment. It was found that if congestion periods are long-range dependent in several hops on a connection's path, the largest Hurst exponent characterizes the end-to-end connection. It was also demonstrated that TCP flows, in certain scenarios, can pass on self-similarity to each other in multiple hops. The presented mechanisms are basic "building blocks" in a future wide-area traffic model, and in real life it is always their combined effect that we can observe. The presented network measurements are intended to highlight the basic mechanisms in simplified network scenarios, when it can be assured that only the network conditions and TCP's response to network conditions are the cause of the investigated phenomena. As thousands of parallel TCP connections continuously intertwine the Internet, the mechanisms described in this paper can provide us with a deeper insight into why significant and strong self-similarity is a general and widespread phenomenon in current data networks.
7. REFERENCES
--R
Wavelet analysis of long-range-dependent traffic
Heavy traffic analysis of a storage model with long range dependent on/off sources
Experimental queuing analysis with long-range dependent packet traffic
Dynamics of IP traffic: A study of the role of variability and the impact of control
Data networks as cascades: Investigating the multifractal nature of Internet WAN traffic
Congestion avoidance and control.
On the self-similar nature of Ethernet traffic
On the self-similar nature of Ethernet traffic (extended version)
The macroscopic behavior of the TCP congestion avoidance algorithm.
A storage model with self-similar input
On the relationship between file sizes, transport protocols, and self-similar network traffic
On the e
Wide area traffic: The failure of Poisson modeling
Estimators for long-range dependence: an empirical study
Proof of a fundamental result in self-similar traffic modeling
On self-similar traffic in ATM queues: Definitions
The chaotic nature of TCP congestion control.
A bibliographical guide to self-similar traffic and performance modeling for modern high-speed networks
--TR
Congestion avoidance and control
On the self-similar nature of Ethernet traffic
On the self-similar nature of Ethernet traffic (extended version)
Analysis, modeling and generation of self-similar VBR video traffic
Wide area traffic
Experimental queueing analysis with long-range dependent packet traffic
Self-similarity through high-variability
On self-similar traffic in ATM queues
Proof of a fundamental result in self-similar traffic modeling
The macroscopic behavior of the TCP congestion avoidance algorithm
Self-similarity in World Wide Web traffic
Data networks as cascades
Heavy-tailed probability distributions in the World Wide Web
Dynamics of IP traffic
On the relationship between file sizes, transport protocols, and self-similar network traffic
--CTR
W. Feng , P. Tinnakornsrisuphap, The failure of TCP in high-performance computational grids, Proceedings of the 2000 ACM/IEEE conference on Supercomputing (CDROM), p.37-es, November 04-10, 2000, Dallas, Texas, United States
H. Sivakumar , S. Bailey , R. L. Grossman, PSockets: the case for application-level network striping for data intensive applications using high speed wide area networks, Proceedings of the 2000 ACM/IEEE conference on Supercomputing (CDROM), p.37-es, November 04-10, 2000, Dallas, Texas, United States
Daniel R. Figueiredo , Benyuan Liu , Vishal Misra , Don Towsley, On the autocorrelation structure of TCP traffic, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.40 n.3, p.339-361, 22 October 2002
Guanghui He , Yuan Gao , Jennifer C. Hou , Kihong Park, A case for exploiting self-similarity of network traffic in TCP congestion control, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.45 n.6, p.743-766, 21 August 2004
Dan Rubenstein , Jim Kurose , Don Towsley, Detecting shared congestion of flows via end-to-end measurement, IEEE/ACM Transactions on Networking (TON), v.10 n.3, p.381-395, June 2002
Thomas Karagiannis , Michalis Faloutsos , Mart Molle, A user-friendly self-similarity analysis tool, ACM SIGCOMM Computer Communication Review, v.33 n.3, July
Vincenzo Liberatore, Circular arrangements and cyclic broadcast scheduling, Journal of Algorithms, v.51 n.2, p.185-215, May 2004 | TCP congestion control;long-range dependence;TCP adaptivity;self-similarity |
347556 | Staircase Failures Explained by Orthogonal Versal Forms. | Treating matrices as points in n^2-dimensional space, we apply geometry to study and explain algorithms for the numerical determination of the Jordan structure of a matrix. Traditional notions such as sensitivity of subspaces are replaced with angles between tangent spaces of manifolds in n^2-dimensional space. We show that the subspace sensitivity is associated with a small angle between complementary subspaces of a tangent space on a manifold in n^2-dimensional space. We further show that staircase algorithm failure is related to a small angle between what we call staircase invariant space and this tangent space. The matrix notions in n^2-dimensional space are generalized to pencils in 2mn-dimensional space. We apply our theory to special examples studied by Boley, Demmel, and Kagstrom. | Introduction
. The problem of accurately computing Jordan and Kronecker canonical structures
of matrices and pencils has captured the attention of many specialists in numerical linear algebra.
Standard algorithms for this process are denoted "staircase algorithms" because of the shape of the
resulting matrices [22, Page 370], but understanding of how and why they fail is incomplete. In this
paper, we study the geometry of matrices in n^2-dimensional space and pencils in 2mn-dimensional
space to explain these failures. This follows a geometrical program to complement and perhaps replace
traditional numerical concepts associated with matrix subspaces that are usually viewed in n dimensional
space.
This paper targets expert readers who are already familiar with the staircase algorithm. We refer
readers to [22, Page 370] and [10] for excellent background material and we list other literature in Section
1.1 for the reader wishing a comprehensive understanding of the algorithm. On the mathematical side,
it is also helpful if the reader has some knowledge of Arnold's theory of versal forms, though a dedicated
reader should be able to read this paper without such knowledge, perhaps skipping Section 3.2.
The most important contributions of this paper may be summarized:
• A geometrical explanation of staircase algorithm failures
Department of Mathematics Room 2-380, Massachusetts Institute of Technology, Cambridge, MA
02139-4307, edelman@math.mit.edu, http://www-math.mit.edu/~edelman, supported by NSF grants 9501278-DMS and
9404326-CCR.
† Department of Mathematics Room 2-333, Massachusetts Institute of Technology, Cambridge, MA 02139-4307,
http://www-math.mit.edu/~yanyuan, supported by NSF grants 9501278-DMS.
• Identification of three significant subspaces that decompose matrix or pencil space: T_b, R, S. The
most important of these spaces is S, which we choose to call the "staircase invariant space".
• The idea that the staircase algorithm computes an Arnold normal form that is numerically more
appropriate than Arnold's ``matrices depending on parameters''.
• A first order perturbation theory for the staircase algorithm
• Illustration of the theory using an example by Boley [3]
The paper is organized as follows: In Section 1.1 we briefly review the literature on staircase algorithms.
In Section 1.2 we introduce concepts that we call pure, greedy and directed staircase to emphasize
subtle distinctions on how the algorithm might be used. Section 1.3 contains some important messages
that result from the theory to follow.
Section 2 presents two similar looking matrices with very different staircase behavior. Section 3
studies the relevant n 2 dimensional geometry of matrix space while Section 4 applies this theory to the
staircase algorithm. The main result may be found in Theorem 6.
Sections 5, 6 and 7 mimic Sections 2, 3 and 4 for matrix pencils. Section 8 applies the theory towards
special cases introduced by Boley [3] and Demmel and B. Kagstrom [12].
1.1. Jordan/Kronecker Algorithm History. The first staircase algorithm was given by Kublanovskaya for Jordan structure in 1966 [31], where a normalized QR factorization is used for rank determination
and nullspace separation. Ruhe [34] first introduced the use of the SVD into the algorithm in
1970. The SVD idea is further developed by Golub and Wilkinson [23, Section 10]. Kagstrom and Ruhe
[27, 28] wrote the first library quality software for the complete JNF reduction, with the capability of
returning after different steps in the reduction. Recently, Chatitin-Chatelin and Frayss'e [6] developed a
non-staircase "qualitative" approach.
The staircase algorithm for the Kronecker structure of pencils is given by Van Dooren [13, 14, 15] and
Kagstrom and Ruhe [29]. Kublanovskaya [32] fully analyzed the AB algorithm, however, earlier work on
the AB algorithm goes back to the 1970s. Kagstrom [25, 26] gave a RGDSVD/RGQZD algorithm and
this provided a base for later work on software. Error bounds for this algorithm are given by Demmel
and Kagstrom [8, 9]. Beelen and Van Dooren [2] gave an improved algorithm which requires O(m 2 n)
operations for m \Theta n pencils. Boley [3] studied the sensitivity of the algebraic structure. Error bounds
are given by Demmel and Kagstrom [10, 11].
Staircase algorithms are used both theoretically and practically. Elmroth and Kagstrom [19] use
the staircase algorithm to test the set of 2-by-3 pencils hence to analyze the algorithm, Demmel and
Edelman [7] use the algorithm to calculate the dimension of matrices and pencils with a given form.
Van Dooren [14, 20, 30, 5], Emami-Naeini [20], Kautsky and Nichols [30], Boley [5], Wicks and DeCarlo
[35] consider systems and control applications. Software for control theory is provided by Demmel and
Kagstrom [12].
A number of papers use geometry to understand Jordan and Kronecker structure problems. Fair-
grieve [21] regularizes by taking the most degenerate matrix in a neighborhood, Edelman, Elmroth and
Kagstrom [17, 18] study versality and stratifications, and Boley [4] concentrates on stratifications.
1.2. The Staircase Algorithms. Staircase algorithms for the Jordan and Kronecker form work by
making sequences of rank decisions in combination with eigenvalue computations. We wish to emphasize
a few variations on how the algorithm might be used by coining the terms pure staircase, greedy
staircase, and directed staircase. Pseudocode for the Jordan versions appear near the end of this
subsection. In combination with these three choices, one can choose an option of zeroing or not. These
choices are explained below.
The three variations for purposes of discussion are considered in exact arithmetic. The pure version
is the pure mathematician's algorithm: it gives precisely the Jordan structure of a given matrix. The
greedy version (also useful for a pure mathematician!) attempts to find the most "interesting" Jordan
structure near the given matrix. The directed staircase attempts to find a nearby matrix with a
preconceived Jordan structure. Roughly speaking, the difference between pure, greedy, and directed is
whether the Jordan structure is determined by the matrix, a user controlled neighborhood of the matrix,
or directly by the user respectively.
In the pure staircase algorithm, rank decisions are made using the singular value decomposition. An
explicit distinction is made between zero singular values and nonzero singular values. This determines
the exact Jordan form of the input matrix.
The greedy staircase algorithm attempts to find the most interesting Jordan structure nearby the
given matrix. Here the word "interesting" (or degenerate) is used in the sense of precious gems, the
rarer, the more interesting. Algorithmically, as many singular values as possible are thresholded to zero
with a user defined threshold. The more singular values that are set to 0, the rarer in the sense of
codimension (see [7, 17, 18]).
The directed staircase algorithm allows the user to decide in advance what Jordan structure is
desired. The Jordan structure dictates which singular values are set to 0. Directed staircase is used in a
few special circumstances. For example, it is used when separating the zero Jordan structure from the
right singular structure (used in GUPTRI [10, 11]). Moreover, Elmroth and Kagstrom imposed structures
by the staircase algorithm in their investigation of the set of 2 \Theta 3 pencils [19]. Recently, Lippert and
Edelman [33] use directed staircase to compute an initial guess for a Newton minimization approach to
computing the nearest matrix with a given form in the Frobenius norm.
In the greedy and directed modes if we explicitly zero the singular values, we end up computing a
new matrix in staircase form that has the same Jordan structure as a matrix near the original one. If we
do not explicitly zero the singular values, we end up computing a matrix that is orthogonally similar
to the original one (in the absence of roundoff errors), that is nearly in staircase form. For example,
in GUPTRI [11], the choice of whether to zero the singular values is made by the user with an input
parameter named zero which may be true or false.
To summarize the many choices associated with a staircase algorithm, there are really five distinct
algorithms worth considering: the pure algorithm stands on its own, otherwise the two choices of combinatorial
structure (greedy and directed) may be paired with the choice to zero or not. Thereby we have
the five algorithms:
1. pure staircase
2. greedy staircase with zeroing
3. greedy staircase without zeroing
4. directed staircase with zeroing
5. directed staircase without zeroing
Notice that in the pure staircase, we do not specify zeroing or not, since both will give the same
result vacuously.
Of course algorithms run in finite precision. One further detail is that there is some freedom in
the singular value calculations which lead to an ambiguity in the staircase form: in the case of unequal
singular values, an order must be specified, and when singular values are equal, there is a choice of basis
to be made. We will not specify any order for the SVD, except that all singular values considered to be
zero appear first.
In the ith loop iteration, we use w i to denote the number of singular values that are considered to
be 0. For the directed algorithm, w i are input, otherwise, w i are computed. In pseudocode, we have the
following staircase algorithms for computing the Jordan form corresponding to the eigenvalue λ.
INPUT: a matrix A and an eigenvalue λ;
specify pure, greedy, or directed mode
specify zeroing or not zeroing
OUTPUT:
matrix A that may or may not be in staircase form
Initialize A_tmp = A − λI, n_tmp = n, i = 1
while A_tmp not full rank
Use the SVD to compute an n_tmp by n_tmp unitary matrix V whose leading w_i columns
span the nullspace or an approximation of it:
Choice I: Pure: Use the SVD algorithm to compute w_i and the exact nullspace.
Choice II: Greedy: Use the SVD algorithm and threshold the small singular values with
a user specified tolerance, thereby defining w_i. The corresponding singular vectors
become the first w_i columns of V.
Choice III: Directed: Use the SVD algorithm; the w_i are defined from the input Jordan
structure. The w_i singular vectors are the first w_i columns of V.
Apply the similarity transformation diag(I, V) to A, so the revealed near-null directions become the leading columns of A_tmp.
Let A_tmp be the lower right (n_tmp − w_i)-by-(n_tmp − w_i) corner of A_tmp; set n_tmp = n_tmp − w_i and i = i + 1
endwhile
If zeroing, return A in the form λI + a block strictly upper triangular matrix.
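For concreteness, here is a minimal Python sketch of the greedy variant with zeroing (an illustration of the loop above, not the GUPTRI implementation and not its thresholding rule):

import numpy as np

def greedy_staircase(A, lam=0.0, tol=1e-12, zeroing=True):
    """Sketch of a greedy staircase reduction for eigenvalue lam.
    Returns the transformed matrix and the stage sizes w_i."""
    n = len(A)
    A = np.array(A, dtype=float) - lam * np.eye(n)
    offset, stages = 0, []
    while offset < n:
        B = A[offset:, offset:]
        U, s, Vt = np.linalg.svd(B)
        w = int(np.sum(s <= tol * max(1.0, s[0])))  # singular values treated as zero
        if w == 0:                                  # trailing block has full rank: stop
            break
        m = n - offset
        V = np.hstack([Vt.T[:, m - w:], Vt.T[:, :m - w]])  # near-null vectors first
        Q = np.eye(n)
        Q[offset:, offset:] = V
        A = Q.T @ A @ Q                             # block-diagonal orthogonal similarity
        if zeroing:
            A[offset:, offset:offset + w] = 0.0     # explicit zeros in the revealed block column
        stages.append(w)
        offset += w
    return A + lam * np.eye(n), stages

# Usage on a single 3-by-3 Jordan block with eigenvalue 0:
# A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
# print(greedy_staircase(A)[1])   # -> [1, 1, 1], i.e. three stages of size one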
While the staircase algorithm often works very well, it has been known to fail. We can say that
the greedy algorithm fails if it does not detect a matrix with the least generic form [7] possible within
a given tolerance. We say that the directed algorithm fails if the staircase form it produces is very far
(orders of magnitude, in terms of the usual Frobenious norm of matrix space) from the staircase form
of the nearest matrix with the intended structure. In this paper, we mainly concentrate on the greedy
staircase algorithm and its failure, but the theory is applicable to both approaches. We emphasize that
we are intentionally vague about how "far" is "far" as this may be application dependent, but we will
consider several orders of magnitude to constitute the notion of "far".
1.3. Geometry of Staircase and Arnold forms. Our geometrical approach is inspired by
Arnold's theory of versality [1]. For readers already familiar with Arnold's theory, we point out that we
have a new normal form that enjoys the same properties as Arnold's original form, but is more useful
numerically. For numerical analysts, we point out that these ideas are important for understanding the
staircase algorithm. Perhaps it is safe to say that numerical analysts have had an "Arnold Normal Form"
for years, but we did not recognize as such - the computer was doing it for us automatically.
The power of the normal form that we introduce in Section 3 is that it provides a first order rounding
theory of the staircase algorithm. We will show that instead of decomposing the perturbation space into
the normal space and a tangent space at a matrix A, the algorithm chooses a so called staircase invariant
space to take the place of the normal space. When some directions in the staircase invariant space are
very close to the tangent space, the algorithm can fail.
From the theory, we decompose the matrix space into three subspaces that we call T b , R and S, the
precise definitions of the three spaces are given in Definitions 1 and 3. Here, T b and R are two subspaces
of the tangent space, and S is a certain complimentary space of the tangent space in the matrix space.
For the impatient reader, we point out that angles between these spaces are related to the behavior of
the staircase algorithm; note that R is always orthogonal to S. (We use ! \Delta; \Delta ? to represent the angle
between two spaces.)
angles components
A Staircase fails !
no weak stair no large large =2 small small
stair no large small =2 small large
small =2 large large
Here, by a weak stair [16], we mean the near rank deficiency of any superdiagonal block of the strictly
block upper triangular matrix A.
2. A Staircase Algorithm Failure to Motivate the Theory. Consider the two matrices A_1 and A_2, where δ = 1.5e-9 is approximately on the order of the square root of the double precision machine epsilon, which is roughly 2.2e-16. Both of these matrices clearly have the Jordan structure J_3(0), but the staircase algorithm on A_1 and A_2 can behave very differently.
To test this, we used the GUPTRI [11] algorithm. GUPTRI^1 requires an input matrix A and two tolerance parameters EPSU and GAP. We ran GUPTRI on perturbations ~A_1 and ~A_2 of A_1 and A_2, where the perturbation size, 2.2e-14, is roughly 100 times the double precision machine ε. The smallest singular value of each of the two matrices ~A_1 and ~A_2 is σ ≈ 8.8816e-15. We set GAP to be always 1, and let EPSU be determined by a parameter a, which we vary (the tolerance is effectively a). Our observations are tabulated below.
a (effective tolerance) | computed Jordan structure for ~A_1 | computed Jordan structure for ~A_2
a < γ | J_1(0) ⊕ J_1(α) ⊕ J_1(β) | J_1(0) ⊕ J_1(α) ⊕ J_1(β)
γ < a < σ_2 | J_3(0), O(10^-6) away | J_3(0), O(10^-14) away
a > σ_2 | J_2(0) ⊕ J_1(0) | J_2(0) ⊕ J_1(0)
Here, we use J_k(λ) to represent a k × k Jordan block with eigenvalue λ. In the table, typically α ≠ β ≠ 0. Setting a small (smaller than γ here, which is the smaller singular value in the second stage), the software returns two nonzero singular values in the first and second stages of the algorithm and one nonzero singular value in the third stage. Setting EPSU × GAP large (larger than σ_2 here), we zero two singular values in the first stage and one in the second stage, giving the structure J_2(0) ⊕ J_1(0) for both ~A_1 and ~A_2. (There is a matrix within O(10^-9) of A_1 and A_2 of the form J_2(0) ⊕ J_1(0).) The most interesting case is in between. For appropriate EPSU × GAP = a (between γ and σ_2 here), we zero one singular value in each of the three stages, getting a J_3(0) which is O(10^-14) away for A_2, while we can only get a J_3(0) which is O(10^-6) away for A_1. In other words, the staircase algorithm fails for A_1 but not for A_2. As pictured in Figure 2.1, the A_1 example indicates that a matrix
1 GUPTRI [10, 11] is a "greedy" algorithm with a sophisticated thresholding procedure based on two input parameters EPSU and GAP ≥ 1. We threshold the leading singular values σ_1, ..., σ_{k-1} when σ_k exceeds the maximum of two quantities determined by GAP and EPSU (defining σ_{n+1} ≡ 0). The first argument of the maximum ensures a large gap between thresholded and non-thresholded singular values. The second argument ensures that σ_{k-1} is small. Readers who look at the GUPTRI software should note that singular values are ordered from smallest to largest, contrary to modern convention.
of the correct Jordan structure may be within the specified tolerance, but the staircase algorithm may
fail to find it.
Consider the situation when A_1 and A_2 are transformed using a random orthogonal matrix Q. As a second experiment, we pick a random orthogonal matrix Q whose first two rows are
(−0.39878, 0.20047, −0.89487) and (−0.84538, −0.45853, 0.27400),
and take ~A_i = Q^T A_i Q for i = 1, 2, computed in floating point. This will impose a perturbation of order ε. We
ran GUPTRI on these two matrices; the following is the result:
a (effective tolerance) | computed Jordan structure for ~A_1 | computed Jordan structure for ~A_2
γ < a < σ_2 | no J_3(0) detected | J_3(0), O(10^-6) away
In the table, other values are the same as in the previous table.
In this case, GUPTRI is still able to detect a J 3 structure for ~
although the one it finds is O(10 \Gamma6 )
away. But it fails to find any J 3 structure at all for ~
A 1 . The comparison of A 1 and A 2 in the two
experiments indicates that the explanation is more subtle than the notion of a weak stair (a superdiagonal
block that is almost column rank deficient) [16].
In this paper we present a geometrical theory that clearly predicts the difference between A 1 and A 2 .
The theory is based on how close certain directions that we will denote staircase invariant directions
are to the tangent space of the manifold of matrices similar to the matrix with specified canonical form.
It turns out that for A 1 , these directions are nearly in the tangent space, but not for A 2 . This is the
crucial difference!
The tangent directions and the staircase invariant directions combine to form a "versal deformation"
in the sense of Arnold [1], but one with more useful properties for our purposes.
3. Staircase Invariant Space and Versal Deformations.
3.1. The Staircase Invariant Space and Related Subspaces. We consider block matrices
as in
Figure
3.1. Dividing a matrix A into blocks of row and column sizes we obtain a
general block matrix. A block matrix is conforming to A if it is also partitioned into blocks of
JA
Fig. 2.1. The staircase algorithm fails to find A1 at distance 2.2e-14 from ~
but does find a J3 (0) or a J2 (0) \Phi J1 (0)
if given a much larger tolerance. (The latter is ffi away from ~
.)
in the same manner as A. If a general block matrix has non-zero entries only in the
upper triangular blocks excluding the diagonal blocks, we call it a block strictly upper triangular
matrix. If a general block matrix has non-zero entries only in the lower triangular blocks including the
diagonal blocks, we call it a block lower triangular matrix. A matrix A is in staircase form if we
can divide A into blocks of sizes A is a strictly block upper triangular matrix
and every superdiagonal block has full column rank. If a general block matrix only has nonzero entries
on its diagonal blocks, and each diagonal block is an orthogonal matrix, we call it a block diagonal
orthogonal matrix. We call the matrix e B a block orthogonal matrix (conforming to
a block anti-symmetric matrix (conforming to (i.e. B is anti-symmetric with zero diagonal blocks.
Here, we abuse the word "conforming" since e B does not have a block structure.)
Definition 1. Suppose A is a matrix in staircase form. We call S a staircase invariant matrix
of A if S T is block lower triangular. We call the space of matrices consisting of all such S
the staircase invariant space of A, and denote it by S.
We remark that the columns of S will not be independent except possibly when can be
the zero matrix as an extreme case. However the generic sparsity structure of S may be determined by
general block matrix
block strictly upper
triangular matrix
block lower
triangular matrix00000000000000000000000000000011111111111111111111111111111111111111110000000000000011111111111111000000011111111111111
matrix in staircase
block diagonal
orthogonal matrix
block orthogonal
matrix
arbitrary block0000000000000000000011111111111111111111
full column rank block00000000000000111111111111111111111
orthogonal block special block zero block
Fig. 3.1. A schematic of the block matrices defined in the text.
the sizes of the blocks. For example, let A have the staircase form
\Theta \Theta
\Theta \Theta
\Theta \Theta
\Theta \Theta
\Theta \Theta
\Theta \Theta
\Theta
\Theta
\Theta
\Theta \Theta
\Theta \Theta
\Theta
\Theta
\Theta
\Theta \Theta \Theta
\Theta \Theta \Theta
\Theta \Theta \Theta
\Theta \Theta \Theta
\Theta \Theta \Theta
\Theta \Theta \Theta
\Theta \Theta \Theta
\Theta \Theta
\Theta \Theta
\Theta \Theta
\Theta \Theta
\Theta \Theta \Theta \Theta \Theta \Theta \Theta \ThetaC C C C C C C C A
is a staircase invariant matrix of A if every column of S is a left eigenvector of A. Here, the ffi notation
indicates 0 entries in the block lower triangular part of S that are a consequence of the requirement that
every column be a left eigenvector. This may be formulated as a general rule: if we find more than one
block of size n i \Theta n i then only those blocks on the lowest block row appear in the sparsity structure of
S. For example, the ffi do not appear because they are above another block of size 2. As a special case,
if A is strictly upper triangular, then S is 0 above the bottom row as is shown below. Readers familiar
with Arnold's normal form will notice that if A is a given single Jordan block in normal form, then S
contains the versal directions.
\Theta \Theta \Theta \Theta \Theta \Theta
\Theta \Theta \Theta \Theta \Theta
\Theta \Theta \Theta \Theta
\Theta \Theta \Theta
\Theta \Theta
\Theta \Theta \Theta \Theta \Theta \Theta \ThetaC C C C C A
Definition 2. Suppose A is a matrix. We call O(A) is a non-singular matrixg
the orbit of a matrix A. We call T any matrixg the tangent space of O(A) at
A.
Theorem 1. Let A be an n \Theta n matrix in staircase form, then the staircase invariant space S of A
and the tangent space T form an oblique decomposition of n \Theta n matrix space, i.e. R
Proof:
Assume that A i;j , the (i; j) block of A, is n i \Theta n j for and of course A
There are n 2
1 degrees of freedom in the first block column of S because there are n 1 columns and
each column may be chosen from the n 1 dimensional space of left eigenvectors of A. Indeed there are n 2
degrees of freedom in the ith block, because each of the n i columns may be chosen from the n i dimensional
space of left eigenvectors of the matrix obtained from A by deleting the first rows and columns.
The total number of degrees of freedom is
i , which combined with dim(T
gives the dimension of the whole space n 2 .
If S 2 S is also in T then S has the form AX \Gamma XA for some matrix X. Our first step will be
to show that X must have block upper triangular form after which we will conclude that AX \Gamma XA is
strictly block upper triangular. Since S is block lower triangular, it will then follow that if it is also in
must be 0.
Let i be the first block column of X which does not have block upper triangular structure. Clearly
the ith block column of XA is 0 below the diagonal block, so that the ith block column of
contains vectors in the column space of A. However every column of S is a left eigenvector of A from the
definition (notice that we do not require these column vectors of S to be independent, the one Jordan
block case is a good example.), and therefore orthogonal to the column space of A. Thus the ith block
column of S is 0, and from the full column rank conditions on the superdiagonal blocks of A, we conclude
that X is 0 below the block diagonal.
Definition 3. Suppose A is a matrix. We call O is a block anti-symmetric
matrix conforming to Ag the block orthogonal-orbit of a matrix A. We call
is a block anti-symmetric matrix conforming to Ag the block tangent space of the block
orthogonal orbit O b (A) at A. We call R j f block strictly upper triangular matrix conforming to Ag the
strictly upper block space of A.
Note that because of the complementary structure of the two matrices R and S, we can see that S
is always orthogonal to R.
Theorem 2. Let A be an n \Theta n matrix in staircase form, then the tangent space T of the orbit
O(A) can be split into the block tangent space T b of the orbit O b (A) and the strictly upper block space
Proof:
We know that the tangent space T of the orbit at A has dimension
. If we decompose
into a block upper triangular matrix and a block anti-symmetric matrix, we can decompose every
strictly upper triangular matrix and a matrix in T b . Since R, each of T b
and R has dimension 1=2(n
must both be exactly of dimension 1=2(n
Thus we know that they actually form a decomposition of T , and the strictly upper block space R can
also be represented as R j conforming to Ag:
Corollary 1. R n 2
Figure 3.2.
In Definition 3, we really do not need the whole set
merely need a small neighborhood around Readers may well wish to skip ahead to Section 4, but
for those interested in mathematical technicalities we review a few simple concepts. Suppose that we
have partitioned An orthogonal decomposition of n-dimensional space into k mutually
orthogonal subspaces of dimensions is a point on the flag manifold. (When this is
the Grassmann manifold). Equivalently, a point on the flag manifold is specified by a filtration, i.e.,
a nested sequence of subspaces V i of dimension
The corresponding decomposition can be written as
This may be expressed concretely. If from a unitary matrix U , we only define V i for
A
R
Fig. 3.2. A diagram of the orbits and related spaces. The similarity orbit at A is indicated by a surface O(A), the
block orthogonal orbit is indicated by a curve O b (A) on the surface, the tangent space of O b (A), T b is indicated by a line,
R which lies on O(A) is pictured as a line too, and the staircase invariant space S is represented by a line pointing away
from the plane.
the span of the first n 1 a point on the flag
manifold. Of course many unitary matrices U will correspond to the same flag manifold point. In an
open neighborhood of fe B g, near the point e the map between fe B g and an open subset of the
flag manifold is a one to one homeomorphism. The former set is referred to as a local cross section [24,
Lemma 4.1, page 123] in Lie algebra. No two unitary matrices in a local cross section would have the
same sequence of subspaces
3.2. Staircase as a Versal Deformation. Next, we are going to build up the theory of our versal
form. Following Arnold [1], a deformation of a matrix A, is a matrix A() with entries that are power
series in the complex variables i , where convergent in a neighborhood of
with A.
A good introduction to versal deformations may be found in [1, Section 2.4] or [17]. The key property
of a versal deformation is that it has enough parameters so that no matter how the matrix is perturbed,
it may be made equivalent by analytic transformations to the versal deformation with some choice of
parameters. The advantage of this concept for a numerical analyst is that we might make a rounding
error in any direction and yet still think of this as a perturbation to a standard canonical form.
Let ae M be a smooth submanifold of a manifold M . We consider a smooth mapping A : !M
of another manifold into M , and let be a point in such that A() 2 N . The mapping A is called
transversal to N at if the tangent space to M at A() is the direct sum
Here, TM A() is the tangent space of M at A(), TN A() is the tangent space of N at A(), T
is the tangent space of at and A is the mapping from T to TM A() induced by A (It is the
Jacobian).
Theorem 3. Suppose A is in staircase form. Fix S i 2
dim(S). It follows that
is a versal deformation of every particular A() for small enough. A() is miniversal at
is a basis of S.
Proof:
Theorem 1 tells us the mapping A() is transversal to the orbit at A. From the equivalence of transversality
and versality [1], we know that A() is a versal deformation of A. Since the dimension of the
staircase invariant space S is the codimension of the orbit, A() given by Equation (3.1) is a miniversal
deformation if the S i are a basis for S (i.e. dim(S)). More is true, A() is a versal deformation
of every matrix in a neighborhood of A, in other words, the space S is transversal to the orbit of every
A(). Take a set of matrices X i s.t. the X i A \Gamma AX i form a basis of the tangent space T of the orbit at
A. We know T \Phi
, here \Phi implies T " so there is a fixed minimum angle ' between T
and S. For small enough , we can guarantee that the X i are still linearly independent
of each other and they span a subspace of the tangent space at A() that is at least, say, '=2 away from
S. This means that the tangent space at A() is transversal to S.
Arnold's theory concentrates on general similarity transformations. As we have seen above, the
staircase invariant directions are a perfect versal deformation. This idea can be refined to consider
similarity transformations that are block orthogonal. Everything is the same as above, except that we
add the block strictly upper triangular matrices R to compensate for the restriction to block orthogonal
matrices. We now spell this out in detail:
Definition 4. If the matrix C() is block orthogonal for every , then we refer to the deformation
as a block orthogonal deformation.
We say that two deformations A() and B() are block orthogonally-equivalent if there exists
a block orthogonal deformation C() of the identity matrix such that
We say that a deformation A() is block orthogonally-versal if any other deformation B()
is block orthogonally-equivalent to the deformation A(OE()). Here, OE is a mapping analytic at 0 with
Theorem 4. A deformation A() of A is block orthogonally-versal iff the mapping A() is transversal
to the block orthogonal-orbit of A at
Proof:
The proof follows Arnold [1, Sections 2.3 and 2.4] except that we use the block orthogonal version of the
relevant notions, and we remember that the tangents to the block orthogonal group are the commutators
of A with the block anti-symmetric matrices.
Since we know that T can be decomposed into T b \Phi R, we get:
Theorem 5. Suppose a matrix A is in staircase form. Fix S i 2
and k dim(S). Fix R It follows that
is a block orthogonally-versal deformation of every particular A() for small enough. A() is block
orthogonally-miniversal at A if fS i g, fR j g are bases of S and R.
It is not hard to see that the theory we set up for matrices with all eigenvalues 0 can be generalized
to a matrix A with different eigenvalues. The staircase form is a block upper triangular matrix, each
of its diagonal blocks of the form λ_i I + A_i with A_i in the staircase form defined at the beginning of this
chapter, and superdiagonal blocks arbitrary matrices. Its staircase invariant space is spanned by the
block diagonal matrices, each diagonal block being in the staircase invariant space of the corresponding
diagonal block A i . R space is spanned by the block strictly upper triangular matrices s.t. every diagonal
block is in the R space of the corresponding A i . T b is defined exactly the same as in the one eigenvalue
case. All our theorems are still valid. When we give the definitions or apply the theorems, we do not
really use the values of the eigenvalues, all that is important is how many different eigenvalues A has.
In other words, we are working with bundle instead of orbit.
These forms are normal forms that have the same property as Arnold's normal form: they are
continuous under perturbation. The reason that we introduce block orthogonal notation is that the
staircase algorithm is a realization to first order of the block orthogonally-versal deformation, as we will
see in the next section.
4. Application to Matrix Staircase Forms. We are ready to understand the staircase algorithm
described in Section 1.2. We concentrate on matrices with all eigenvalues 0, since otherwise, the staircase
algorithm will separate other structures and continue recursively.
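For orientation, here is a minimal NumPy sketch of one common null-space-revealing formulation of such a staircase reduction for a matrix with all eigenvalues 0; the rank decisions and zeroing conventions of the actual algorithm of Section 1.2 are more careful, so this sketch and its tolerance are only illustrative.

```python
import numpy as np

def stair(A, tol=1e-12):
    """Reduce A (all eigenvalues ~ 0) towards staircase form.

    Returns (Q, T) with T = Q.T @ A @ Q block upper triangular with
    numerically zero diagonal blocks; a simplified illustration only.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    k = 0                                        # columns already processed
    while k < n:
        B = A[k:, k:]
        U, s, Vt = np.linalg.svd(B)
        r = int(np.sum(s > tol * max(1.0, s[0])))  # numerical rank of trailing block
        null_dim = B.shape[1] - r
        if null_dim == 0:                        # trailing block nonsingular: stop
            break
        V = Vt.T
        P = np.hstack([V[:, r:], V[:, :r]])      # null-space directions first
        Z = np.eye(n)
        Z[k:, k:] = P
        A = Z.T @ A @ Z
        Q = Q @ Z
        k += null_dim
    return Q, A
```

Running it on A + εE for a small ε and comparing the output with A is a convenient way to observe the first-order behavior analyzed below.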
We use the notation stair(A) to denote the output of the staircase algorithm applied to A, as described in
Section 1.2. Now suppose that we have a matrix A which is in staircase form. To zeroth order, any
instance of the staircase algorithm replaces A with Q_0^T A Q_0, with Q_0 block diagonal orthogonal.
Of course this does not change the staircase structure of A; the Q_0 represents the arbitrary rotations
within the subspaces, and can depend on how the software is written, and the subtlety of roundoff errors
when many singular values are 0. Next, suppose that we perturb A by εE. According to Corollary 1, we
can decompose the perturbation matrix uniquely as E = S + T_b + R.
Theorem 6 states that in addition to some block diagonal matrix Q_0, the staircase algorithm will apply
a block orthogonal similarity transformation to kill the perturbation in T_b.
Theorem 6. Suppose that A is a matrix in staircase form and E is any perturbation matrix. The
staircase algorithm (without zeroing) on A + εE will produce an orthogonal matrix Q (depending on ε)
and the output matrix
stair(A + εE) = Q^T (A + εE) Q = Â + ε(Ŝ + R̂) + o(ε),
where Â has the same staircase structure as A, Ŝ is a staircase invariant matrix of Â, and R̂ is a block
strictly upper triangular matrix. If singular values are zeroed out, then the algorithm further kills εŜ
and outputs Â + εR̂ + o(ε).
Proof:
After the first stage of the staircase algorithm, the first block column is orthogonal to the other columns,
and this property is preserved through the completion of the algorithm. Generally, after the ith iteration,
the ith block column below (including) the diagonal block is orthogonal to all other columns to its right,
and this property is preserved all through. So when the algorithm terminates, we will have a matrix
whose columns below (including) the diagonal block are orthogonal to all the columns to the right, in
other words, it is a matrix in staircase form plus a staircase invariant matrix.
We can always write the similarity transformation matrix as Q = Q_0 (I + εX) + o(ε), where Q_0 is
a block diagonal orthogonal matrix and X is a block anti-symmetric matrix that does not depend on ε
because of the local cross section property that we mentioned at the beginning of Section 3. Notice that
Q_0 is not a constant matrix decided by A; it depends on εE to its first order, and we should have written
Q_0(εE) instead of Q_0. However, we do not expand Q_0 since, as long as it is a block diagonal
orthogonal transformation, it does not change the staircase structure of the matrix. Hence, we get
stair(A + εE) = Q^T (A + εE) Q = Â + ε(Ŝ + T̂_b + R̂ + ÂX − XÂ) + o(ε) = Â + ε(Ŝ + R̂) + o(ε),   (4.1)
where Â, Ŝ, R̂, and T̂_b are respectively Q_0^T A Q_0, Q_0^T S Q_0, Q_0^T R Q_0, and Q_0^T T_b Q_0. It is easy to check that
T̂_b is still in the block tangent space of Â. X is a block anti-symmetric matrix satisfying T̂_b = XÂ − ÂX.
We know that X is uniquely determined because the dimensions of the space of T̂_b and the block anti-symmetric
matrix space are the same. The reason that T̂_b = XÂ − ÂX, and hence the last equality in (4.1) holds, is
that the algorithm forces the output form as described in the first paragraph of this proof: Â + εR̂
is in staircase form and εŜ is a staircase invariant matrix. Since (S ⊕ R) ∩ T_b is the zero matrix, the T_b
term must vanish.
To understand more clearly what this observation tells us, let us check some simple situations. If
the matrix A is only perturbed in the direction S or R, then the similarity transformation will be simply
a block diagonal orthogonal matrix Q 0 . If we ignore this transformation which does not change any
structure, we can think of the output to be unchanged from the input, this is the reason we call S the
staircase invariant space. The reason we did not include R into the staircase invariant space is that
fflR is still within O b (A). If the matrix A is only perturbed along the block tangent direction T b ,
then the staircase algorithm will kill the perturbation and do a block diagonal orthogonal similarity
transformation.
Although the staircase algorithm decides this Q 0 step by step all through the algorithm (due to SVD
rank decisions), we can actually think of the Q 0 as decided at the first step. We can even ignore this Q 0
because the only reason it comes up is that the svd we use follows a specific way to sort singular values
when they are different, and to choose the basis of the singular vector space when the same singular
values appear.
We know that every matrix A can be reduced to a staircase form under an orthogonal transformation,
in other words, we can always think of any general matrix M as P T AP , where A is in staircase form.
Thus in general, the staircase algorithm always introduces an orthogonal transformation and returns a
matrix in staircase form and a first order perturbation in its staircase invariant direction, i.e., stair(M + εE) = (Â + εR̂) + εŜ + o(ε).
It is now obvious that if a staircase form matrix A has its S and T almost normal to each other,
then the staircase algorithm will behave very well. On the other hand, if S is very close to T then it
will fail. To emphasize this, we write it as a conclusion.
Conclusion 1. The angle between the staircase invariant space S and the tangent space T decides
the behavior of the staircase algorithm. The smaller the angle, the worse the algorithm behaves.
In the one Jordan block case, we have an if-and-only-if condition for S to be near T .
Theorem 7. Let A be an n × n matrix in staircase form and suppose that all of its block sizes are
1 × 1; then S(A) is close to T(A) iff the following two conditions hold:
(1) (row condition) there exists a non-zero row in A s.t. every entry on this row is o(1);
(2) (chain condition) there exists a chain of length n − k with the chain value O(1), where k is the lowest
row satisfying (1).
Here, we call A_{i_1,i_2}, A_{i_2,i_3}, …, A_{i_t,i_{t+1}} a chain of length t, and the product
A_{i_1,i_2} A_{i_2,i_3} ⋯ A_{i_t,i_{t+1}} is the
chain value.
Proof
Notice that S being close to T is equivalent to S being almost perpendicular to N, the normal space
of the orbit at A. In this case, N is spanned by {I, A^T, (A^T)^2, …, (A^T)^{n−1}} and S consists of matrices with nonzero
entries only in the last row. Considering the angle between any two matrices from the two spaces, it is
straightforward to show that S is almost perpendicular to N is equivalent to
(1) there exists a k s.t. the (n, k) entry of each of the matrices I, A, A^2, …, A^{n−1} is o(1);
(2) if the entry is o(1), then it must have some other O(1) entry in the same matrix. Assume k is
the largest choice if there are different k's. By a combinatorial argument, we can show that these two
conditions are equivalent to the row and chain conditions respectively in our theorem.
Remark 1. Note that the existence of an O(1) entry in a matrix is equivalent to the existence of
a singular value of the matrix of O(1). So, the chain condition is the same as saying that the singular
values of A^{n−k} are not all O(ε) or smaller.
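As a concrete illustration of the row and chain conditions of Theorem 7 (all blocks 1 × 1), the following sketch checks them numerically; the thresholds standing in for o(1) and O(1) are of course arbitrary choices of ours.

```python
import numpy as np

def weak_rows(A, small=1e-8):
    """1-based indices of nonzero rows of A all of whose entries are below `small`."""
    idx = []
    for i in range(A.shape[0]):
        m = np.abs(A[i]).max()
        if 0 < m < small:
            idx.append(i + 1)
    return idx

def chain_condition(A, k, big=1e-2):
    """Remark 1: some entry (equivalently, some singular value) of A^(n-k) is 'O(1)'."""
    n = A.shape[0]
    return np.abs(np.linalg.matrix_power(A, n - k)).max() > big

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1e-9],
              [0.0, 0.0, 0.0]])
rows = weak_rows(A)
if rows:
    k = max(rows)   # the lowest (bottom-most) row satisfying the row condition
    print("S(A) is close to T(A):", chain_condition(A, k))
```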
Generally, we do not have an if-and-only-if condition for S to be close to T , we only have a necessary
condition, that is, only if at least one of the superdiagonal blocks of the original unperturbed matrix has
a singular value almost 0, i.e. it has a weak stair, will S be close to T . Actually, it is not hard to show
that the angle between T b and R is at most in the same order as the smallest singular value of the weak
stair. So, when the perturbation matrix E is decomposed into S + T_b + R, the components along T_b and R are typically very
large, but whether the component along S is large or not depends on whether S is close to T or not.
Notice that equation (4.1) is valid for sufficiently small ffl. What range of ffl is "sufficiently small"?
Clearly, ffl has to be smaller than the smallest singular value ffi of the weak stairs. Moreover, the algorithm
requires the perturbation along T and S to be both smaller than ffi. Assume the angle between T and
S is ', then generally, when ' is large, we would expect an ffl smaller than ffi to be sufficiently small.
However, when ' is close to 0, for a random perturbation, we would expect an ffl in the order of ffi=' to be
sufficiently small. Here, again, we can see that the angle between S and T decides the range of effective
ffl. For small ', when ffl is not sufficiently small, we observed some discontinuity in the 0th order term in
Equation (4.1) caused by the ordering of singular values during certain stages of the algorithm. Thus,
instead of the identity matrix, we get a permutation matrix in the 0th order term.
The theory explains why the staircase algorithm behaves so differently on the two matrices A_1 and
A_2 in Section 2. Using Theorem 7, we can see that A_1 is a staircase failure while A_2 is not.
By a direct calculation, we find that the tangent space and the staircase invariant space of A_1 are
very close, while this is not the situation for A_2.
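Such direct calculations can be scripted; the following sketch computes the smallest principal angle between S (matrices supported on the last row, as used in the proof of Theorem 7 for 1 × 1 blocks) and the tangent space T spanned by the commutators. The test matrix and the weak-stair value delta are our own illustrative choices, not A_1 or A_2 (which are defined in Section 2).

```python
import numpy as np

def orth(M, tol=1e-12):
    # Orthonormal basis of the column space of M, via the SVD.
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol * s.max()]

def tangent_space(A):
    # Columns are vec(E_ij A - A E_ij) for all elementary matrices E_ij.
    n = A.shape[0]
    cols = []
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            cols.append((E @ A - A @ E).ravel())
    return np.column_stack(cols)

def invariant_space(n):
    # For 1x1 blocks, S is spanned by the matrices supported on the last row.
    cols = []
    for j in range(n):
        S = np.zeros((n, n))
        S[-1, j] = 1.0
        cols.append(S.ravel())
    return np.column_stack(cols)

def smallest_angle(U, V):
    # Smallest principal angle (radians) between the column spaces of U and V.
    sv = np.linalg.svd(orth(U).T @ orth(V), compute_uv=False)
    return np.arccos(min(1.0, sv.max()))

delta = 1e-6                      # a weak stair
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, delta],
              [0.0, 0.0, 0.0]])
print(smallest_angle(tangent_space(A), invariant_space(A.shape[0])))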
When transforming to get Ã_1 and Ã_2 with Q, which is an approximate orthogonal matrix up to the
order of the square root of the machine precision ε_m, another error of the order of √ε_m is introduced;
it is comparable with δ in our experiment, so the staircase algorithm actually runs on a slightly shifted
version of the matrices. That is why we see an R component as large as O(10^{−6}) added to J_3 in the second table
for Ã_2. We might as well call A_2 a staircase failure in this situation, but A_1 suffers a much worse failure
under the same situation, in that the staircase algorithm fails to detect a J_3 structure at all. This is
because the tangent space and the staircase invariant space are so close that the S and T components
are very large, and hence Equation (4.1) does not apply any more.
5. A Staircase Algorithm Failure to Motivate the Theory for Pencils. The pencil analog
to the staircase failure in Section 2 is
1.5e-8. This is a pencil with the structure L 1 \Phi J 2 (0). After we add a random perturbation
of size 1e-14 to this pencil, GUPTRI fails to return back the original pencil no matter which EPSU we
choose. Instead, it returns back a more generic L 2 \Phi J 1 (0) pencil O(ffl) away.
On the other hand, for another pencil with the same L 1 \Phi J 2 (0) structure:
GUPTRI returns an L 1 \Phi J 2 (0) pencil O(ffl) away.
At this point, readers may correctly expect that the reason behind this is again the angle between
two certain spaces as in the matrix case.
6. Matrix Pencils. Parallel to the matrix case, we can set up a similar theory for the pencil
case. For simplicity, we concentrate on the case when a pencil only has L-blocks and J(0)-blocks.
Pencils containing L^T-blocks and non-zero (including ∞) eigenvalue blocks can always be reduced to
the previous case by transposing and exchanging the two matrices of the pencil and/or shifting.
6.1. The Staircase Invariant Space and Related Subspaces for Pencils. A pencil (A; B) is
in staircase form if we can divide both A and B into block rows of sizes r_1, …, r_k and block columns
of sizes s_1, …, s_{k+1} such that A is strictly block upper triangular with every superdiagonal block having full
column rank and B is block upper triangular with every diagonal block having full row rank and the
rows orthogonal to each other. Here we allow s_{k+1} to be zero. A pencil is called conforming to (A, B)
if it has the same block structure as (A; B). A square matrix is called row (column) conforming to
if it has diagonal block sizes the same as the row (column) sizes of (A; B).
Definition 5. Suppose (A; B) is a pencil in staircase form and B d is the block diagonal part of
B. We call (SA ; SB ) a staircase invariant pencil of (A; B) if S T
has complementary structure to (A, B). We call the space consisting of all such (S_A, S_B) the staircase
invariant space of (A; B), and denote it by S.
For example, let (A, B) have the staircase form with the block sparsity pattern of the original display [sparsity-pattern figures omitted]; then a pencil (S_A, S_B) with the complementary sparsity pattern
is a staircase invariant pencil of (A; B) if every column of SA is in the left null space of A and every
row of SB is in the right null space of B. Notice that the sparsity structure of SA and SB is at most
complementary to that of A and B respectively, but SA and SB are often less sparse, because of the
requirement on the nullspace. To be precise, if we find more than one diagonal block with the same
size, then among the blocks of this size, only the blocks on the lowest block row appear in the sparsity
structure of SA . If any of the diagonal blocks of B is a square block, then SB has all zero entries
throughout the corresponding block column.
As special cases, if A is a strictly upper triangular square matrix and B is an upper triangular square
matrix with diagonal entries nonzero, then SA only has nonzero entries in the bottom row and SB is
simply a zero matrix. If A is a strictly upper triangular n \Theta (n + 1) matrix and B is an upper triangular
with diagonal entries nonzero, then (SA ; SB ) is the zero pencil.
Definition 6. Suppose (A, B) is a pencil. We call O(A, B) ≡ {(PAQ, PBQ) : P and Q are non-singular
square matrices} the orbit of a pencil (A, B). We call T ≡ {(XA + AY, XB + BY) : X and Y are any square
matrices} the tangent space of O(A, B) at (A, B).
Theorem 8. Let (A, B) be an m × n pencil in staircase form; then the staircase invariant space S of
(A, B) and the tangent space T form an oblique decomposition of the space of m × n pencils, i.e., S ⊕ T is the whole pencil space.
Proof:
The proof of the theorem is similar to that of Theorem 1; first we prove that the dimension of S(A, B) is the
same as the codimension of T(A, B), then we prove S ∩ T = {(0, 0)} by induction. The readers may try to
fill out the details.
Definition 7. Suppose (A; B) is a pencil. We call O b (A; B) j fP is a block
anti-symmetric matrix row conforming to (A; B); is a block anti-symmetric matrix column
conforming to (A; B) the block orthogonal-orbit of a pencil (A; B). We call T b j
X is a block anti-symmetric matrix row conforming to (A; B), Y is a block anti-symmetric matrix column
conforming to (A; B)g the block tangent space of the block orthogonal-orbit O b (A; B) at (A; B). We
call R j U is a block upper triangular matrix row conforming to (A; B), V is a
block upper triangular matrix column conforming to (A; B)g the block upper pencil space of (A; B).
Theorem 9. Let (A; B) be an m \Theta n pencil in staircase form, then the tangent space T of the orbit
O(A; B) can be split into the block tangent space T b of the orbit O b (A; B) and the block upper pencil
space R, i.e., T = T_b ⊕ R.
Proof:
This can be proved by a very similar argument concerning the dimensions as for matrix, in which the
dimension of R is 2
the dimension of T b is
the codimension
of the orbit O(A; B) (or T ) is
Corollary 2. S ⊕ T_b ⊕ R is the space of all m × n pencils.
6.2. Staircase as a Versal Deformation for pencils. The theory of versal forms for pencils
[17] is similar to the one for matrices. A deformation of a pencil (A, B) is a pencil (A, B)(λ) with
entries power series in the real variables λ_i. We say that two deformations (A, B)(λ) and (C, D)(λ) are
equivalent if there exist two deformations P(λ) and Q(λ) of identity matrices such that (C, D)(λ) = (P(λ) A(λ) Q(λ), P(λ) B(λ) Q(λ)).
Theorem 10. Suppose (A, B) is in staircase form. Fix S_i ∈ S, i = 1, …, k, spanning S (so k ≥ dim(S)). It follows that
(A, B)(λ) = (A, B) + λ_1 S_1 + ⋯ + λ_k S_k
is a versal deformation of every particular (A, B)(λ) for λ small enough. (A, B)(λ) is miniversal at
(A, B) if {S_i} is a basis of S.
Definition 8. We say two deformations (A; B)() and (C; D)() are block orthogonally-
equivalent if there exist two block orthogonal deformations P () and Q() of the identity matrix such
that are exponentials of matrices which are
conforming to (A; B) in row and column respectively.
We say that a deformation (A; B)() is block orthogonally-versal if any other deformation
(C; D)() is block orthogonally-equivalent to the deformation (A; B)(OE()). Here, OE is a mapping holomorphic
at 0 with
Theorem 11. A deformation (A, B)(λ) of (A, B) is block orthogonally-versal iff the mapping
(A, B)(λ) is transversal to the block orthogonal-orbit of (A, B) at λ = 0.
This is the corresponding result to Theorem 4.
Since we know that T can be decomposed into T b \Phi R, we get:
Theorem 12. Suppose a pencil (A, B) is in staircase form. Fix S_i ∈ S, i = 1, …, k, spanning S, and fix R_j ∈ R, j = 1, …, l, spanning R. It follows that
(A, B)(λ) = (A, B) + Σ_{i=1}^{k} λ_i S_i + Σ_{j=1}^{l} λ_{k+j} R_j
is a block orthogonally-versal deformation of every particular (A, B)(λ) for λ small enough. (A, B)(λ)
is block orthogonally-miniversal at (A, B) if {S_i}, {R_j} are bases of S and R.
Notice that as in the matrix case, we can also extend our definitions and theorems to the general form
containing L T -blocks and non-zero eigenvalue blocks, and again, we will not specify what eigenvalues
they are and hence get into the bundle case. We only want to point out one particular example here.
If (A; B) is in the staircase form of will be a strictly upper triangular matrix with
nonzero entries on the super diagonal and B will be a triangular matrix with nonzero entries on the
diagonal except the (n will be the zero matrix and SB will be a matrix with the
only nonzero entry on its (n
7. Application to Pencil Staircase Forms. We concentrate on L \Phi J(0) structures only, since
otherwise, the staircase algorithm will separate all other structures and continue similarly after a shift
and/or transpose on that part only. As in the matrix case, the staircase algorithm basically decomposes
the perturbation pencil into three spaces T b , R, and S and kills the perturbation in T b .
Theorem 13. Suppose that (A, B) is a pencil in staircase form and E is any perturbation pencil.
The staircase algorithm (without zeroing) on (A, B) + εE will produce two orthogonal matrices P and Q
(depending on ε) and the output pencil stair((A, B) + εE) = (Â, B̂) + ε(Ŝ + R̂) + o(ε), where (Â, B̂) has the same
staircase structure as (A, B), Ŝ is a staircase invariant pencil of (Â, B̂), and
R̂ is in the block upper pencil space R. If singular values are zeroed out, then the algorithm further kills
εŜ and outputs (Â, B̂) + εR̂ + o(ε).
We use a formula to explain the statement more clearly:
stair((A, B) + εE) = P^T ((A, B) + εE) Q = (Â, B̂) + ε(Ŝ + R̂) + o(ε).
Similarly, we can see that when a pencil has its T and S almost normal to each other, the staircase
algorithm will behave well. On the other hand, if S is very close to T , then it will behave badly. This
is exactly the situation in the two pencil examples in Section 5. Although the two pencils are both
ill conditioned, a direct calculation shows that the first pencil has its staircase invariant space very
close to the tangent space, while the second one does not.
The if-and-only-if condition for S to be close to T is more difficult than in the matrix case. One
necessary condition is that one superdiagonal block of A is almost not of full column rank or one diagonal
block of B is almost not full row rank. This is usually referred to as weak coupling.
8. Examples: The geometry of the Boley pencil and others. Boley [3, Example 2, Page
639] presents an example of a 7 \Theta 8 pencil (A; B) that is controllable (has generic Kronecker structure)
yet it is known that an uncontrollable system (non-generic Kronecker structure) is nearby at a distance
6e-4. What makes the example interesting is that the staircase algorithm fails to find this nearby
uncontrollable system while other methods succeed. Our theory provides a geometrical understanding
of why this famous example leads to staircase failure: the staircase invariant space is very close to the
tangent space.
The pencil that we refer to is (A; B(ffl)), where
and
(The dots refer to zeros, and in the original Boley example
the staircase algorithm predicts a distance of 1, and is therefore off by nearly four
orders of magnitude. To understand the failure, our theory works best for smaller values of ffl, but it is
still clear that even for there will continue to be difficulties.
It is useful to express the pencil (A; B(ffl)) as is zero except for
a "one" in the (7,7) entry of its B part. P 0 is in the bundle of pencils whose Kronecker form is L 6 +J 1 (\Delta)
and the perturbation E is exactly in the unique staircase invariant direction (hence the notation "S")
as we pointed out at the end of Section 6.
The relevant quantity is then the angle between the staircase invariant space and the pencil space.
An easy calculation reveals that the angle is very small: ' radians. In order to get a feeling
for what range of ffl first order theory applies, we calculated the exact distance d(ffl) j d(P (ffl); bundle)
using the nonlinear eigenvalue template software [33]. To first order, Figure 8.1 plots the
distances first for ffl 2 [0; 2] and then a closeup for
Fig. 8.1. The distance of the pencils P_0 + εE to the bundle of L_6 + J_1(·) as ε changes (both subplots show the size of the perturbation on the horizontal axis and the distance to the orbit on the vertical axis). The second subplot is a closeup of the first one at the points near ε = 0.
Our observation based on this data suggests that first order theory is good to two decimal places
for one place for ffl 10 \Gamma2 . To understand the geometry of staircase algorithmic failure,
one decimal place or even merely an order of magnitude is quite sufficient.
In summary, we see clearly that the staircase invariant direction is at a small angle to the tangent
space, and therefore the staircase algorithm will have difficulty finding the nearest pencil on the bundle
or predicting the distance. This difficulty is quantified by the angle ' S .
Since the Boley example is for we computed the distance well past 1. The breakdown of
first order theory is attributed to the curving of the bundle towards S. A three dimensional schematic
is portrayed in Figure 8.2.
Fig. 8.2. The staircase algorithm on the Boley example. The surface represents the orbit O(P0 ). Its tangent space
at the pencil P0 , T (P0 ), is represented by the plane on the bottom. P1 lies on the staircase invariant space S inside the
"bowl". The hyperplane of uncontrollable pencils is represented by the plane cutting through the surface along the curve
C. It intersects T (P0 ) along L. The angle between L and S is ' c . The angle between S and T (P0 ), ' S , is represented by
the angle "HP0P1 .
The relevant picture for control theory is a planar intersection of the above picture. In control theory,
we set the special requirement that the "A" matrix has the form [0 I]. Pencils on the intersection of
this hyperplane and the bundle are termed "uncontrollable."
We analytically calculated the angle θ_c between S and the tangent space for the "uncontrollable"
surface. We found that θ_c ≈ 0.0040. Using the nonlinear eigenvalue template software [33], we
numerically computed the true distance from P_0 + εE to the "uncontrollable surfaces" and calculated
the ratio of this distance to ε; we found that for ε < 8e−4, the ratio agrees with θ_c very well.
We did a similar analysis on the three pencils C_1, C_2, C_3 given by J. Demmel and B. Kagstrom [12].
We found that the sine values of the angles between S and T are respectively 2.4325e-02, 3.4198e-02,
and 8.8139e-03, and the sine values between T_b and R are respectively 1.7957e-02, 7.3751e-03, and
3.3320e-06. This explains why we saw the staircase algorithm behave progressively worse on them.
Especially, it explains why, when a perturbation of about 10^{−3} is added to these pencils, C_3 behaves
dramatically worse than C_1 and C_2: the component in S is almost of the same order as the entries of
the original pencil.
So we conclude that the reason the staircase algorithm does not work well on this example is that
it is actually a staircase failure, in that its tangent space is very close to its staircase
invariant space, and also the perturbation is so large that even if we knew the angle in advance we could
not estimate the distance well.
Acknowledgement
. The authors thank Bo Kagstrom and Erik Elmroth for their helpful discussions
and their conlab software for easy interactive numerical testing. The staircase invariant directions were
originally discovered for single Jordan blocks with Erik Elmroth while he was visiting MIT during the
fall of 1996.
--R
On matrices depending on parameters.
An improved algorithm for the computation of Kronecker's canonical form of a singular pencil.
Estimating the sensitivity of the algebraic structure of pencils with simple eigenvalue estimates.
The algebraic structure of pencils and block Toeplitz matrices.
Placing zeroes and the Kronecker canonical form.
Lectures on Finite Precision Computations.
The dimension of matrices (matrix pencils) with given Jordan (Kronecker) canonical forms.
Stably computing the Kronecker structure and reducing subspace of singular pencils A
Computing stable eigendecompositions of matrix pencils.
The generalized Schur decomposition of an arbitrary pencil A
The generalized Schur decomposition of an arbitrary pencil A
Accurate solutions of ill-posed problems in control theory
The computation of Kronecker's canonical form of a singular pencil.
The generalized eigenstructure problem in linear system theory.
Reducing subspaces: definitions
Oral communication.
A geometric approach to perturbation theory of matrices and matrix pencils: Part 1: versal deformations.
A geometric approach to perturbation theory of matrices and matrix pencils: Part 2: stratification-enhanced staircase algorithm
The set of 2-by-3 matrix pencils - Kronecker structures and their transitions under perturbations
Computation of zeros of linear multivariable systems.
The application of singularity theory to the computation of Jordan Canonical Form.
Matrix Computations.
Differential Geometry
The generalized singular value decomposition and the general A
ALGORITHM 560: JNF
An algorithm for numerical computation of the Jordan normal form of a complex matrix.
Matrix Pencils
Robust pole assignment in linear state feedback.
On a method of solving the complete eigenvalue problem of a degenerate matrix.
Nonlinear eigenvalue problems.
An algorithm for numerical determination of the structure of a general matrix.
Computing the distance to an uncontrollable system.
--TR
--CTR
Naren Ramakrishnan , Chris Bailey-Kellogg, Sampling Strategies for Mining in Data-Scarce Domains, Computing in Science and Engineering, v.4 n.4, p.31-43, July 2002 | kronecker structure;jordan structure;SVD;staircase algorithm;versal deformation |
347613 | Structure in Approximation Classes. | The study of the approximability properties of NP-hard optimization problems has recently made great advances mainly due to the results obtained in the field of proof checking. The last important breakthrough proves the APX-completeness of several important optimization problems and thus reconciles "two distinct views of approximation classes: syntactic and computational" [S. Khanna et al., in Proc. 35th IEEE Symp. on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 1994, pp. 819--830]. In this paper we obtain new results on the structure of several computationally-defined approximation classes. In particular, after defining a new approximation preserving reducibility to be used for as many approximation classes as possible, we give the first examples of natural NPO-complete problems and the first examples of natural APX-intermediate problems. Moreover, we state new connections between the approximability properties and the query complexity of NPO problems. | Introduction
In his pioneering paper on the approximation of combinatorial optimization problems [20],
David Johnson formally introduced the notion of approximable problem, proposed approximation
algorithms for several problems, and suggested a possible classification of optimization
problems on grounds of their approximability properties. Since then it was clear that, even
though the decision versions of most NP-hard optimization problems are many-one polynomial-time
reducible to each other, they do not share the same approximability properties. The main
reason of this fact is that many-one reductions not always preserve the objective function and,
even if this happens, they rarely preserve the quality of the solutions. It is then clear that a
stronger kind of reducibility has to be used. Indeed, an approximation preserving reduction not
only has to map instances of a problem A to instances of a problem B, but it also has to be
An extended abstract of this paper has been presented at the 1st Annual International Computing and
Combinatorics Conference.
able to come back from "good" solutions for B to "good" solutions for A. Surprisingly, the first
definition of this kind of reducibility [33] was given as long as 13 years after Johnson's paper
and, after that, at least seven different approximation preserving reducibilities appeared in the
literature (see Fig. 1). These reducibilities are identical with respect to the overall scheme but
differ essentially in the way they preserve approximability: they range from the Strict reducibility
in which the error cannot increase to the PTAS-reducibility in which there are basically no
restrictions (see also Chapter 3 of [23]).
Figure 1. The taxonomy of approximation preserving reducibilities: the diagram (not recoverable here) relates the Strict reducibility [33], the P-reducibility [33], the A-reducibility [33], the Continuous reducibility [39], the L-reducibility [36], the E-reducibility [26], and the PTAS-reducibility [14].
By means of these reducibilities, several notions of completeness in approximation classes
have been introduced and, basically, two different approaches were followed. On the one hand,
the attention was focused on computationally defined classes of problems, such as NPO (i.e.,
the class of optimization problems whose underlying decision problem is in NP) and APX (i.e.,
the class of constant-factor approximable NPO problems): along this line of research, however,
almost all completeness results dealt either with artificial optimization problems or with problems
for which lower bounds on the quality of the approximation were easily obtainable [12, 33].
On the other hand, researchers focused on the logical definability of optimization problems and
introduced several syntactically defined classes for which natural completeness results were obtained
[27, 34, 36]: unfortunately, the approximability properties of the problems in these latter
classes were not related to standard complexity-theoretic conjectures. A first step towards the
reconciling of these two approaches consisted of proving lower bounds (modulo P 6= NP or some
other likely condition) on the approximability of complete problems for syntactically defined
classes [1, 31]. More recently, another step has been performed since the closure of syntactically
defined classes with respect to an approximation preserving reducibility has been proved to be
equal to the more familiar computationally defined classes [26].
In spite of this important achievement, beyond APX we are still forced to distinguish between
maximization and minimization problems as long as we are interested in completeness
proofs. Indeed, a result of [27] states that it is not possible to rewrite every NP maximization
problem as an NP minimization problem unless NP=co-NP. A natural question is thus whether
this duality extends to approximation preserving reductions.
Finally, even though the existence of "intermediate" artificial problems, that is, problems
for which lower bounds on their approximation are not obtainable by completeness results
was proved in [12], a natural question arises: do natural intermediate problems exist? Observe
that this question is also open in the field of decision problems: for example, it is known that
the graph isomorphism problem cannot be NP-complete unless the polynomial-time hierarchy
collapses [38], but no result has ever been obtained giving evidence that the problem does not
belong to P.
The first goal of this paper is to define an approximation preserving reducibility that can
be used for as many approximation classes as possible and such that all reductions that have
appeared in the literature still hold. In spite of the fact that the L-reducibility has been the
most widely used so far, we will give strong evidence that it cannot be used to obtain completeness
results in "computationally defined" classes such as APX, log-APX (that is, the class
of problems approximable within a logarithmic factor), and poly-APX (that is, the class of
problems approximable within a polynomial factor). Indeed, on the one hand in [14] it has
been shown that the L-reducibility is too strict and does not allow to reduce some problems
which are known to be easy to approximate to problems which are known to be hard to
approximate. On the other hand in this paper we show that it is too weak and is not approximation
preserving (unless co-NP). The weakness of the L-reducibility is, essentially,
shared by all reducibilities of Fig. 1 but the Strict reducibility and the E-reducibility, while
the strictness of the L-reducibility is shared by all of them (unless P NP ' P NP[O(logn)] ) but the
PTAS-reducibility. The reducibility we propose is a combination of the E-reducibility and of the
PTAS-reducibility and, as far as we know, it is the strictest reducibility that allows to obtain all
approximation completeness results that have appeared in the literature, such as, for example,
the APX-completeness of Maximum Satisfiability [14, 26] and the poly-APX-completeness
of Maximum Clique [26].
The second group of results refers to the existence of natural complete problems for NPO.
Indeed, both [33] and [12] provide examples of natural complete problems for the class of
minimization and maximization NP problems, respectively. In Sect. 3 we will show the existence
of both maximization and minimization NPO-complete natural problems. In particular, we prove
that Maximum Programming and Minimum Programming are NPO-complete.
This result shows that making use of a natural approximation preserving reducibility is powerful
enough to encompass the "duality" problem raised in [27] (indeed, in [26] it was shown
that this duality does not arise in APX, log-APX, poly-APX, and other subclasses of NPO).
Moreover, the same result can also be obtained when restricting ourselves to the class NPO PB
(i.e., the class of polynomially bounded NPO problems). In particular, we prove that Maximum
PB 0-1 Programming and Minimum PB 0-1 Programming are NPO PB-complete.
The third group of results refers to the existence of natural APX-intermediate problems.
In Sect. 4, we will prove that Minimum Bin Packing (and other natural NPO problems)
cannot be APX-complete unless the polynomial-time hierarchy collapses. Since it is well-known
[32] that this problem belongs to APX and that it does not belong to PTAS (that is, the
class of NPO problems with polynomial-time approximation schemes) unless P=NP, our result
yields the first example of a natural APX-intermediate problem (under a natural complexity-theoretic
conjecture). Roughly speaking, the proof of our result is structured into two main
steps. In the first step, we show that if Minimum Bin Packing were APX-complete then the
problem of answering any set of k non-adaptive queries to an NP-complete problem could be
reduced to the problem of approximating an instance of Minimum Bin Packing within a ratio
depending on k. In the second step, we show that the problem of approximating an instance
of Minimum Bin Packing within a given performance ratio can be solved in polynomial-time
by means of a constant number of non-adaptive queries to an NP-complete problem. These
two steps will imply the collapse of the query hierarchy which in turn implies the collapse of
the polynomial-time hierarchy. As a side effect of our proof, we will show that if a problem is
APX-complete, then it does not admit an asymptotic approximation scheme.
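For context, the membership of Minimum Bin Packing in APX mentioned above can be obtained with heuristics as simple as first fit decreasing; the following Python sketch is only meant to make the problem concrete (the classical constant-factor performance guarantees are not reproved here).

```python
def first_fit_decreasing(sizes):
    """Pack items with sizes in (0, 1] into unit-capacity bins, first fit decreasing."""
    loads = []                                 # current load of each open bin
    packing = []                               # item indices placed in each bin
    order = sorted(range(len(sizes)), key=lambda i: sizes[i], reverse=True)
    for i in order:
        for b, load in enumerate(loads):
            if load + sizes[i] <= 1.0:         # first open bin that still fits the item
                loads[b] += sizes[i]
                packing[b].append(i)
                break
        else:                                  # no open bin fits: open a new one
            loads.append(sizes[i])
            packing.append([i])
    return packing

print(first_fit_decreasing([0.6, 0.5, 0.5, 0.4, 0.3, 0.3, 0.2]))
```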
The previous results are consequences of new connections between the approximability properties
and the query complexity of NP-hard optimization problems. In several recent papers
the notion of query complexity (that is, the number of queries to an NP oracle needed to solve
a given problem) has been shown to be a very useful tool for understanding the complexity of
approximation problems. In [7, 9] upper and lower bounds have been proved on the number
of queries needed to approximate certain optimization problems (such as Maximum Satisfiability
and Maximum Clique): these results deal with the complexity of approximating the
value of the optimum solution and not with the complexity of computing approximate solu-
tions. In this paper, instead, the complexity of "constructive" approximation will be addressed
by considering the languages that can be recognized by polynomial-time machines which have
a function oracle that solves the approximation problem. In particular, after proving the existence
of natural APX-intermediate problems, in Sect. 4.1 we will be able to solve an open
question of [7] proving that finding the vertices of the largest clique is more difficult than merely
finding the vertices of a 2-approximate clique unless the polynomial-time hierarchy collapses.
The results of [7, 9] show that the query complexity is a good measure to study approximability
properties of optimization problems. The last group of our results show that completeness
in approximation classes implies lower bounds on the query complexity. Indeed, in Sect. 5 we
show that the two approaches are basically equivalent by giving sufficient and necessary conditions
for approximation completeness in terms of query-complexity hardness and combinatorial
properties. The importance of these results is twofold: they give new insights into the structure
of complete problems for approximation classes and they reconcile the approach based
on standard computation models with the approach based on the computation model for approximation
proposed in [8]. As a final observation, our results can be seen as extensions of a
result of [26] in which general sufficient (but not necessary) conditions for APX-completeness
are proved.
1.1. Preliminaries
We assume the reader to be familiar with the basic concepts of computational complexity
theory. For the definitions of most of the complexity classes used in the paper we refer the
reader to one of the books on the subject (see, for example, [2, 5, 16, 35]).
We now give some standard definitions in the field of optimization and approximation theory.
Definition 1. An NP optimization problem A is a fourtuple (I; sol; m; type) such that
1. I is the set of the instances of A and it is recognizable in polynomial time.
2. Given an instance x of I, sol(x) denotes the set of feasible solutions of x. These solutions are
short, that is, a polynomial p exists such that, for any y 2 sol(x), jyj - p(jxj). Moreover, for
any x and for any y with jyj - p(jxj), it is decidable in polynomial time whether y 2 sol(x).
3. Given an instance x and a feasible solution y of x, m(x; y) denotes the positive integer
measure of y (often also called the value of y). The function m is computable in polynomial
time and is also called the objective function.
4. type 2 fmax; ming.
The goal of an NP optimization problem with respect to an instance x is to find an optimum
solution, that is, a feasible solution y such that m(x, y) = type{m(x, y′) : y′ ∈ sol(x)}. In the
following opt will denote the function mapping an instance x to the measure of an optimum
solution.
The class NPO is the set of all NP optimization problems. Max NPO is the set of maximization
NPO problems and Min NPO is the set of minimization NPO problems.
An NPO problem is said to be polynomially bounded if a polynomial q exists such that, for
any instance x and for any solution y of x, m(x; y) - q(jxj). The class NPO PB is the set of
all polynomially bounded NPO problems. Max PB is the set of all maximization problems in
NPO PB and Min PB is the set of all minimization problems in NPO PB.
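To make Definition 1 concrete, one can think of an NPO problem as the following interface (a hypothetical Python sketch; all names are ours, and the polynomial-time and length bounds are not enforced by the code).

```python
from dataclasses import dataclass
from typing import Callable, Literal

@dataclass
class NPOProblem:
    """A rendering of Definition 1 as an interface; bounds are assumed, not checked."""
    is_instance: Callable[[str], bool]        # membership in I, decidable in polynomial time
    is_solution: Callable[[str, str], bool]   # y in sol(x), with |y| <= p(|x|)
    measure: Callable[[str, str], int]        # m(x, y), a positive integer
    goal: Literal["max", "min"]               # type
```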
Definition 2. Let A be an NPO problem. Given an instance x and a feasible solution y of x,
we define the performance ratio of y with respect to x as
R_A(x, y) = max{ m(x, y)/opt(x), opt(x)/m(x, y) }
and the relative error of y with respect to x as
E_A(x, y) = |opt(x) − m(x, y)| / opt(x).
The performance ratio (respectively, relative error) is always a number greater than or equal
to 1 (respectively, 0) and is as close to 1 (respectively, 0) as the value of the feasible solution
is close to the optimum value. It is easy to see that, for any instance x and for any feasible
solution y of x, R_A(x, y) = 1/(1 − E_A(x, y)) if A is a maximization problem, and R_A(x, y) = 1 + E_A(x, y) if A is a minimization problem.
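In code, once opt(x) is known (say, by exhaustive search on small instances), the two quantities of Definition 2 read as follows (a sketch).

```python
def performance_ratio(m_value: float, opt_value: float) -> float:
    # R(x, y) = max{ m(x, y)/opt(x), opt(x)/m(x, y) }, always >= 1
    return max(m_value / opt_value, opt_value / m_value)

def relative_error(m_value: float, opt_value: float) -> float:
    # E(x, y) = |opt(x) - m(x, y)| / opt(x), always >= 0
    return abs(opt_value - m_value) / opt_value
```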
Definition 3. Let A be an NPO problem and let T be an algorithm that, for any instance
x of A such that sol(x) 6= ;, returns a feasible solution T (x) in polynomial time. Given an
arbitrary function r : N ! [1; 1), we say that T is an r(n)-approximate algorithm for A if the
performance ratio of the feasible solution T(x) with respect to x verifies the following inequality: R_A(x, T(x)) ≤ r(|x|).
Definition 4. Given a class of functions F , an NPO problem A belongs to the class F-APX
if an r(n)-approximate algorithm T for A exists, for some function r 2 F .
In particular, APX, log-APX, poly-APX, and exp-APX will denote the classes F-APX with
F equal to the set O(1), to the set O(log n), to the set O(n^{O(1)}), and to the set O(2^{n^{O(1)}}),
respectively. One could object that there is no difference between NPO and exp-APX since the
polynomial bound on the computation time of the objective function implies that any NPO problem is
h·2^{n^k}-approximable for some h and k. This is not true, since NPO problems exist
for which it is even hard to find a feasible solution. We will see examples of such problems in
Sect. 3 (e.g. Maximum Weighted Satisfiability).
Definition 5. An NPO problem A belongs to the class PTAS if an algorithm T exists such
that, for any fixed rational r ? 1, T (\Delta; r) is an r-approximate algorithm for A.
Clearly, the following inclusions hold:
PTAS ⊆ APX ⊆ log-APX ⊆ poly-APX ⊆ exp-APX ⊆ NPO.
It is also easy to see that these inclusions are strict if and only if P ≠ NP.
1.2. A list of NPO problems
We here define the NP optimization problems that will be used in the paper. For a much larger
list of NPO problems we refer to [11].
Maximum Clique
Instance: Graph E).
Solution: A clique in G, i.e. a subset V 0 ' V such that every two vertices in V 0 are joined
by an edge in E.
Measure: Cardinality of the clique, i.e., jV 0 j.
Maximum Weighted Satisfiability and Minimum Weighted Satisfiability
Instance: Set of variables X, boolean quantifier-free first-order formula φ over the variables
in X, and a weight function w : X → N.
Solution: Truth assignment that satisfies φ.
Measure: The sum of the weights of the true variables.
Maximum PB 0-1 Programming and Minimum PB 0-1 Programming
Instance: Integer m × n-matrix A, integer m-vector b, binary n-vector c.
Solution: A binary n-vector x such that Ax ≤ b.
Measure: The value c^T x, i.e., Σ_{i=1}^{n} c_i x_i.
Maximum Satisfiability
Instance: Set of variables X and Boolean CNF formula OE over the variables in X.
Solution: Truth assignment to the variables in X.
Measure: The number of satisfied clauses.
Minimum Bin Packing
Instance: Finite set U of items, and a size s(u) 2 Q " (0; 1] for each u 2 U .
Solution: A partition of U into disjoint sets U Um such that the sum of the sizes of
the items in each U i is at most 1.
Measure: The number of used bins, i.e., the number m of disjoint sets.
Minimum Ordered Bin Packing
Instance: Finite set U of items, a size s(u) 2 Q " (0; 1] for each u 2 U , and a partial order -
on U .
Solution: A partition of U into disjoint sets U Um such that the sum of the sizes of
the items in each U i is at most 1 and if u 2 U i and u
Measure: The number of used bins, i.e., the number m of disjoint sets.
Minimum Degree Spanning Tree
Instance: Graph E).
Solution: A spanning tree for G.
Measure: The maximum degree of the spanning tree.
Minimum Edge Coloring
Instance: Graph E).
Solution: A coloring of E, i.e., a partition of E into disjoint sets
no two edges in E i share a common endpoint in G.
Measure: Cardinality of the coloring, i.e., the number k of disjoint sets.
2. A new approximation preserving reducibility
The goal of this section is to define a new approximation preserving reducibility that can be
used for as many approximation classes as possible and such that all reductions that have
appeared in the literature still hold. We will justify the definition of this new reducibility by
emphasizing the disadvantages of previously known ones. In the following, we will assume that,
for any reducibility, an instance x such that sol(x) 6= ; is mapped into an instance x 0 such that
2.1. The L-reducibility
The first reducibility we shall consider is the L-reducibility (for linear reducibility) [36] which is
often most practical to use in order to show that a problem is at least as hard to approximate
as another.
Definition 6. Let A and B be two NPO problems. A is said to be L-reducible to B, in symbols
A ≤_L B, if two functions f and g and two positive constants α and β exist such that:
1. For any x ∈ I_A, f(x) ∈ I_B is computable in polynomial time.
2. For any x ∈ I_A and for any y ∈ sol_B(f(x)), g(x, y) ∈ sol_A(x) is computable in polynomial
time.
3. For any x ∈ I_A, opt_B(f(x)) ≤ α · opt_A(x).
4. For any x ∈ I_A and for any y ∈ sol_B(f(x)),
|opt_A(x) − m_A(x, g(x, y))| ≤ β |opt_B(f(x)) − m_B(f(x), y)|.
The fourtuple (f, g, α, β) is said to be an L-reduction from A to B.
Clearly, the L-reducibility preserves membership in PTAS. Indeed, if (f, g, α, β) is an L-reduction
from A to B then, for any x ∈ I_A and for any y ∈ sol_B(f(x)), we have that
|opt_A(x) − m_A(x, g(x, y))| / opt_A(x) ≤ αβ |opt_B(f(x)) − m_B(f(x), y)| / opt_B(f(x)),
so that if B ∈ PTAS then A ∈ PTAS [36]. The above inequality also implies that if A is a minimization
problem and an r-approximate algorithm for B exists, then a (1 + αβ(r − 1))-approximate
algorithm for A exists. In other words, L-reductions from minimization problems
to optimization problems preserve membership in APX. The next result gives a strong evidence
that, in general, this is not true whenever the starting problem is a maximization one.
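For the minimization case, the bound can be spelled out by combining conditions 3 and 4 of Definition 6 with the r-approximability of B (a routine calculation, reproduced here only for convenience):

```latex
R_A(x, g(x,y))
  = \frac{m_A(x, g(x,y))}{\mathrm{opt}_A(x)}
  = 1 + \frac{m_A(x, g(x,y)) - \mathrm{opt}_A(x)}{\mathrm{opt}_A(x)}
  \le 1 + \beta\,\frac{|\mathrm{opt}_B(f(x)) - m_B(f(x), y)|}{\mathrm{opt}_A(x)}
  \le 1 + \alpha\beta\,\frac{|\mathrm{opt}_B(f(x)) - m_B(f(x), y)|}{\mathrm{opt}_B(f(x))}
  \le 1 + \alpha\beta\,(r - 1).
```

The last step uses |opt_B(f(x)) − m_B(f(x), y)| ≤ (r − 1)·opt_B(f(x)), which holds whether B is a maximization or a minimization problem.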
Theorem 1. The following statements are equivalent:
1. Two problems A 2 Max NPO and B 2 Min NPO exist such that A 62 APX, B 2 APX, and
A - L B.
2. Two Max NPO problems A and B exist such that A 62 APX, B 2 APX, and A - L B.
3. A polynomial-time recognizable set of satisfiable Boolean formulas exists for which no
polynomial-time algorithm can compute a satisfying assignment for each of them.
Proof. (1) ⇒ (2). In this case, it suffices to L-reduce B to a maximization problem C in APX
[26].
(2) ⇒ (3). Assume that for any polynomial-time recognizable set of satisfiable Boolean
formulas there is a polynomial-time algorithm computing a satisfying assignment for each
formula in the set. Suppose that (f, g, α, β) is an L-reduction from a maximization problem
A to a maximization problem B and that B is r-approximable for some r > 1. Let x be an
instance of A and let y be a solution of f(x) such that opt_B(f(x))/m_B(f(x), y) ≤ r. For the
sake of convenience, let opt_A = opt_A(x), opt_B = opt_B(f(x)), m_A = m_A(x, g(x, y)), m_B = m_B(f(x), y),
and m_x = max{m_A, m_B/α}. Since m_A ≤ opt_A and m_B/α ≤ opt_B/α ≤ opt_A,
we have that m_x ≤ opt_A. We now show that opt_A/m_x ≤ 1 + rαβ, that is, m_x is a
non-constructive approximation of opt_A. Let γ = rα/(1 + rαβ). There are two cases.
1. opt_B ≤ γ opt_A. By the definition of the L-reducibility, opt_A − m_A ≤ β(opt_B − m_B) ≤ β opt_B ≤ βγ opt_A,
so that m_A ≥ (1 − βγ) opt_A.
Hence,
opt_A/m_x ≤ opt_A/m_A ≤ 1/(1 − βγ) = 1 + rαβ,
where the last equality is due to the definition of γ.
2. opt_B > γ opt_A. It holds that
opt_A/m_x ≤ opt_A/(m_B/α) ≤ rα · opt_A/opt_B ≤ rα/γ = 1 + rαβ.
Let us now consider the following non-deterministic polynomial-time algorithm.
begin {input: x ∈ I_A}
  compute m_x by using the r-approximate algorithm for B and the L-reduction from A to B;
  guess y ∈ sol_A(x);
  if m_A(x, y) ≥ m_x then accept else reject;
end
By applying Cook's reduction [10] to the above algorithm, it easily follows that, for any
x ∈ I_A, a satisfiable Boolean formula φ_x can be constructed in polynomial time in the length
of x so that any satisfying assignment for φ_x encodes a solution of x whose measure is at least m_x.
Moreover, the set {φ_x : x ∈ I_A} is recognizable in polynomial time. By assumption, it
is then possible to compute in polynomial time a satisfying assignment for φ_x and thus an
approximate solution for x.
(3) ⇒ (1). Assume that a polynomial-time recognizable set S of satisfiable Boolean formulas
exists for which no polynomial-time algorithm can compute a satisfying assignment
for each of them. Consider the following two NPO problems A = (S, sol, m_A, max) and B = (S, sol, m_B, min),
where sol(x) = {y : y is a truth assignment to the variables of x},
m_A(x, y) = |x| if y is a satisfying assignment for x, and 1 otherwise,
and
m_B(x, y) = |x| if y is a satisfying assignment for x, and 2|x| otherwise.
Clearly, problem B is in APX, while if A is in APX then there is a polynomial-time algorithm
that computes a satisfying assignment for each formula in S, contradicting the assumption.
Moreover, it is easy to see that A L-reduces to B via f ≡ λx.x, g ≡ λx.λy.y, α = 1, and β = 1.
Observe that in [30] it is shown that the third statement of the above theorem holds if
and only if the fl-reducibility is different from the many-one reducibility. Moreover, in [19]
it is shown that the latter hypothesis is somewhat intermediate between P
and P 6= NP. In other words, there is strong evidence that, even though the L-reducibility is
suitable for proving completeness results within classes contained in APX (such as Max SNP
[36]), this reducibility cannot be used to define the notion of completeness for classes beyond
APX. Moreover, it cannot be blindly used to obtain positive results, that is, to prove the
existence of approximation algorithms via reductions. Finally, it is possible to L-reduce the
maximization problem B defined in the last part of the proof of the previous theorem to
Maximum 3-Satisfiability: this implies that the closure of Max SNP with respect to the
L-reducibility is not included in APX, contrary to what is commonly believed (e.g. see [35],
page 314).
2.2. The E-reducibility
The drawbacks of the L-reducibility are mainly due to the fact that the relation between the
performance ratios is set by two separate linear constraints on both the optimum values and
the absolute errors. The E-reducibility (for error reducibility) [26], instead, imposes a linear
relation directly between the performance ratios.
Definition 7. Let A and B be two NPO problems. A is said to be E-reducible to B, in symbols
A -E B, if two functions f and g and a positive constant ff exist such that:
1. For any x 2 I A , f(x) 2 I B is computable in polynomial time.
2. For any x 2 I A and for any y 2 sol B (f(x)), g(x; y) 2 sol A (x) is computable in polynomial
time.
3. For any x ∈ I_A and for any y ∈ sol_B(f(x)), R_A(x, g(x, y)) ≤ 1 + α (R_B(f(x), y) − 1).
The triple (f; g; ff) is said to be an E-reduction from A to B.
Observe that, for any function r, an E-reduction maps r(n)-approximate solutions into
(1 + h(r(n) − 1))-approximate solutions, where h is a constant depending only on the reduction.
Hence, the E-reducibility not only preserves membership in PTAS but also membership in exp-
APX, poly-APX, log-APX, and APX. As a consequence of this observation and of the results
of the previous section, we have that NPO problems should exist which are L-reducible to each
other but not E-reducible. However, the following result shows that within the class APX the
E-reducibility is just a generalization of the L-reducibility.
Proposition 1. For any two NPO problems A and B, if A - L B and A 2 APX, then A -E B.
Proof. Let T be an r-approximate algorithm for A with r constant and let (f be an
L-reduction from A to B. Then, for any x 2 I A and for any y 2 sol B (f L (x)), EA (x; g L (x;
ff L fi LEB (f L (x); y). If A is a minimization problem then, for any x 2 I A and for any y 2
and thus (f is an E-reduction from A to B. Otherwise (that is, A is a maximization
problem) we distinguish the following two cases.
1. EB (f L (x); y) - 1
: in this case we have that
2. EB (f L
: in this case we have that RB (f L (x);
so that
where the first inequality is due to the fact that T is an r-approximation algorithm for A.
We can thus define a triple (f
1. For any x 2 I A , f
2. For any x 2 I A and for any y 2 sol B (f E (x)),
3. 1)g.
From the above discussion it follows that (f is an E-reduction from A to B. ut
Clearly, the converse of the above result does not hold since no problem in NPO \Gamma NPO PB
can be L-reduced to a problem in NPO PB while any problem in PO can be E-reduced to any
NPO problem. Moreover, in [26] it is shown that Maximum 3-Satisfiability is (NPO PB "
APX)-complete with respect to the E-reducibility. This result is not obtainable by means of
the L-reducibility: indeed, it is easy to prove that Minimum Bin Packing is not L-reducible
to Maximum 3-Satisfiability unless P = NP (see, for example, [6]).
The E-reducibility is still somewhat too strict. Indeed, in [14] it has been shown that natural
PTAS problems exist, such as Maximum Knapsack, which are not E-reducible to polynomially
bounded APX problems, such as Maximum 3-Satisfiability (unless a logarithmic
number of queries to an NP oracle is as powerful as a polynomial number of queries).
2.3. The AP-reducibility
The above mentioned drawback of the E-reducibility is mainly due to the fact that an E-
reduction preserves optimum values (see [14]). Indeed, the linear relation between the performance
ratios seems to be too restrictive. According to the definition of approximation preserving
reducibilities given in [12], we could overcome this problem by expressing this relation by means
of an implication. However, this is not sufficient: intuitively, since the function g does not know
which approximation is required, it must still map optimum solutions into optimum solutions.
The final step thus consists of letting the functions f and g depend on the performance ratio 1 .
This implies that different constraints have to be put on the computation time of f and g:
on the one hand, we still want to preserve membership in PTAS, on the other we want the
reduction to be efficient even when poor performance ratios are required. These constraints are
formally imposed in the following definition of approximation preserving reducibility (which is
a restriction of the PTAS-reducibility introduced in [14]).
Definition 8. Let A and B be two NPO problems. A is said to be AP-reducible to B, in
symbols A -AP B, if two functions f and g and a positive constant ff exist such that:
1. For any x 2 I A and for any r ? 1, f(x; r) 2 I B is computable in time t f (jxj; r).
2. For any x 2 I A , for any r ? 1, and for any y 2 sol B (f(x; r)), g(x;
computable in time t g (jxj; jyj; r).
3. For any fixed r, both t f (\Delta; r) and t g (\Delta; \Delta; r) are bounded by a polynomial.
4. For any fixed n, both t f (n; \Delta) and t g (n; n; \Delta) are non-increasing functions.
5. For any x ∈ I_A, for any r > 1, and for any y ∈ sol_B(f(x, r)), R_B(f(x, r), y) ≤ r implies R_A(x, g(x, y, r)) ≤ 1 + α(r − 1).
The triple (f; g; ff) is said to be an AP-reduction from A to B.
According to the above definition, functions like 2^{1/(r−1)} n^h or n^{1/(r−1)} are admissible bounds
on the computation time of f and g, while this is not true for functions like n^r or 2^n.
We also let the function f depend on the performance ratio because this feature will turn out to be useful
in order to prove interesting characterizations of complete problems for approximation classes.
Observe that, clearly, the AP-reducibility is a generalization of the E-reducibility. Moreover,
it is easy to see that, contrary to the E-reducibility, any PTAS problem is AP-reducible to any
NPO problem.
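Operationally, an AP-reduction (f, g, α) turns any r′-approximate algorithm for B into a (1 + α(r′ − 1))-approximate algorithm for A; the following hypothetical Python sketch shows the composition (all names are ours).

```python
def compose(f, g, alpha: float, approx_B, r_prime: float):
    """Turn an r'-approximate algorithm for B into a (1 + alpha*(r'-1))-approximate
    algorithm for A, via an AP-reduction (f, g, alpha) from A to B."""
    def approx_A(x):
        x_B = f(x, r_prime)          # instance mapping (may depend on the target ratio)
        y_B = approx_B(x_B)          # guarantees R_B(f(x, r'), y_B) <= r'
        return g(x, y_B, r_prime)    # condition 5: R_A(x, .) <= 1 + alpha * (r' - 1)
    return approx_A
```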
As far as we know, this reducibility is the strictest one appearing in the literature that allows
to obtain natural APX-completeness results (for instance, the APX-completeness of Maximum
Satisfiability [14, 26]).
3. NPO-complete problems
We will in this section prove that there are natural problems that are complete for the classes
NPO and NPO PB. Previously, completeness results have been obtained just for Max NPO,
Min NPO, Max PB, and Min PB [12, 33, 4, 24]. One example of such a result is the following
theorem.
Theorem 2. Minimum Weighted Satisfiability is Min NPO-complete and Maximum
Weighted Satisfiability is Max NPO-complete, even if only a subset {v_1, …, v_s} of the
variables has nonzero weight w(v_i) = 2^i, and any truth assignment satisfying the instance
gives the value true to at least one v_i.
We will construct AP-reductions from maximization problems to minimization problems
and vice versa. Using these reductions we will show that a problem that is Max NPO-complete
or Min NPO-complete in fact is complete for the whole of NPO, and that a problem that is
Max PB-complete or Min PB-complete is complete for the whole of NPO PB.
Theorem 3. Minimum Weighted Satisfiability and Maximum Weighted Satisfiability
are NPO-complete.
Proof. In order to establish the NPO-completeness of Minimum Weighted Satisfiability
we just have to show that there is an AP-reduction from a Max NPO-complete problem to
Minimum Weighted Satisfiability. As the Max NPO-complete problem we will use the
restricted version of Maximum Weighted Satisfiability from Theorem 2.
Let x be an instance of Maximum Weighted Satisfiability, i.e. a formula φ over variables
v_1, ..., v_s with nonzero weight, possibly together with some variables with weight zero. We will first
give a simple reduction that preserves the approximability within the factor 2, and then adjust
it to obtain an AP-reduction.
Let f(x) be the formula φ ∧ α, where α is the conjunctive normal form of a constraint linking the
v-variables to new variables z_1, ..., z_s, which carry weights w(z_i),
while all other variables (even the v-variables) have zero weight. If y is a satisfying assignment
of f(x), let g(x, y) be the restriction of the assignment to the variables that occur in φ. This
assignment clearly satisfies φ.
Note that exactly one of the z-variables is true in any satisfying assignment of f(x). Indeed,
if all z-variables were false, then all v-variables would be false and φ would not be satisfied. On
the other hand, if both z_i and z_j were true for i ≠ j, then some v-variable would be both true and false,
which is a contradiction. Hence, the measure of y as a solution of the constructed Minimum
Weighted Satisfiability instance determines the measure of g(x, y) as a solution of x within a factor of 2.
In particular this holds for the optimum solution. Thus the performance ratio obtained for Maximum
Weighted Satisfiability is within a factor of 2 of the performance ratio of y,
which means that the reduction preserves the approximability within 2.
Let us now extend the construction in order to obtain a better bound on R(x, g(x, y))
for every nonnegative integer k; the reduction described above corresponds to k = 0.
For any i ∈ {1, ..., s} and for any (b_1, ..., b_{k(i)}) ∈ {0, 1}^{k(i)} we
have a variable z_{i,b_1,...,b_{k(i)}}. Let f_k(x) be the conjunction of φ with the formulas
α_{i,b_1,...,b_{k(i)}}, taken over all i ∈ {1, ..., s} and (b_1, ..., b_{k(i)}) ∈ {0, 1}^{k(i)},
where α_{i,b_1,...,b_{k(i)}} is the conjunctive normal form of a constraint relating
z_{i,b_1,...,b_{k(i)}} to the v-variables as above. Finally, the weights of the z-variables are defined
in terms of a constant K, using a ceiling to obtain integer values
(by choosing K greater than 2^k we can disregard the effect of the ceiling operation in the
following computations).
As in the previous reduction, exactly one of the z-variables is true in any satisfying assignment
of f_k(x). If, in a solution y of f_k(x), the variable z_{i,b_1,...,b_{k(i)}} is true, then its weight
pins down the measure m(x, g(x, y)) up to a small relative error; the same happens in the
complementary case. In both cases, we thus get a bound on the error introduced by the reduction,
and therefore a bound on R(x, g(x, y)). Given any r > 1, if we choose k large enough as a function
of r, we obtain R(x, g(x, y)) ≤ 1 + 2(r − 1) whenever y is an r-approximate solution of f_k(x).
This is obviously an AP-reduction
with α = 2.
A very similar proof can be used to show that Maximum Weighted Satisfiability is
NPO-complete. ut
Corollary 1. Any Min NPO-complete problem is NPO-complete and any Max NPO-complete
problem is NPO-complete.
As an application of the above corollary, we have that the Minimum 0−1 Programming
problem is NPO-complete.
We can also show that there are natural complete problems for the class of polynomially
bounded NPO problems.
Theorem 4. Maximum PB 0−1 Programming and Minimum PB 0−1 Programming
are NPO PB-complete.
Proof. Maximum PB 0−1 Programming is known to be Max PB-complete [4] and Minimum
PB 0−1 Programming is known to be Min PB-complete [24]. Thus we just have to show
that there are AP-reductions from Minimum PB 0−1 Programming to Maximum PB 0−1
Programming and from Maximum PB 0−1 Programming to Minimum PB 0−1
Programming.
Both reductions use exactly the same construction. Given a satisfying variable assignment,
we define the one-variables to be the variables occurring in the objective function that have
the value one. The objective value is the number of one-variables plus 1.
The objective value of a solution is encoded by introducing an order of the one-variables.
The order is encoded by a squared number of variables, as depicted in Fig. 2. The idea is to invert
the objective values, so that a solution without one-variables corresponds to an objective value
of n of the constructed problem, and, in general, a solution with p one-variables corresponds
to an objective value of n − p.
Figure 2: The idea of the reduction from Minimum/Maximum PB 0−1 Programming to Maximum/Minimum
PB 0−1 Programming. The variable x_i^j is 1 if and only if v_i is the jth one-variable
in the solution. There is at most one 1 in each column and in each row, there are only zeros in the
upper part of the matrix, and the size of the matrix reflects the size of the solution.
The reductions are constructed as follows. Given an instance of Minimum PB Programming
or Maximum PB i.e. an objective function 1+
some inequalities over variables
and the following inequalities:
most one 1 in each column) (1)
most one 1 in each row) (2)
(only zeros in upper part) (3)
Besides these inequalities we include all inequalities from the original problem, but we substitute
each variable v i with the sum
. The variables in U (that do not occur in the objective
are left intact.
The objective function is defined as
In order to express the objective function with only binary coefficients we have to introduce n
new variables y
1)c. The objective function then is
One can now verify that a solution
of the original problem instance with s one-variables (i.e. with an objective value of s + 1)
will exactly correspond to a solution of the constructed problem instance with objective value
n − s, and vice versa.
Suppose that the optimum solution to the original problem instance has M one-variables;
then the performance ratio (s + 1)/(M + 1) of a solution with s one-variables corresponds to the
performance ratio of the corresponding solution of the constructed problem, up to an additive term
m/n which is the relative error due to the floor operation. By
choosing n large enough the relative error can be made arbitrarily small. Thus it is easy to see
that the reduction is an AP-reduction. ut
Corollary 2. Any Min PB-complete problem is NPO PB-complete and any Max PB-complete
problem is NPO PB-complete.
4. Query complexity and APX-intermediate problems
The existence of APX-intermediate problems (that is, problems in APX which are not APX-
complete) has already been shown in [12] where an artificial such problem is obtained by
diagonalization techniques similar to those developed to prove the existence of NP-intermediate
problems [29]. In this section, we prove that "natural" APX-intermediate problems exist: for
instance, we will show that Minimum Bin Packing is APX-intermediate. In order to prove
this result, we will establish new connections between the approximability properties and the
query complexity of NP-hard optimization problems. To this aim, let us first recall the following
definition.
Definition 9. A language L belongs to the class P^{NP[f(n)]} if it is decidable by a polynomial-time
oracle Turing machine which asks at most f(n) queries to an NP-complete oracle, where
n is the input size. The class QH is equal to the union ∪_{k≥1} P^{NP[k]}.
Similarly, we can define the class of functions FP NP[f(n)] [28]. The following result has been
proved in [21, 22].
Theorem 5. If a constant k exists such that P^{NP[k]} = P^{NP[k+1]},
then the polynomial-time hierarchy collapses.
The query-complexity of the "non-constructive" approximation of several NP-hard optimization
problems has been studied by using hardness results with respect to classes of functions
FP NP[\Delta] [7, 9]. However, this approach cannot be applied to analyze the complexity of
"constructing" approximate solutions. To overcome this limitation, we use a novel approach
that basically consists of considering how helpful is an approximation algorithm for a given
optimization problem to solve decision problems.
Definition 10. Given an NPO problem A and a rational r ≥ 1, A_r is a multi-valued partial
function that, given an instance x of A, returns the set of feasible solutions y of x such that
R(x, y) ≤ r.
Definition 11. Given an NPO problem A and a rational r ≥ 1, a language L belongs to P^{A_r}
if two polynomial-time computable functions f and g exist such that, for any x, f(x) is an
instance of A with sol(f(x)) ≠ ∅, and, for any y ∈ A_r(f(x)), g(x, y) = 1 if and only if x ∈ L.
The class AQH(A) is equal to the union ∪_{r>1} P^{A_r}.
The following result states that an approximation problem does not help more than a
constant number of queries to an NP-complete problem. It is worth observing that, in general,
an approximate solution, even though not very helpful, requires more than a logarithmic number
of queries to be computed [8].
Proposition 2. For any problem A in APX, AQH(A) ' QH.
Proof. Assume that A is a maximization problem (the proof for minimization problems is
similar). Let T be an r-approximate algorithm for A, for some r > 1, and let L ∈ P^{A_ρ} for some
ρ > 1. Two polynomial-time computable functions f and g then exist witnessing this latter
fact. For any x, let m = m(f(x), T(f(x))); then opt(f(x)) lies in the interval [m, rm]. We can then partition
the interval [m, rm] into ⌊log_ρ r⌋ + 1 subintervals whose endpoints are successive powers of ρ times m,
and start looking for the subinterval containing the optimum value (a similar technique has
been used in [7, 9]). This can clearly be done using ⌊log_ρ r⌋ + 1 queries to an NP-complete
oracle. One more query is sufficient to know whether a feasible solution y exists whose value
lies in that interval and such that g(x, y) = 1. Since such a y is ρ-approximate, it follows that L can
be decided using ⌊log_ρ r⌋ + 2 queries, that is, L ∈ QH. ut
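As a sketch of the counting behind this argument (our own restatement; the exact constants in the original text were partly lost), the partition can be written as
\[
  [\,m,\; r\,m\,] \;\subseteq\; \bigcup_{i=0}^{q} \bigl[\, m\rho^{\,i},\; m\rho^{\,i+1}\,\bigr],
  \qquad q = \lfloor \log_{\rho} r \rfloor ,
\]
so that at most q + 1 oracle questions of the form "is opt(f(x)) ≥ mρ^{i}?" identify the subinterval containing opt(f(x)), and one further question asks whether a feasible y with value in that subinterval and g(x, y) = 1 exists; the total is the constant q + 2.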
Recall that an NPO problem admits an asymptotic polynomial-time approximation scheme
if an algorithm T exists such that, for any x and for any r > 1, R(x, T(x, r)) ≤ r + k/opt(x)
with k constant, and the time complexity of T(x, r) is polynomial with respect to |x|. The
class of problems that admit an asymptotic polynomial-time approximation scheme is usually
denoted by PTAS^∞. The following result shows that, for this class, the previous fact can be
strengthened.
Proposition 3. Let A ∈ PTAS^∞. Then a constant h exists such that AQH(A) ⊆ P^{NP[h]}.
Proof. Let A be a minimization problem in PTAS^∞ (the proof for maximization problems is
very similar). By definition, a constant k and an algorithm T exist such that, for any instance
x and for any rational r > 1, R(x, T(x, r)) ≤ r + k/opt(x).
We will now prove that a constant h exists such that, for any r > 1, a function l_r ∈ FP^{NP[h−1]}
exists such that, for any instance x of the problem A, opt(x) ≤ l_r(x) ≤ r · opt(x).
Intuitively, the functions l_r form a non-constructive approximation scheme that is computable by
a constant number of queries to an NP-complete oracle. Given an instance x, we can check
whether sol(x) = ∅ by means of a single query to an NP oracle, so that we can restrict ourselves
to instances such that sol(x) ≠ ∅ (and thus opt(x) ≥ 1). Note that, for these instances, T(·, 2)
is a (k + 2)-approximate algorithm for A. Let us fix an r > 1, let ε = r − 1,
and let a = m(x, T(x, 2)). We have to distinguish two cases.
1. a ≥ 2k(k + 2)/ε: in this case, opt(x) ≥ 2k/ε, that is, opt(x)·ε/2 ≥ k. Then the solution
y = T(x, 1 + ε/2) is an r-approximate solution for x, and we can set l_r(x) = m(x, y) (in this case l_r
has been computed by only one query).
2. a < 2k(k + 2)/ε: in this case, opt(x) ≤ a. Clearly, a constant number of
queries to NP (of the order of log k(k + 2)) is sufficient to find the optimum value opt(x) by
means of a binary search technique: in this case l_r(x) = opt(x) has been computed by those
queries.
Let now L be a language in AQH(A); then L ∈ P^{A_r} for some r > 1. Let f and g be the
functions witnessing that L ∈ P^{A_r}. Observe that, for any x, x ∈ L if and only if a solution
y for f(x) exists such that m(f(x), y) ≤ l_r(f(x)) and g(x, y) = 1: that is, given l_r(f(x)),
deciding whether x ∈ L is an NP problem. Since l_r(f(x)) is computable by means of at most
h − 1 queries to NP, we have that L ∈ P^{NP[h]}. ut
The next proposition, instead, states that any language L in the query hierarchy can be
decided using just one query to A r where A is APX-complete and r depends on the level of
the query hierarchy L belongs to. In order to prove this proposition, we need the following
technical result 2 .
2 Recall that the NP-complete problem Partition is defined as follows: given a set U of items and a size
s(u) for any u ∈ U, does there exist a subset U' ⊆ U such that Σ_{u∈U'} s(u) = Σ_{u∈U−U'} s(u)?
Lemma 1. For any APX-complete problem A and for any k, two polynomial-time computable
functions f and g and a constant r exist such that, for any k-tuple (x_1, ..., x_k) of instances of
Partition, f(x_1, ..., x_k) = x is an instance of A, and if y is a solution of x whose performance
ratio is smaller than r then g(x, y) = (b_1, ..., b_k) where, for any i, b_i = 1 if and only if x_i
is a yes-instance.
Proof. Let x be an instance of Partition for loss of generality,
we can assume that the U i s are pairwise disjoint and that, for any i,
2. Let
s; -) be an instance of Minimum Ordered Bin Packing defined as follows (a
similar construction has been used in [37]).
1.
where the v i s are new items.
2. For any u 2 U i ,
3. For any i ! j - k, for any u 2 U i , and for any u
Any solution of w must be formed by a sequence of packings of U such that, for
any i, the bins used for U i are separated by the bins used for U i+1 by means of one bin which
is completely filled by v i . In particular, the packings of the U i s in any optimum solution must
use either two or three bins: two bins are used if and only if x i is a yes-instance. The optimum
measure thus is at most 4k \Gamma 1 so that any (1 + 1=(4k))-approximate solution is an optimum
solution.
Since Minimum Ordered Bin Packing belongs to APX [41] and A is APX-complete,
then an AP-reduction (f exists from Minimum Ordered Bin Packing to A. We can
then define 1+1=(4ffk). For any r-approximate
solution y of x, the fourth property of the AP-reducibility implies that
is a (1 1=(4k))-approximate solution of w and thus an optimum solution of w. From z, we
can easily derive the right answers to the k queries x
We are now able to prove the following result.
Proposition 4. For any APX-complete problem A, QH ' AQH(A).
Proof. Let L ∈ P^{NP[h]} for some h. It is well known (see, for instance, [3])
that L can be reduced to the problem of answering k = 2^h − 1 non-adaptive queries to NP.
More formally, two polynomial-time computable functions t_1 and t_2 exist such that, for any x,
t_1(x) = (x_1, ..., x_k) are k instances of the Partition problem, and for any
(b_1, ..., b_k) ∈ {0, 1}^k, t_2(x, b_1, ..., b_k) ∈ {0, 1}. Moreover, if, for any j, b_j = 1 if and only if x_j
is a yes-instance, then t_2(x, b_1, ..., b_k) = 1 if and only if x ∈ L.
Let now f, g and r be the two functions and the constant of Lemma 1 applied to problem A
and constant k. For any x, f(t_1(x)) = x' is an instance of A such that, if y is an r-approximate
solution for x', then t_2(x, g(x', y)) = 1 if and only if x ∈ L. Thus, L ∈ P^{A_r}. ut
By combining Propositions 2 and 4, we thus have the following theorem that characterizes
the approximation query hierarchy of the hardest problems in APX.
Theorem 6. For any APX-complete problem A, AQH(A) = QH.
Finally, we have the following result that states the existence of natural intermediate problems
in APX.
Theorem 7. If the polynomial-time hierarchy does not collapse, then Minimum Bin Packing,
Minimum Degree Spanning Tree, and Minimum Edge Coloring are APX-intermediate.
Proof. From Proposition 3 and from the fact that Minimum Bin Packing is in PTAS^∞ [25], it
follows that AQH(Minimum Bin Packing) ⊆ P^{NP[h]} for a given h. If Minimum Bin Packing
is APX-complete, then from Proposition 4 it follows that QH ⊆ P^{NP[h]}. From Theorem 5 we
thus have the collapse of the polynomial-time hierarchy. The proofs for Minimum Degree
Spanning Tree and Minimum Edge Coloring are identical and use the results of [18, 15].
ut
Observe that the previous result does not seem to be obtainable by using the weaker hypothesis
P ≠ NP, as shown by the following theorem.
Theorem 8. If P = NP, then Minimum Bin Packing is APX-complete.
Proof. Assume P = NP. We present an AP-reduction from Maximum Satisfiability
to Minimum Bin Packing. Since P = NP (and thus NP = coNP), a nondeterministic polynomial-time Turing
machine M exists that, given in input an instance φ of Maximum Satisfiability, has an
accepting computation and all accepting computations halt with an optimum solution for φ
written on the tape. Indeed, M guesses an integer k, an assignment τ such that m(φ, τ) ≥ k, and
a proof of the fact that opt(φ) ≤ k. From the proof of Cook's theorem it follows that, given φ, we
can find in polynomial time a formula φ' such that φ' is satisfiable and such that, given any satisfying
assignment for φ', we can find in polynomial time an optimum solution for φ.
this construction with the NP-completeness proof of the Minimum Bin Packing problem, we
obtain two polynomial-time computable functions t 1 and t 2 such that, for any instance OE of
Maximum Satisfiability, t 1 is an instance of Minimum Bin Packing such that
optimum solution y of x OE , t 2 is an optimum solution of OE.
Observe that, by construction, an r-approximate solution for x OE is indeed an optimum solution
provided that r ! 3=2. Let T be a 4/3-approximate algorithm for Maximum Satisfiability
[42, 17]. The reduction from Maximum Satisfiability to Minimum Bin Packing is defined
as follows: f(OE;
It is immediate to verify that the above is an AP-reduction with
Finally, note that the above result can be extended to any APX problem which is NP-hard
to approximate within a given performance ratio.
4.1. A remark on Maximum Clique
The following lemma is the analogoue of Proposition 2 within NPO PB and can be proved
similarly by binary search techniques.
Lemma 2. For any NPO PB problem A and for any r ? 1, P A r ' P NP[log logn+O(1)] .
From this lemma, from the fact that P NP[logn] is contained in P MC 1 where MC stands for
Maximum Clique [28], and from the fact that if a constant k exists such that
then the polynomial-time hierarchy collapses [40], it follows the next result that solves an open
question posed in [7]. Informally, this result states that it is not possible to reduce the problem
of finding a maximum clique to the problem of finding a 2-approximate clique (unless the
polynomial-time hierarchy collapses).
Theorem 9. If P MC then the polynomial-time hierarchy collapses.
5. Query complexity and completeness in approximation classes
In this final section, we shall give a full characterization of problems complete for poly-APX
and for APX, respectively, in terms of hardness of the corresponding approximation problems
with respect to classes of partial multi-valued functions and in terms of suitably defined
combinatorial properties.
The classes of functions we will refer to have been introduced in [8] as follows.
Definition 12. FNP NP[q(n)] is the class of partial multi-valued functions computable by non-deterministic
polynomial-time Turing machines which ask at most q(n) queries to an NP oracle
in the entire computation tree. 3
In order to talk about hardness with respect to these classes we will use the following
reducibility which is an extension of both metric reducibility [28] and one-query reducibility
[13] and has been introduced in [8].
Definition 13. Let F and G be two partial multi-valued functions. We say that F many-one
reduces to G (in symbols, F-mvG) if two polynomial-time algorithms t 1 and t 2 exist such
that, for any x in the domain of F , t 1 (x) is in the domain of G and, for any y 2 G(t 1 (x)),
The combinatorial property used to characterize poly-APX-complete problems is the well-known
self-improvability (see, for instance, [34]).
Definition 14. A problem A is self-improvable if two algorithms t 1 and t 2 exist such that, for
any instance x of A and for any two rationals r is an instance of A
and, for any y 0 2 A r 2
(x). Moreover, for any fixed r 1 and r 2 , the
running time of t 1 and t 2 is polynomial.
We are now ready to state the first result of this section.
Theorem 10. A poly-APX problem A is poly-APX-complete if and only if it is self-improvable
and A r 0
is FNP NP[log log n+O(1)] -hard for some r 0 ? 1.
Proof. Let A be a poly-APX-complete problem. Since Maximum Clique is self-improvable
[16] and poly-APX-complete [26] and since the equivalence with respect to the AP-reducibility
preserves the self-improvability property (see [34]), we have that A is self-improvable. It is then
sufficient to prove that A 2 is hard for FNP NP[log logn+O(1)] .
From the poly-APX-completeness of A we have that Maximum Clique -AP A: let ff
be the constant of this reduction. From Theorem 12 of [8] we have that any function F in
FNP NP[log log n+O(1)] many-one reduces to Maximum Clique 1+ff . From the definition of AP-
reducibility, we also have that Maximum Clique 1+ff -mvA 2 so that F many-one reduces to
A 2 .
Conversely, let A be a poly-APX self-improvable problem such that, for some r 0 , A r 0
is FNP NP[loglogn+O(1)] -hard. We will show that, for any problem B in poly-APX, B is
AP-reducible to A. To this aim, we introduce the following partial multi-valued function
multisat: given in input a sequence (φ_1, ..., φ_m) of instances of the satisfiability problem with
m = O(log n) and such that, for any i, if φ_{i+1} is satisfiable then φ_i is satisfiable, a possible
output is a satisfying truth-assignment for φ_{i*}, where i* is the largest index i such that φ_i is satisfiable. From
the proof of Theorem 12 of [8] it follows that this function is FNP^{NP[log log n+O(1)]}-complete.
3 We say that a multi-valued partial function F is computable by a nondeterministic Turing machine N if,
for any x in the domain of F, a halting computation path of N(x) exists and any halting computation path of
N(x) outputs a value of F(x).
By making use of techniques similar to those of the proof of Proposition 2, it is easy to see
that, since B is in poly-APX, two algorithms t B
2 exist such that, for any fixed r ? 1,
many-one reduction from B r to multisat. Moreover, since A r0 is
FNP NP[log log n+O(1)] -hard, then a many-one reduction (t M
exists from multisat to A r 0
Finally, let t A
2 be the functions witnessing the self-improvability of A.
The AP-reduction from B to A can then be derived as follows:
\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma! x 00
\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma\Gamma! x 000
y
It is easy to see that if y 000 is an r-approximate solution for the instance x 000 of A, then y is an
r-approximate solution of the instance x of B. That is, B is AP-reducible to A with
The above theorem cannot be proved without the dependency of both f and g on r in the
definition of AP-reducibility. Indeed, it is possible to prove that if only g has this property
then, unless the polynomial-time hierarchy collapses, a self-improvable problem A exists such
that A 2 is FNP NP[loglogn+O(1)] -hard and A is not poly-APX-complete.
In order to characterize APX-complete problems, we have to define a different combinatorial
property. Intuitively, this property states that it is possible to merge several instances into one
instance in an approximation preserving fashion.
Definition 15. An NPO problem A is linearly additive if a constant β and two algorithms
t_1 and t_2 exist such that, for any rational r > 1 and for any sequence x_1, ..., x_k of instances
of A, x' = t_1(x_1, ..., x_k, r) is an instance of A and, for any y' ∈ A_{1+(r−1)β/k}(x'),
t_2(x_1, ..., x_k, y', r) = (y_1, ..., y_k) where each y_i is an r-approximate solution of x_i. Moreover, the
running time of t_1 and t_2 is polynomial for every fixed r.
Theorem 11. An APX problem A is APX-complete if and only if it is linearly additive and
a constant r 0 exists such that A r0 is FNP NP[1] -hard.
Proof. Let A be an r A -approximable APX-complete problem. From the proof of Proposition 4 a
constant r 0 exists such that A r 0
is hard for FNP NP[1] . In order to prove the linear additivity, fix
any r ? 1 and let x be instances of A. Without loss of generality, we can assume r ! r A
(otherwise the k instances can be r-approximated by using the r A -approximate algorithm).
For any the problem of finding an r-approximate solution y i for x i is reducible
to the problem of constructively solving a set of dlog r r A e instances of Partition. Observe
that dlog r r A e - c=(r \Gamma 1) for a certain constant c depending on r A . Moreover, we claim that a
constant fl exists such that constructively solving kc=(r \Gamma 1) instances of Partition is reducible
to 1)=kc)-approximating a single instance of A (indeed, this can be shown along the
lines of the proof of Proposition 4). That is, A is linearly additive with
Conversely, let A be a linearly additive APX problem such that A r0 is FNP NP[1] -hard for
some r 0 and let B be an r B -approximable problem. Given an instance x of B, for any r ?
1 we can reduce the problem of finding an r-approximate solution for x to the problem of
constructively solving c=(r \Gamma 1) instances of Partition, for a proper constant c not depending
on r. Each of these questions is reducible to A r 0
, since any NP problem can be constructively
solved by an FNP NP[1] function. From linear additivity, it follows that r 0 -approximating c=(r\Gamma1)
instances of A is reducible to (1 1)=c)-approximating a single instance of A.
This is an AP-reduction from B to A with
Note that linear additivity plays for APX more or less the same role of self-improvability for
poly-APX. These two properties are, in a certain sense, one the opposite of the other: while the
usefulness of APX-complete approximation problems to solve decision problems depends on the
performance ratio and does not depend on the size of the instance, the usefulness of poly-APX-
complete approximation problems depends on the size of the instance and does not depend
on the performance ratio. Indeed, it is possible to prove that no APX-complete problem can
be self-improvable (unless and that no poly-APX-complete problem can be linearly
additive (unless the polynomial-time hierarchy collapses).
It is now an interesting question to find a characterizing combinatorial property of log-APX-
complete problems. Indeed, we have not been able to establish this characterization: at present,
we can only state that it cannot be based on the self-improvability property as shown by the
following result.
Theorem 12. No log-APX-complete problem can be self-improvable unless the polynomial-time
hierarchy collapses.
Proof. Let us consider the optimization problem Max Number of Satisfiable Formulas
(in short, MNSF) defined as follows.
Instance: Set of m boolean formulas φ_1, ..., φ_m in 3CNF, such that φ_1 is a tautology and
m ≤ log n, where n is the size of the input instance.
Solution: Truth assignment τ to the variables of φ_1, ..., φ_m.
Measure: The number of satisfied formulas, i.e., |{i : φ_i is satisfied by τ}|.
Clearly, MNSF is in log-APX, since the measure of any assignment τ is at least 1, and the
optimum value is always smaller than log n, where n is the size of the input. We will show that,
for any r < 2, MNSF_r is hard for FNP^{NP[log log log n − 1]}.
Given log log n queries to an NP-complete language (of size polynomial in n) x
we can construct an instance of MNSF where OE 1 is a tautology and, for i - 1,
the formulas OE are satisfiable if and only if at least i instances among
are yes-instances (these formulas can be easily constructed using the standard
proof of Cook's theorem). Note that adding dummy clauses to some
formulas, we can achieve the bound m - log jOE j. Moreover, from an r-approximate
solution for \Phi we can decide how many instances in x log logn are yes-instances, and we
can also recover solutions for such instances. That is, any function in FNP NP[loglog logn\Gamma1] is
many-one reducible to MNSF r .
Let A be a self-improvable log-APX-complete problem. Then, for any function F 2
FNP NP[log log logn\Gamma1] , F-mvMNSF 1:5 -mvA 1+ff=2 -mvA 2 16 where ff is the constant in the AP-
reduction from MNSF to A and where the last reduction is due to the self-improvability of
A. Thus, for any x, computing F (x) is reducible to finding a 2 16 -approximate solution for an
instance x 0 with jx 0 j - jxj c for a certain constant c. Since A 2 log-APX, it is possible to find
in polynomial time a (k log jx 0 j)-approximate solution y for x 0 where k is a constant. From
y, by means of binary search techniques, we can find a 2 16 -approximate solution for x 0 using
adaptive queries to NP where
the last inequality surely holds for sufficiently large jxj. Thus,
FNP NP[log loglog n\Gamma1] ' FNP NP[loglog logn\Gamma2]
which implies the collapse of the polynomial-time hierarchy [40]. ut
As a consequence of the above theorem and of the results of [26], we conjecture that the
minimum set cover problem is not self-improvable.
--R
"Proof verification and hardness of approximation problems"
Structural complexity I.
"Bounded queries to SAT and the Boolean hierarchy"
"On the complexity of approximating the independent set problem"
Introduction to the theory of complexity.
"On the query complexity of clique size and maximum satisfiability"
"A machine model for NP-approximation problems and the revenge of the Boolean hierarchy"
"On bounded queries and approximation"
"The complexity of theorem proving procedures"
"A compendium of NP optimization problems"
"Completeness in approximation classes"
"Relative Complexity of Evaluating the Optimum Cost and Constructing the Optimum for Maximization Problems"
"On approximation scheme preserving reducibility and its applications"
"Approximating the minimum degree spanning tree to within one from the optimal degree"
Computers and intractability: a guide to the theory of NP-completeness
"New 3/4-approximation algorithms for the maximum satisfiability problem"
"The NP-completeness of edge-coloring"
"Decision trees and downward closures"
"Approximation algorithms for combinatorial problems"
"The polynomial time hierarchy collapses if the Boolean hierarchy collapses"
"ERRATUM: The Polynomial Time Hierarchy Collapses if the Boolean Hierarchy Collapses"
On the approximability of NP-complete optimization problems
"Polynomially bounded minimization problems that are hard to approximate"
"An efficient approximation scheme for the one-dimensional bin packing problem"
"On syntactic versus computational views of approximability"
"Approximation properties of NP minimization classes"
"The complexity of optimization problems"
"On the structure of polynomial-time reducibility"
"On fl-reducibility versus polynomial time many-one reducibility"
"On the hardness of approximating minimization problems"
"Lecture notes on approximation algorithms"
"On approximation preserving reductions: Complete problems and robust measures"
"Quantifiers and approximation"
Computational complexity.
"Optimization, approximation, and complexity classes"
"Bounds for assembly line balancing heuristics"
"Graph isomorphism is in the low hierarchy"
"Continuous reductions among combinatorial optimization problems"
"Bounded query computations"
"Assembly line balancing as generalized bin packing"
"On the approximation of maximum satisfiability"
--TR
--CTR
Taneli Mielikinen , Esko Ukkonen, The complexity of maximum matroid-greedoid intersection and weighted greedoid maximization, Discrete Applied Mathematics, v.154 n.4, p.684-691, 15 March 2006
Tapio Elomaa , Matti Kriinen, The Difficulty of Reduced Error Pruning of Leveled Branching Programs, Annals of Mathematics and Artificial Intelligence, v.41 n.1, p.111-124, May 2004
Andreas Bley, On the complexity of vertex-disjoint length-restricted path problems, Computational Complexity, v.12 n.3-4, p.131-149, September 2004
Bruno Escoffier , Vangelis Th. Paschos, Completeness in approximation classes beyond APX, Theoretical Computer Science, v.359 n.1, p.369-377, 14 August 2006 | approximation algorithms;complexity classes;reducibilities |
347814 | A computational study of routing algorithms for realistic transportation networks. | We carry out an experimental analysis of a number of shortest-path (routing) algorithms investigated in the context of the TRANSIMS (TRansportation ANalysis and SIMulation System) project. The main focus of the paper is to study how various heuristic as well as exact solutions and associated data structures affect the computational performance of the software developed for realistic transportation networks. For this purpose we have used a road network representing, with high degree of resolution, the Dallas Fort-Worth urban area.We discuss and experimentally analyze various one-to-one shortest-path algorithms. These include classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure of the input instances.Computational results are provided to compare empirically the efficiency of various algorithms. Our studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies. | Introduction
TRANSIMS is a multi-year project at the Los Alamos National Laboratory and is funded by the
Department of Transportation and by the Environmental Protection Agency. The main purpose
of TRANSIMS is to develop new methods for studying transportation planning questions. A
prototypical question considered in this context would be to study the economic and social
impact of building a new freeway in a large metropolitan area. We refer the reader to [TR+95a]
and the web-site http://transims.tsasa.lanl.gov to obtain extensive details about the
TRANSIMS project.
The main goal of the paper is to describe the computational experiences in engineering various
path finding algorithms in the context of TRANSIMS. Most of the algorithms discussed
here are not new; they have been discussed in the Operations Research and Computer Science
community. Although extensive research has been done on theoretical and experimental evaluation
of shortest path algorithms, most of the empirical research has focused on randomly
generated networks and special classes of networks such as grids. In contrast, not much work
has been done to study the computational behavior of shortest path and related routing algorithms
on realistic traffic networks. The realistic networks differ from random networks as well
as from homogeneous (structured networks) in the following significant ways:
(i) Realistic networks typically have a very low average degree. In fact in our case the average
degree of the network was around 2.6. Similar numbers have been reported in [ZN98]. In
contrast random networks used in [Pa84] have in some cases average degree of up to 10.
(ii) Realistic networks are not very uniform. In fact, one typically sees one or two large clusters
(downtown and neighboring areas) and then small clusters spread out throughout the entire
area of interest.
(iii) For most empirical studies with random networks, the edge weights are chosen independently
and uniformly at random from a given interval. In contrast, realistic networks typically
have short links.
With the above reasons and this specific application in mind, the main focus of this paper is to
carry out an experimental analysis of a number of shortest path algorithms on a real transportation
network, subject to practical constraints imposed by the overall system. See also Section 6,
under "Peculiarities of the network and its Effect", for some intuition about which features of the
network we consider crucial for our observations.
The rest of the report is organized as follows. Section 2 contains problem statement and
related discussion. In Section 3, we discuss the various algorithms evaluated in this paper. Section
4 summarizes the results obtained. Section 5 describes our experimental setup. Section 6
describes the experimental results obtained. Section 7 contains a detailed discussion of our re-
sults. Finally, in Section 8 we give concluding remarks and directions for future research. We
have also included an Appendix (Section 8.1) that describes the relevant algorithms for finding
shortest paths in detail.
Problem specification and justification
The problems discussed above can be formally described as follows: let G(V; E) be a (un)directed
graph. Each edge e 2 E has one attribute w(e) denoting the weight (or cost) of the edge e. Here,
we assume that the weights are non-negative floating point numbers.
Definition 2.1 One-to-One Shortest Path:
Given a directed, weighted graph G and a source-destination pair (s, d), find a shortest (with respect to w)
path p in G from s to d.
Note that our experiments are carried out for shortest path between a pair of nodes, as
against finding shortest path trees. Much of the literature on experimental analysis uses the
second measure to gauge the efficiency. Our choice to consider the running time of the one-to-
one shortest path computation is motivated by the following observations:
1. In our setting we need to compute shortest paths for roughly a million travelers. In highly
detailed networks, most of these travelers have different starting points (for example, in
Portland we have 1.5 million travelers and 200 000 possible starting locations). Thus, for
any given starting location, we could re-use the tree computation only for about ten other
travelers.
2. We wanted our algorithms to be extensible to take into account additional features/constraints
imposed by the system. For example, each traveler typically has a different starting time
for his/her trip. Since we use our algorithms for time dependent networks (networks
in which edge weights vary with time), the shortest path tree will be different for each
traveler. As a second example we need to find paths for travelers with individual mode
choices in a multi-modal network. Formally, we are given a directed labeled, weighted,
graph G representing a transportation network with the labels on edges representing the
various modal attributes (e.g. a label t might represent a rail line). There the goal is to find
shortest (simple) paths subject to certain labeling constraints on the set of feasible paths.
In general, the criteria for path selection varies so much from traveler to traveler that the
additional overhead for the "re-use" of information is unlikely to pay off.
3. The TRANSIMS framework allows us to use paths that are not necessarily optimal. This
motivates investigation of very fast heuristic algorithms that obtain only near optimal
paths (e.g. the modified A algorithm discussed here). For most of these heuristics, the
idea is to bias a more focused search towards the destination - thus naturally motivating
the study of one-one shortest path algorithms.
4. Finally, the networks we anticipate to deal with contain more than 80 000 nodes and
around 120 000 edges. For such networks storing all shortest path trees amounts to huge
memory overheads.
3 Choice of algorithms
Important objectives used to evaluate the performance of the algorithms include (i) time taken
for computation on real networks, (ii) quality of solution obtained, (iii) ease of implementation
and (iv) extensibility of the algorithm for solving other variants of the shortest path problem. A
number of interesting engineering questions were encountered in the process. We experimentally
evaluated a number of variants of Dijkstra's algorithm. The basic algorithm was chosen
due to the recommendations made in Cherkassky, Goldberg and Radzik [CGR96] and Zhan
and Noon [ZN98]. The algorithms studied were:
Dijkstra's algorithm with Binary Heaps [CGR96],
the A* algorithm proposed in the AI literature and analyzed by Sedgewick and Vitter [SV86],
a modification of the A* algorithm that we will describe below, and alluded to in [SV86].
A bidirectional version of Dijkstra's algorithm described in [Ma, LR89] and analyzed by [LR89]
was also considered. We briefly recall the A* algorithm and the modification proposed. Details
of these algorithms can be found in the Appendix.
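For concreteness, the following is a minimal, self-contained sketch of the one-to-one Dijkstra variant with a binary heap and lazy deletion (our own C++ illustration, not the TRANSIMS implementation; the Graph and Link types are assumed placeholders):

    #include <vector>
    #include <queue>
    #include <utility>
    #include <limits>

    struct Link { int to; double weight; };                 // one outgoing edge
    using Graph = std::vector<std::vector<Link>>;            // adjacency lists

    // One-to-one Dijkstra with a binary heap and lazy deletion:
    // returns the cost of a shortest s-t path, or +infinity if t is unreachable.
    double dijkstra(const Graph& g, int s, int t) {
        const double INF = std::numeric_limits<double>::infinity();
        std::vector<double> dist(g.size(), INF);
        using Item = std::pair<double, int>;                 // (label, node)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
        dist[s] = 0.0;
        heap.push({0.0, s});
        while (!heap.empty()) {
            auto [d, u] = heap.top();
            heap.pop();
            if (d > dist[u]) continue;                       // stale heap entry
            if (u == t) return d;                            // early termination
            for (const Link& e : g[u]) {
                double nd = d + e.weight;
                if (nd < dist[e.to]) {                       // relax the edge
                    dist[e.to] = nd;
                    heap.push({nd, e.to});
                }
            }
        }
        return INF;
    }

The early return on reaching t is what distinguishes the one-to-one computation from building the full shortest path tree.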
When the underlying network is (near) Euclidean, it is possible to improve the average case
performance of Dijkstra's algorithm by exploiting the inherent geometric information that is
ignored by the classical path finding algorithms. The basic idea behind improving the performance
of Dijkstra's algorithm is from [SV86, HNR68] and can be described as follows. In order
to build a shortest path from s to t, we label a fringe vertex x with the original distance estimate
from s to x (as before) plus the Euclidean distance from x to t. Thus we use global
information about the graph to guide our search for a shortest path from s to t. The resulting
A* algorithm runs much faster than Dijkstra's algorithm on typical graphs for the following intuitive
reasons: (i) the shortest path tree grows in the direction of t, and (ii) the search for the
shortest path can be terminated as soon as t is added to the shortest path tree.
We note that the above algorithms, only require that the Euclidean distance between any
two nodes is a valid lower bound on the actual shortest distance between these nodes. This is
typically the case for road networks; the link distance between two nodes in a road network
typically accounts for curves, bridges, etc. and is at least the Euclidean distance between the
two nodes. Moreover in the context of TRANSIMS, we need to find fastest paths, i.e. the cost
function used to calculate shortest paths is the time taken to traverse the link. Such calculations
need an upper bound on the maximum allowable speed. To adequately account for all these
inaccuracies, we determine an appropriate lower bound factor between Euclidean distance and
assumed delay on a link in a preprocessing step.
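To spell out the requirement in symbols (our own restatement of this preprocessing step; λ is not the paper's notation), the scaled Euclidean distance must remain a lower bound on the remaining cost:
\[
  h(x) \;=\; \lambda \cdot \mathrm{eucl}(x,t) \;\le\; d_w(x,t),
  \qquad
  \lambda \;=\; \min_{e=(u,v)\in E} \frac{w(e)}{\mathrm{eucl}(u,v)} .
\]
With any such λ (further reduced to account for the maximum allowable speed when w measures travel time), every path from x to t has weight at least λ·eucl(x, t) by the triangle inequality, so the label d(s, x) + h(x) never overestimates and A* still returns exact shortest paths.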
We can now modify this algorithm by giving an appropriate weight to the distance from
x to t. By choosing an appropriate multiplicative factor, we can increase the contribution of
the second component in calculating the label of a vertex. From a intuitive standpoint this
corresponds to giving the destination a high potential, in effect biasing the search towards
the destination. This modification will in general not yield shortest paths, nevertheless our
experimental results suggest that the errors produced can be kept reasonably small.
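A minimal sketch of the corresponding heap key is given below (our own code; how the paper's 0-100 "overdo factor" maps onto the weight used here is an assumption):

    #include <cmath>

    struct Point { double x, y; };

    inline double euclid(const Point& a, const Point& b) {
        return std::hypot(a.x - b.x, a.y - b.y);
    }

    // Heap key for a vertex v with tentative distance distFromSource.
    // overdoWeight = 0 gives plain Dijkstra, overdoWeight = 1 gives A* with an
    // admissible Euclidean lower bound (scaled by lowerBoundFactor), and
    // overdoWeight > 1 gives the (inexact) modified A* heuristic that biases
    // the search toward the destination.
    inline double heapKey(double distFromSource,
                          const Point& v, const Point& target,
                          double lowerBoundFactor, double overdoWeight) {
        return distFromSource
             + overdoWeight * lowerBoundFactor * euclid(v, target);
    }

Everything else in the search loop stays unchanged; only the ordering of the heap, and hence the set of expanded nodes, is affected.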
4 Summary of Results
We are now ready to summarize the main results and conclusions of this paper. As already
stated the main focus of the paper is the engineering and tuning of well known shortest path
algorithms in a practical setting. Another goal of this paper to provide reasons for and against
certain implementations from a practical standpoint. We believe that our conclusions along
with the earlier results in [ZN98, CGR96] provide practitioners a useful basis to select appropriate
algorithms/implementations in the context of transportation networks. The general re-
sults/conclusions of this paper are summarized below.
1. We conclude that the simple Binary heap implementation of Dijkstra's algorithm is a
good choice for finding optimal routes in real road transportation networks. Specifically,
we found that certain types of data structure fine tuning did not significantly improve the
performance of our implementation.
2. Our results suggest that heuristic solutions using the underlying geometric structure of
the graphs are attractive candidates for future research. Our experimental results motivated
the formulation and implementation of an extremely fast heuristic extension of the
basic A algorithm. The parameterized time/quality trade-off the algorithm achieves in
our setting appears to be quite promising.
3. Our study suggests that bidirectional variation of Dijkstra's algorithm is not suitable for
transportation planning. Our conclusions are based on two factors: (i) the algorithm is not
extensible to more general path problems and (ii) the running time does not outperform
the other exact algorithms considered.
5 Experimental Setup and Methodology
In this section we describe the computational results of our implementations. In order to anchor
research in realistic problems, TRANSIMS uses example cases called Case studies (See [CS97]
for complete details). This allows us to test the effectiveness of our algorithms on real life data.
The case study just concluded focused on Dallas Fort-Worth (DFW) Metropolitan area and was
done in conjunction with Municipal Planning Organization (MPO) (known as North Central
Texas Council of Governments (NCTCOG)). We generated trips for the whole DFW area for a
24 hour period. The input for each traveler has the following format: (starting time, starting
location, ending location). 4 There are 10.3 million trips over 24 hours. The number of nodes
4 This is roughly correct, the reality is more complicated, [NB97, CS97].
and links in the Dallas network is roughly 9863, 14750 respectively. The average degree of a
node in the network was 2.6. We route all these trips through the so-called focused network. It
has all freeway links, most major arterials, etc. Inside this network, there is an area where all
streets, including local streets, are contained in the data base. This is the study area. We initially
routed all trips between 5am and 10am, but only the trips which went through the study area
were retained, resulting in approx. 300 000 trips. These 300 000 trips were re-planned over and
over again in iteration with the micro-simulation(s). For more details, see, e.g., [NB97, CS97].
A 3% random sample of these trips were used for our computational experiments.
Preparing the network. The data received from DFW metro had a number of inadequacies
from the point of view of performing the experimental analysis. These had to be corrected
before carrying out the analysis. We mention a few important ones here. First, the network was
found to have a number of disconnected components (small islands). We did not consider (o; d)
pairs in different components. Second, a more serious problem from an algorithmic standpoint
was the fact that for a number of links, the length was less than the actual Euclidean distance
between the two end points. In most cases, this was due to an artificial convention used
by the DFW transportation planners (so-called centroid connectors always have length 10 m,
whatever the Euclidean distance), but in some cases it pointed to data errors. In any case,
this discrepancy disallows effective implementation of A*-type algorithms. For this reason
we introduce the notion of the "normalized" network: For all links with length less than the
Euclidean distance, we set the reported length to be equal to the Euclidean distance. Note here,
that we take the Euclidean distance only as a lower bound on shortest path in the network.
Recall that if we want to compute fastest path (in terms of time taken) instead of shortest,
we also have to make assumptions regarding the maximum allowable speed in the network
to determine a conservative lower bound on the minimal travel time between points in the
network.
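A sketch of this normalization step (our own illustration; the link record layout is assumed):

    #include <cmath>
    #include <vector>

    struct NetPoint { double x, y; };
    struct NetLink  { NetPoint from, to; double reportedLength; };

    // Ensure no link is shorter than the straight-line distance between its
    // endpoints, so that the Euclidean distance stays a valid lower bound
    // for A*-type searches.
    void normalizeLengths(std::vector<NetLink>& links) {
        for (NetLink& e : links) {
            double straight = std::hypot(e.to.x - e.from.x, e.to.y - e.from.y);
            if (e.reportedLength < straight)
                e.reportedLength = straight;   // e.g. the 10 m centroid connectors
        }
    }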
Preliminary experimental analysis was carried out for the following network modifications
that could be helpful in improving the efficiency of our algorithms. These include: (i) Removing
nodes with degrees less than 3: (Includes collapsing paths and also leaf nodes) (ii) Modifying
nodes of degree 3: (Replace it by a triangle)
Hardware and Software Support. The experiments were performed on a Sun UltraSparc CPU
with 250 MHz, running under Solaris 2.5. 2 gigabyte main memory were shared with 13 other
CPUs; our own memory usage was always 150 MB or less. In general, we used the SUN Workshop
CC compiler with optimization flag -fast. (We also performed an experiment on the influence
of different optimization options without seeing significant differences.) The advantage of
the multiprocessor machine was reproducibility of the results. This was due to the fact that the
operating system does not typically need to interrupt a live process; requests by other processes
were assigned to other CPUs.
Experimental Method 10,000 arbitrary plans were picked from the case study. We used the
timing mechanism provided by the operating system with granularity .01 seconds (1 tick). Experiments
were performed only if the system load did not exceed the number of available
processors, i.e. processors were not shared. As long as this condition was not violated during
the experiment, the running times were fairly consistent, usually within relative errors of 3%.
We used (a subset of) the following quantities, measured for a single computation or for a specific number of
computations, to derive the reported results:
(average) running time excluding i/o
number of fringe/expanded nodes
pictures of fringe/expanded nodes
maximum heap size
number of links and length of the path
Software Design We used the object oriented features as well as the templating mechanism
of C++ to easily combine different implementations. We also used preprocessor directives and
macros. As we do not want to introduce any unnecessary run time overhead, we avoid for
example the concept of virtual inheritance. The software system has classes that encapsulate
the following elements of the computation:
network (extensibility and different levels of detail lead to a small, linear hierarchy)
plans: (o; d) pairs and complete paths with time stamps
priority queue (heap)
labeling of the graph and using the priority queue
storing the shortest path tree
Dijkstra's algorithm
As expected, this approach leads to an apparent overhead of function calls. Nevertheless,
the compiler optimization detects most such overheads. Specifically, an earlier non templated
implementation achieved roughly the same performance as the corresponding instance of the
templated version. The results were consistent with similar observations when working on
mini-examples. The above explanation was also confirmed by the outcome of our experiments:
We observed, that reducing the instruction count does not reduce the observed running time as
might be expected. Assuming we would have a major overhead from high level constructs, we
would expect to see a strong influence of the number of instructions executed on the running
time observed.
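As a purely illustrative skeleton of this policy/template combination (hypothetical class names; the actual TRANSIMS classes are not shown in the text):

    // Combining implementations via templates lets the compiler inline across
    // component boundaries instead of paying for virtual dispatch.
    template <class Network, class PriorityQueue, class Labels>
    class ShortestPathRouter {
    public:
        explicit ShortestPathRouter(const Network& net)
            : net_(net), labels_(net.size()) {}

        // One one-to-one query; Heuristic supplies the heap-key bias
        // (plain Dijkstra, A*, or the overdo variant).
        template <class Heuristic>
        double route(int source, int target, const Heuristic& h);

    private:
        const Network& net_;
        PriorityQueue  queue_;
        Labels         labels_;
    };

    // Example instantiation (types are placeholders):
    // using Router = ShortestPathRouter<FocusedNetwork, BinaryHeap, LabelArray>;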
6 Experimental Results
Design Issues about Data Structures We begin with the design decisions regarding the data
structures used.
A number of alternative data structures were considered to investigate if they results in substantial
improvement in the running time of the algorithm. The alternatives tested included
the following. (i) Arrays versus Heaps , (ii) Deferred Update, (iii) Hash Tables for Storing
Graphs, (iv) Smart Label Reset (v) Heap variations, and (vi) struct of arrays vs. array of structs.
Appendix contains a more detailed discussion of these issues. We found, that indeed good
programming practice, using common sense to avoid unnecessary computation and textbook
knowledge on reasonable data structures are useful to get good running times. For the alternatives
mentioned above, we did not find substantial improvement in the running time. More
precisely, the differences we found were bigger than the unavoidable noise on a multi-user
computing environment. Nevertheless, they were all below 10% relative difference. A brief
discussion of various data structures tried can be found in the Appendix.
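For concreteness, alternative (vi) above contrasts the following two layouts (a generic sketch, not the project's actual node record):

    #include <vector>

    // Array of structs: all attributes of a node are adjacent in memory,
    // which is cache-friendly when the whole record is touched together.
    struct NodeAoS { double label; int predecessor; int heapIndex; };
    std::vector<NodeAoS> nodesAoS;

    // Struct of arrays: each attribute lives in its own array, which is
    // cache-friendly when a scan touches only one attribute at a time.
    struct NodesSoA {
        std::vector<double> label;
        std::vector<int>    predecessor;
        std::vector<int>    heapIndex;
    };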
Analysis of results. The plain Dijkstra, using static delays calculated from reported free flow
speeds, produced roughly 100 plans per second. Figure 1 illustrates the improvement obtained
by the A* modification. The numbers shown in the corner of the network snapshots give an
average (of 100 repetitions to destroy cache effects between subsequent runs) running time for
this particular O-D-pair, in system ticks. It also gives the number of expanded and fringe nodes.
Note that we have used different scales in order to clearly depict the set of expanded nodes.
Overall we found that A* on the normalized network (having removed network anomalies as
explained above) is faster than basic Dijkstra's algorithm by roughly a factor of 2.
Modified A* (Overdo Heuristic) Next consider the modified A* algorithm - the heuristic is
parameterized by the multiplicative factor used to weigh the Euclidean distance to the destination
against the distance from the source in the already computed tree. We call this the
overdo parameter. This approach can be seen as changing the conservative lower bound used
in the A* algorithm into an "expected" or "approximated" lower bound. Experimental evidence
suggests that even large overdo factors usually yield reasonable paths. Note that this
nice behavior might fail as soon as the link delays are not at all directly related to the link
length (Euclidean distance between the endpoints), as might be expected in a network with
link lengths proportional to travel times in a partially congested city. As a result it is natural to
discuss the time/quality trade-off of the heuristic as a function of the overdo parameter. Figure 2
summarizes the performance. In the figure the X-axis represents the overdo factor, being varied
from 0 to 100 in steps of 1. The Y-axis is used for multiple attributes which we explain below.
First, the Y axis is used to represent the average running time per plan. For this attribute, we
use the log scale with the unit denoting 10 milliseconds. As depicted by the solid line, the average
time taken without any overdo at all is 12.9 milliseconds per plan. This represents the
base measurement (without taking the geometric information into account, but including time
taken for computing of Euclidean distances). Next, for overdo value of 10 and 99 the running
times are respectively 2.53 and .308 milliseconds. On the other hand, the quality of the solution
produced by the heuristic worsens as the overdo factor is increased. We used two quantities
to measure the error - (i) the maximum relative error incurred over 10000 plans and (ii) the
Figure 1: Number of expanded nodes while running (i) Dijkstra's algorithm (ticks 2.40, #expanded 6179, #fringe 233)
and (ii) the A* algorithm (ticks 0.64, #expanded 1446, #fringe 316). The figures clearly show that the A*
heuristic is much more efficient in terms of the nodes it visits. In both graphs, the path is outlined as a dark line;
the fringe nodes and the expanded nodes are marked as dark spots; the underlying network is shown in light grey.
The source node is marked with a big circle, the destination with a small one. Notice the different scales of the figures.
Figure 2: Trade-off between the running time and the quality of paths as a function of the overdo parameter.
The X axis represents the overdo factor from 0 to 100. The Y axis is used to represent three quantities plotted
on a log scale: (i) running time, (ii) maximum relative error, and (iii) fraction of plans with relative error greater
than a threshold value. The threshold values chosen include 0% and 10%.
Figure 3: Trade-off between the running time and the quality of paths as a function of the overdo parameter on
the normalized network. The meaning of the axes and of the depicted quantities is the same as in the previous figure.
fraction of plans with errors more than a given threshold error. Both types of errors are shown
on the Y axis. The maximum relative error (plot marked with *) ranges from 0 for overdo factor 0
to 16% for overdo value 99. For the other error measure, we plot one curve for each threshold
error of 0%, 10%. The following conclusions can be drawn from our results.
1. The running times improve significantly as the overdo factor is increased. Specifically the
improvements are a factor 5 for overdo parameter 10 and almost a factor 40 for overdo
parameter 99.
2. In contrast, the quality of solution worsens much more slowly. Specifically, the maximum
error is no worse than 16% for the maximum overdo factor. Moreover, although the
number of erroneous plans is quite high (almost all plans are erroneous for overdo factor
of 99), most of them have small relative errors. To illustrate this, note that only around
15% of them have relative error of 5% or more.
3. The experiments and the graphs suggest an "optimal" value of overdo factor for which
the running time is significantly improved while the solution quality is not too bad. These
experiments are a step in trying to find an empirical time/performance trade-off as a
function of the overdo parameter.
4. As seen in Figure 3 the overall quality of the results shows a similar tradeoff if we switch
to the normalized network. The only difference is that the errors are reduced for a given
value of the overdo parameter.
5. As depicted in Figure 4, the number of plans worse than a certain relative error decreases
(roughly) exponentially with this relative error. This characteristic does not depend on
the overdo factor.
6. We also found that the near-optimal paths produced were visually acceptable and represented
a feasible alternative route guiding mechanism. This method finds alternative
paths that are quite different than ones found by the k-shortest path algorithms and seem
more natural. Intuitively, the k-shortest path algorithms, find paths very similar to the
overall shortest path, except for a few local changes.
7. The counterintuitive local maximum for overdo value 3.2 in Figure 3 can be explained by
the example depicted in Figure 5:
- the optimal length is 21,
- for overdo parameter 2 we get a solution of length 22. Here node B gets inserted with
a value of 24, opposed to values 29 and 25 of A and C. As these values for A and C
are bigger than the resulting path using B, this path stays final.
- for overdo parameter 4 we get again a solution of length 21. This stems from the fact
that now C gets inserted into the heap with the value 33, whereas B is inserted with a larger value.
Figure 4: The distribution of wrong plans for different overdo parameters (1.1, 1.2, 1.5, 2.0, and 3.0) in the
normalized network for Dallas Ft-Worth. In the X direction we change the "notion of a bad plan" in terms of
relative error; in the Y direction we show the fraction of plans that is classified to be "bad" with the current
notion of "wrong".
It is easy to see that such examples can be scaled and embedded into larger graphs. Since
the maximum error stems from one particular shortest path question, it is not too surprising
to encounter such a situation.
Peculiarities of the network and its Effect In the context of TRANSIMS, where we needed to
find one-to-one shortest paths, we observed a possibly interesting influence of the underlying
network and its geometric structure on the performance of the algorithms. We expect similar
characteristics to be visible in other road networks as well, possibly modified by the existence of
rivers or other similar obstacles. The network is almost Euclidean and (nearly) homogeneous,
which justifies the following intuition: Dijkstra's algorithm explores the nodes of a network in
a circular fashion. During the run we see roughly a disc of expanded nodes and a small ring
of fringe nodes (nodes in the heap) around them. For planar and near planar graphs it has
been observed that the heap size is O(sqrt(n)) with high probability. This provides one possible
explanation of why the maximum heap size in our experiments was close to 500. In particular,
even if the area of the circular region (in number of nodes) reaches the size of the network (10 000),
the ring of fringe nodes is roughly proportional to the circumference of the circular region (and
thus roughly proportional to sqrt(n)). We believe that this homogeneous and almost Euclidean
structure is also the reason for our observations about the modified A* algorithm. The above
discussion provides at least an intuitive explanation of why special algorithms such as A* might
perform better on Euclidean and close to Euclidean networks.
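To make the preceding discussion concrete, the following C fragment sketches how the priority key of the
modified A* search could incorporate the overdo parameter. It is a minimal illustration written by us, not
code from TRANSIMS, and the exact normalization of the overdo factor used in the experiments may differ.

/* Sketch: priority key of the modified A* search with an overdo parameter.
   overdo = 0 gives plain Dijkstra, overdo = 1 the classical admissible A*, and
   larger values trade solution quality for speed, as measured in Figures 2 and 3. */
#include <math.h>

typedef struct { double x, y; } Point;

static double euclid(Point a, Point b)            /* Euclidean lower bound */
{
    double dx = a.x - b.x, dy = a.y - b.y;
    return sqrt(dx * dx + dy * dy);
}

/* Key under which a labeled node is inserted into the priority queue. */
double astar_key(double dist_from_origin, Point node, Point destination, double overdo)
{
    return dist_from_origin + overdo * euclid(node, destination);
}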
Figure
5: Example network having a local maximum for the computed path length for increasing
overdo parameter; the edges are marked with (Euclidean distance, reported length).
Effect of Memory access times In our experiments we observed that changes in the implementation
of the priority queue have minimal influence on the overall running time. In contrast,
instruction count profiling (done with a program called "quantify") pinpoints the priority
queue as the main contributor to the overall number of instructions. Combining these two
facts, we conclude that the running time we observe is heavily dependent on the time it takes
to access the parts of the graph representation that do not fit into the cache. Thus the processor spends a
significant amount of time waiting.
We expect further improvement of the running time by concentrating on the memory ac-
cesses, for example by making the graph representation more compact, or by optimizing accesses
by choosing the memory location of a node according to the topology of the graph. In general, the
conclusions of our paper motivate the need for design and analysis of algorithms that take
memory access latency into account.
7 Discussion of Results
First, we note that the running times for the plain Dijkstra are reasonable as well as sufficient in
the context of the TRANSIMS project. Quantitatively, this means the following: TRANSIMS is
run in iterations between the micro-simulation, and the planner modules, of which the shortest
path finding routine is one part. We have recently begun research for the next case study
project for TRANSIMS. This case study is going to be done in Portland, Oregon, and was chosen
to validate our ideas for multi-modal, time-dependent networks with public
transportation following a scheduled movement. Our initial study suggests that we now take
sec/trip as opposed to .01 sec/trip in the Dallas Ft-Worth case [Ko98]. All these extensions
are important from the standpoint of finding algorithms for realistic transportation routing
problems. We comment on this in some detail below. Multi-modal networks are an integral
part of most MPO's. Finding optimal (or near-optimal) routes in this environment therefore
constitutes a real problem. In the past, solutions for routing in such networks were handled
Figure
Figure illustrating two instances of Dijkstra's algorithm with a very high overdo
parameter, started at the origin and the destination, respectively. One of them actually creates the shown
path; the beginning of the other path is visible as a "cloud" of expanded nodes.
in an ad hoc fashion. The basic idea (discussed in detail in [BJM98]) here is to use regular
expressions to specify modal constraints. In [BJM98, JBM98], we have proposed models and
polynomial time algorithms to solve this and related problems. Next consider another important
extension - namely to time dependent networks. We assume that the edge lengths are
modeled by monotonic non-decreasing, piecewise linear functions. These are called the link
traversal functions. For a function f associated with a link (a, b), f(x) denotes the time
of arrival at b when starting at time x at a. By using an appropriate extension of the basic
Dijkstra's algorithm, one can calculate optimal paths in such networks. Our preliminary results
on these topics in the context of TRANSIMS can be found in [Ko98, JBM98]. The Portland
network we are intending to use has about 120 000 links and about 80 000 nodes. Simulating
hours of traffic on this network will take about 24 hours computing time on our 14 CPU ma-
chine. There will be about 1.5 million trips on this network. Routing all these trips should take
9 days on a single CPU and thus less than 1 day on our 14 CPU
machine. Since re-routing typically concerns only 10% of the population, we would need less
than 3 hours of computing time for the re-routing part of one iteration, still significantly less
than the micro-simulation needs.
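As an illustration of the link traversal functions mentioned above, the following C sketch (our own, with
illustrative data structures) evaluates a monotonic non-decreasing, piecewise linear function f for a link
(a, b); a time-dependent Dijkstra then relaxes the link by comparing the current label of b with
traverse(f, label(a)). The behaviour past the last breakpoint is an assumption made here for completeness.

/* Sketch: piecewise linear, monotonic non-decreasing link traversal function.
   f(x) = arrival time at b when leaving a at time x. */
typedef struct {
    int     n;     /* number of breakpoints                 */
    double *dep;   /* departure times, strictly increasing  */
    double *arr;   /* corresponding arrival times           */
} LinkTraversal;

double traverse(const LinkTraversal *f, double x)
{
    int i;
    if (x <= f->dep[0]) return f->arr[0];
    for (i = 1; i < f->n; i++) {
        if (x <= f->dep[i]) {   /* linear interpolation between breakpoints i-1 and i */
            double t = (x - f->dep[i - 1]) / (f->dep[i] - f->dep[i - 1]);
            return f->arr[i - 1] + t * (f->arr[i] - f->arr[i - 1]);
        }
    }
    /* past the last breakpoint: assume a constant travel time (our assumption) */
    return x + (f->arr[f->n - 1] - f->dep[f->n - 1]);
}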
Our results and the constraints placed by the functionality requirement of the overall system
imply that the bidirectional version of Dijkstra's algorithm is not a viable alternative. Two
reasons for this are: (i) the algorithm cannot be extended in a direct way to path problems in
multi-modal and time-dependent networks, and (ii) the running time of A* is better than that of the
bidirectional variant; the modified A* is much faster.
Conclusions
The computational results presented in the previous sections demonstrate that Dijkstra's algorithm
for finding shortest paths is a viable candidate for computing route plans in the route
planning stage of a TRANSIMS-like system. Thus such an algorithm should be considered
even for ITS-type projects in which routes need to be found by on-board vehicle navigation
systems.
The design of TRANSIMS led us to consider one-to-one shortest path algorithms, as opposed
to algorithms that construct the complete shortest-path tree from a given starting (or
destination) point. As is well known, the worst-case complexity of one-to-one shortest path
algorithms is the same as that of one-to-all shortest path algorithms. Yet, for our practical
problem, this worst-case equivalence is not relevant. First, a one-to-one algorithm can stop as soon as the destination
is reached, saving computer time especially when trips are short (which often is the case in our
setting). Second, since our networks are roughly Euclidean, one can use this fact for heuristics
that reduce computation time even more. The A* algorithm with an appropriate overdo parameter
appears to be an attractive candidate in this regard.
Making the algorithms time-dependent in all cases slowed down the computation by a
factor of at most two. Since we are using a one-to-one approach, adding extensions that for
example include personal preferences (e.g. mode choice) is straightforward; preliminary tests
let us expect slow-downs by a factor of 30 to 50. This significant slowdown was caused by a
number of factors including the following:
(i) The network size increased by a factor of 4, caused by the addition and splitting of
nodes and/or edges and by adding public transportation. This was done to account for
activity locations, parking locations, adding virtual links joining these locations, etc.
(ii) The time dependency functions used to represent transit schedules and varying speed of
street traffic implied increased memory and computational requirements. Initial estimates
are that the memory requirement increases by a factor of 10 and the computational time
increases by a factor of 5. Moreover, different types of delay functions were used for inducing
a qualitatively different exploration of the network by the algorithm. This seems to
prohibit keeping a small number of representative time dependency functions.
(iii) The algorithm for handling modal constraints works by making multiple copies of the
original network. The algorithm is discussed in [JBM98] and preliminary computational
results are discussed in [Ko98]. This increased the memory requirement by a factor of 5
and computation time by an additional factor of 5.
Extrapolations of the results for the Portland case study show that, even with this slowdown
the route planning part of TRANSIMS still uses significantly less computing time than the
micro-simulation.
Finally, we note that under certain circumstances the one-to-one approach chosen in this
paper may also be useful for ITS applications. This would be the case when customers would
require customized route suggestions, so that re-using a shortest path tree from another calculation
may no longer be possible.
Acknowledgments
Research supported by the Department of Energy under Contract W-7405-
ENG-36. We thank the members of the TRANSIMS team in particular, Doug Anson, Chris
Barrett, Richard Beckman, Roger Frye, Terence Kelly, Marcus Rickert, Myron Stein and Patrice
Simon for providing the software infrastructure, pointers to related literature and numerous
discussions on topics related to the subject. The second author wishes to thank Myron Stein
for long discussion on related topics and for his earlier work that motivated this paper. We
also thank Joseph Cheriyan, S.S. Ravi, Prabhakar Ragde, R. Ravi and Aravind Srinivasan for
constructive comments and pointers to related literature. Finally, we thank the referees for
helpful comments and suggestions.
--R
The Design and Analysis of Computer Algo- rithms
Formal Language Constrained Path Problems to be presented at the Scandinavian Workshop on Algorithmic Theory
An Operational Description of TRANSIMS
Shortest Path algorithms: Theory and Experimental Evaluation
Computational Study of an Improved Shortest Path Algorithm
"Route Finding in Street Maps by Computers and People,"
"A Formal Basis for the Heuristic Determination of Minimum Cost Paths,"
Highway Research Board
Experimental Analysis of Routing Algorithms in Time Dependent and Labeled Networks
"A Bidirectional Shortest Path Algorithm with Good Average Case Behavior,"
"Approximation schemes for the restricted shortest path problem,"
"A Shortest Path Algorithm with Expected Running time O( p V log V ),"
Shortest Path Algorithms: A Computational Study with C Programming Language
Using Microsimulation Feedback for trip Adaptation for Realistic Traffic in Dallas
Experiences with Iterated Traffic Microsimulations in Dallas
Implementation and Efficiency of Moore Algorithm for the Shortest Route Problem
Shortest Path Algorithms: Complexity
"Bidirectional Searching,"
"Shortest Paths in Euclidean Graphs,"
"Finding Realistic Detour by AI Search Techniques,"
Shortest Path Algorithms: An Evaluation using Real Road Networks Transportation Science
--TR
Shortest paths in Euclidean graphs
Shortest path algorithms: a computational study with the C programming language
Network flows
Approximation schemes for the restricted shortest path problem
Shortest paths algorithms
The Design and Analysis of Computer Algorithms
Formal Language Constrained Path Problems
Shortest Path Algorithms
--CTR
Michael Balmer , Nurhan Cetin , Kai Nagel , Bryan Raney, Towards Truly Agent-Based Traffic and Mobility Simulations, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.60-67, July 19-23, 2004, New York, New York
L. Fu , D. Sun , L. R. Rilett, Heuristic shortest path algorithms for transportation applications: state of the art, Computers and Operations Research, v.33 n.11, p.3324-3343, November 2006 | shortest-paths algorithms;transportation planning;design and analysis of algorithms;network design;experimental analysis |
347863 | Automatic sampling with the ratio-of-uniforms method. | Applying the ratio-of-uniforms method for generating random variates results in very efficient, fast, and easy-to-implement algorithms. However parameters for every particular type of density must be precalculated analytically. In this article we show, that the ratio-of-uniforms method is also useful for the design of a black-box algorithm suitable for a large class of distributions, including all with log-concave densities. Using polygonal envelopes and squeezes results in an algorithm that is extremely fast. In opposition to any other ratio-of-uniforms algorithm the expected number of uniform random numbers is less than two. Furthermore, we show that this method is in some sense equivalent to transformed density rejection. | Introduction
There exists a large literature on generation methods for standard continuous
distributions; see, for example, Devroye (1986). These algorithms are often especially
designed for a particular distribution and tailored to the features of each
density. However in many situations the application of standard distributions is not
adequate for a Monte-Carlo simulation. Besides sheer brute force inversion (that
is, tabulate the distribution function at many points), several universal methods
for large classes of distributions has been developed to avoid the design of special
algorithms for these cases. Some of these methods are either very slow (e.g. Devroye
(1984)) or need a slow set-up step and large tables (e.g. Ahrens and Kohrt (1981),
Marsaglia and Tsang (1984), and Devroye (1986, chap. VII)).
Recently two more efficient methods have been proposed. The transformed
density rejection by Gilks and Wild (1992) and H-ormann (1995) is an accep-
tance/rejection technique that uses the concavity of the transformed density to
generate a hat function automatically. The user only needs to provide the probability
density function and perhaps the (approximate) location of the mode. A
table method by Ahrens (1995) also is an acceptance/rejection method, but uses a
piecewise constant hat such that the area below each piece is the same. A region of
immediate acceptance makes the algorithm fast when a large number of constant
pieces is used. The tail region of the distribution is treated separately.
The ratio-of-uniforms method introduced by Kinderman and Monahan (1977)
is another flexible method that can be adjusted to a large variety of distributions.
It has become a popular transformation method to generate non-uniform random
variates, since it results in exact, efficient, fast and easy to implement algorithms.
Typically these algorithms have only a few lines of code (e.g. Barabesi (1993) gives
a survey and examples of FORTRAN codes for several standard distributions). It
is based on the following theorem.
1991 Mathematics Subject Classification. Primary: 65C10 random number generation, Sec-
ondary: 65U05 Numerical methods in probability and statistics; 11K45 Pseudo-random numbers,
Monte Carlo methods.
Key words and phrases. non-uniform random variates, ratio-of-uniforms, universal method,
adaptive method, patchwork rejection, continuous distributions, log-concave distributions,
concave distributions, transformed density rejection.
Theorem 1 (Kinderman and Monahan 1977). Let X be a random variable with
density function f(x) = g(x) / \int g(x) dx, where g(x) is a positive integrable function
with support (x_0, x_1), not necessarily finite. If (V, U) is uniformly distributed in
A_g = {(v, u) : 0 < u <= sqrt(g(v/u)), x_0 < v/u < x_1}, then X = V/U has probability
density function f(x).
For sampling random points uniformly distributed in A g rejection from a convenient
enveloping region R g is used. The basic form of the ratio-of-uniforms method
is given by algorithm rou.
Algorithm rou
Require: density f(x); enveloping region R
1: repeat
2: Generate random point (V; U) uniformly distributed in R.
3: X <- V/U.
4: until U^2 <= f(X).
5: return X .
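For illustration, a direct C translation of algorithm rou with rejection from a bounding rectangle might
look as follows. The rectangle bounds and the U(0,1) generator are assumed to be supplied by the caller;
this is a sketch and not the implementation discussed later in section 5.

/* Sketch of algorithm rou: rejection from the rectangle [v_min, v_max] x (0, u_max]. */
double rou_sample(double (*f)(double),        /* quasi-density f(x)     */
                  double v_min, double v_max, /* bounds enclosing A     */
                  double u_max,
                  double (*uniform)(void))    /* U(0,1) generator       */
{
    for (;;) {
        double u = u_max * uniform();
        double v = v_min + (v_max - v_min) * uniform();
        if (u <= 0.0) continue;               /* avoid division by zero */
        {
            double x = v / u;
            if (u * u <= f(x)) return x;      /* acceptance test, step 4 */
        }
    }
}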
Usually the input in rou is prepared by the designer of the algorithm for each
particular distribution. To reduce the number of evaluations of the density function
in step 4, squeezes are used. It is obvious that the performance of this simple
algorithm depends on the rejection constant, i.e. on the ratio |R|/|A|, where |R|
denotes the area of region R. Kinderman and Monahan (1977) and others use
rejection from the minimal bounding rectangle, i.e. the smallest possible rectangle
containing A_g. This basic algorithm has been improved in several ways: a better
fitting enclosing region decreases the rejection constant. Possible
choices are parallelograms (e.g. Cheng and Feast (1979)) or quadratic bounding
curves (e.g. Leva (1992)). Often it is convenient to decompose A into a countable
set of non-overlapping subregions ("composite ratio-of-uniforms method", Robertson
and Walls (1980) give a simple example). Dagpunar (1988, p. 65) considers the
possibility of an enclosing polygon.
In this paper we develop a new algorithm that uses polygonal envelopes and
squeezes. Random variates inside the squeeze are generated by mere inversion and
therefore, in contrast to any other ratio-of-uniforms method, less than two uniform
random numbers are required. For a large class of distributions, including all log-concave
distributions, it is possible to construct envelope and squeeze automatically.
Moreover we show that the new algorithm is in some sense equivalent to transformed
density rejection.
The new method has several advantages:
ffl Envelopes and squeezes are constructed automatically. Only the probability density
function is necessary.
ffl Only 1 + O(ρ) uniform random numbers are necessary on average, where ρ > 0 can be made arbitrarily
small.
ffl For small ρ the method is close to inversion and thus the resulting random variates
can be used for variance reduction techniques. Moreover the structure of the
resulting random variates is similar to that of the underlying uniform
random number generator. Hence the non-uniform random variates inherit its
quality properties.
1 Moreover the method has been extended: Wakefield, Gelfand, and Smith (1991) replaces the
function by a more general strictly increasing differentiable function q(u).
Stadlober (1989, 1990) gives a modification for discrete distributions.
Jones and Lunn (1996) embeds this method into a "general random variate generation framework".
Wakefield et al. (1991) and Stef-anescu and V-aduva (1987) apply this method to the generation
of multivariate distributions.
ffl It avoids some possible defects in the quality of the resulting pseudo-random
variates that have been reported for the ratio-of-uniforms method (see H-ormann
1994a; H-ormann 1994b).
ffl It is the first ratio-of-uniforms method and the first implementation of transformed
density rejection that requires less than two random numbers.
In section 2 we give an outline of this new approach and in section 4 we discuss
the problem of getting a proper envelope for the region R. Section 5 describes
the algorithm in detail. Section 3 shows that this algorithm is applicable for all T -
concave densities, with T(x) = -1/sqrt(x). Remarks on the quality of random numbers
generated with the new algorithm are given in section 6.
2. The method
Enveloping polygons. We are given a distribution with probability density function
R
g(x)dx with convex set A g . Notice that g must be continuous
and bounded since otherwise A g were not convex. To simplify the development
of our method we first assume unbounded support for g. (This restriction will be
dropped later.)
For such a distribution it is easy to make an enveloping polygon: Select a couple
of points c_i, i = 1, ..., n, on the boundary of A and use the tangents at these
points as edges of the enclosing polygon P e (see figure 1). We denote the vertices
of P e by m i . These are simply the intersection points of the tangents. Obviously
our choice of the construction points of the tangents has to result in a bounded
polygon P e . The procedure even works if the tangents are not unique for a point
(v, u), i.e. if g(x) is not differentiable in x = v/u. Furthermore it is very simple to
construct squeezes: Take the inside of the polygon P s with vertices c i .
Figure
1. Polygonal envelope and squeeze for convex set A g .
Sampling from the enveloping polygon. Notice that the origin (0; 0) is always
contained in the polygon P e . Moreover every straight line through the origin
corresponds to an x = v/u; thus its intersection with A is always connected.
Therefore we use c_0 = (0, 0) for the first construction point and the v-axis as its
tangent. To sample uniformly from the enclosing polygon we triangulate P_e and
P_s, making segments S_i, i = 0, ..., n. Figure 2 illustrates the
situation. Segment S_i has the vertices c_0, c_i, m_i and c_{i+1}, except for the
last segment. Each segment is divided into the triangle S_i^s inside the squeeze (dark
shaded) and a triangle S_i^o outside (light shaded). Notice that the segments S_0 and
S_n have only three vertices and no triangles S_0^s and S_n^s.
Figure
2. Triangulation of enveloping polygon
To generate a random point uniformly distributed in P e , we first have to sample
from the discrete distribution with probability vector proportional to (|S_0|, |S_1|, ..., |S_n|)
to select a segment, and further a triangle S_i^o or S_i^s. This can be done by
the following routine.
Algorithm get segment
Require: list of segments S_0, ..., S_n.
1: Generate R ~ U(0, 1).
2: Find the smallest k such that sum_{i<=k} |S_i| >= R |P_e|.
3: if R |P_e| - sum_{i<k} |S_i| <= |S_k^s| then
4: return triangle S_k^s.
5: else
6: return triangle S_k^o.
For step 2 indexed search (or guide tables) is an appropriate method (Chen and
Asau (1974), see also Devroye (1986, §III.2.4)).
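A guide table for step 2 could be realized as in the following C sketch (our notation, not a reference
implementation): acum[k] stores the accumulated areas |S_0| + ... + |S_k|, and guide[j] the first segment
whose accumulated area reaches fraction j/table_size of the total, so that only a short linear scan remains.

/* Sketch of indexed search (guide table), assuming 0 <= R < 1. */
int guided_search(const double *acum, int n_segments,
                  const int *guide, int table_size, double R)
{
    double target = R * acum[n_segments - 1];
    int k = guide[(int)(R * table_size)];  /* jump close to the answer */
    while (acum[k] < target)               /* finish with a short scan */
        k++;
    return k;
}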
Uniformly distributed points in a triangle (v_1, v_2, v_3) can be generated by the
following simple algorithm (Devroye 1986, p. 570):
Algorithm triangle
Require: triangle (v_1, v_2, v_3).
1: Generate R_1, R_2 ~ U(0, 1).
2: if R_1 + R_2 > 1 then set R_1 <- 1 - R_1 and R_2 <- 1 - R_2.
3: return (1 - R_1 - R_2) v_1 + R_1 v_2 + R_2 v_3.
For sampling from S s
this algorithm can be much improved. Every point in such
a triangle can immediately be accepted without evaluating the probability density
function and thus we are only interested in the ratio of the components. Since the
triangle S_i^s has vertex c_0 = (0, 0), we arrive at
X = (c_{i,1} + R (c_{i+1,1} - c_{i,1})) / (c_{i,2} + R (c_{i+1,2} - c_{i,2})),
where c_{i,j} is the j-th component of vertex c_i, and R again is a (0, 1)-
uniform random variate by the ratio-of-uniforms theorem, since 0 <= R <= 1
(Kinderman and Monahan 1977). Notice that we save one uniform random number
in the domain P_s by this method. Furthermore we can reuse the random number
R from routine get segment without risk by rescaling it:
R <- (R |P_e| - sum_{i<k} |S_i|) / |S_k^s|.
Sampling from P s can then be seen as inversion from the cumulated distribution
function defined by the boundary of the squeeze polygon. Thus for a ratio jP s j=jP e j
close to 1 we have almost inversion for generating random variates. The inversion
method has two advantages and is thus favored by the simulation community (see
Bratley, Fox, and Schrage (1983)): (1) The structure of the generator is simple and
can easily be investigated (see section 6). (2) These random variates can be used
for variance reduction techniques.
Expected number of uniform random numbers. Let ρ = |P_e \ P_s| / |P_e|.
Then we need 1 + ρ uniform random numbers on average
for generating one ratio v/u. Since we have to reject this ratio if (v, u) is not in A, and
since |A| >= |P_s| = (1 - ρ) |P_e|, we find for the expected number of uniform random numbers per generated
non-uniform random variate E <= (1 + ρ)/(1 - ρ). Notice that by a proper choice of the
construction points, ρ can be made arbitrarily small.
Bounded domain for g. If the support of g is bounded, i.e. x_l < x < x_r, then the situation is nearly the
same. We have to distinguish between two cases:
(1) The limit of g(x) exists for the limit point x_i. We then use x_i as construction
point and the respective triangular segment S_0 or S_n is not necessary.
(2) Otherwise we can restrict the triangular segment S_0 or S_n, i.e. we use the
tangent line v = x_l u (resp. v = x_r u) instead of the v-axis. Notice
that we then have different tangent lines at c_0 for S_0 and S_n.
Adding a construction point. To add a new point for a given ratio x = v/u,
we need the point (c_v, c_u) on the "outer boundary" of A and the tangent line of A at this
point. These are given by the positive root of u^2 = g(v/u), i.e. c_u = sqrt(g(x)) and
c_v = x c_u, and by the total differential of u^2 = g(v/u), which yields the
tangent: a_v v + a_u u = a_v c_v + a_u c_u, where a_v = -g'(x) and a_u = 2 g(x) + x g'(x).
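In code, the construction of such a point could look as follows; this is our own C sketch, derived from the
formulas above, and any common rescaling of (a_v, a_u) describes the same tangent line. The functions g
and dg (the derivative of g) are assumed to be supplied by the user.

#include <math.h>

typedef struct { double c_v, c_u; double a_v, a_u; } ConstructionPoint;

/* Boundary point of A_g and tangent coefficients for the ratio x = v/u. */
ConstructionPoint construction_point(double x, double (*g)(double), double (*dg)(double))
{
    ConstructionPoint p;
    double gx = g(x);
    p.c_u = sqrt(gx);                 /* positive root of u^2 = g(v/u)              */
    p.c_v = x * p.c_u;
    p.a_v = -dg(x);                   /* tangent: a_v v + a_u u = a_v c_v + a_u c_u */
    p.a_u = 2.0 * gx + x * dg(x);
    return p;
}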
3. Ratio-of-uniforms and transformed density rejection
Transformed density rejection. One of the most efficient universal methods is
transformed density rejection, introduced in Devroye (1986) and under a different
name in Gilks and Wild (1992), and generalized in H-ormann (1995). This accep-
tance/rejection technique uses the concavity of the transformed density to generate
a hat function and squeezes automatically by means of tangents and secants. The
user only needs to provide the density function and perhaps the (approximate) location
of the mode. It can be utilized for any density f where a strictly increasing,
differentiable transformation T exists, such that T (f(x)) is concave (see H-ormann
(1995) for details). Such a density is called T-concave; log-concave densities are an
example with T(x) = log(x). Figure 3 illustrates the situation for the standard normal
distribution and the transformation T(x) = log(x). The left hand side shows
the transformed density with three tangents. The right hand side shows the density
function with the resulting hat. Squeezes are drawn as dashed lines. Evans and
Swartz (1998) has shown that this technique is even suitable for arbitrary densities
provided that the inflection points of the transformed density are known.
Figure
3. Construction of a hat function for the normal density
utilizing transformed density rejection.
Densities with convex region A. Stadlober (1989) and Dieter (1989) have
clarified the relationship of the ratio-of-uniforms method to the ordinary accep-
tance/rejection method. But there is also a deeper connection to the transformed
density rejection, that gives us a useful characterization for densities with convex
region A g . We first provide a proof of theorem 1.
Proof of theorem 1. Consider the transformation
(v, u) -> (x, y) = (v/u, u^2), R × (0, ∞) -> R × (0, ∞). (5)
Since the Jacobian of this transformation is 2, the joint probability density function
of X and Y is given by w(x, y) = 1/(2 |A_g|) for 0 < y <= g(x), and w(x, y) = 0
otherwise. Thus X has marginal density w_1(x) = g(x)/(2 |A_g|).
Consequently
|A_g| = (1/2) \int g(x) dx and w_1(x) = g(x) / \int g(x) dx = f(x), i.e. X has
probability density function f(x).
Transformation (5) maps A_g one-to-one onto B_g = {(x, y) : 0 < y <= g(x)},
i.e. the set of points between the graph of g(x) and the x-axis. Moreover
the "outer boundary" of A_g, {(v, u) : u = sqrt(g(v/u))}, is
mapped onto the graph of g(x).
Theorem 2. A_g is convex if and only if g(x) is T-concave with transformation
T(x) = -1/sqrt(x).
Proof. Since T(x) = -1/sqrt(x) is strictly monotonically increasing, the transformation
(x, y) -> (x, T(y)) maps B_g one-to-one onto C_g = {(x, y) : y <= T(g(x))},
i.e. the region below the transformed density. Hence, composed with (5), the map
(v, u) -> (x, y) = (v/u, -1/u), R × (0, ∞) -> R × (-∞, 0), (6)
maps A_g one-to-one onto C_g. Notice that g is T-concave if and only if C_g is con-
vex. Thus it remains to show that A_g is convex if and only if C_g is convex, and
consequently that straight lines remain straight lines under transformation (6).
Let a x + b y = d be a straight line in C_g. Then a (v/u) - b/u = d, i.e. a v - d u = b,
a straight line in A_g. Analogously we find for a straight line a v + b u = d in A_g
the line a x + d y = -b in C_g.
Remark 3. By theorem 2 the new universal ratio-of-uniforms method is in some
sense equivalent to transformed density rejection. It is a different method to generate
points uniformly distributed in the region below the hat function. But in
contrast to the new method, transformed density rejection always needs more than
two uniform random numbers. A similar approach for transformed density re-
jection, i.e. decomposing the hat function into the squeeze (region of immediate
acceptance) and the region between squeeze and hat, does not work well. Sampling
from the second part is very awkward and prone to numerical errors (Hörmann
1999).
Since every log-concave density is T-concave with T(x) = -1/sqrt(x) (Hörmann
1995), our algorithm can be applied to a large class of distributions. Examples are
given in table 1. The given conditions on the parameters imply T-concavity on the
support of the densities. However, the densities are T-concave for a wider range
of their parameters on a subset of their support. E.g. the density of the gamma
distribution with shape parameter a is T-concave for all a > 0 for x above a suitable bound.
Distribution Density Support T -concave for
Normal e \Gammax 2 =2 R
Log-normal 1=x exp(\Gamma
pExponential - e \Gamma- x [0;
Beta x
Weibull x
Perks 1=(e x
Gen. inv. Gaussian x
Pearson VI x
Planck x a =(e
Burr x
Snedecor's F x
Table
1. T -concave densities (normalization constants omitted)
4. Construction points
The performance of the new algorithm depends on a small ratio ρ = |P_e \ P_s| / |P_e|
and thus on the choice of the construction points for the tangents of
the enveloping polygon. There are three possible solutions: (1) simply choose
equidistributed points, (2) use an adaptive method, or (3) use optimal points. It
is obvious that setup time is increasing and marginal generation time is decreasing
from (1) to (3) for a given number of construction points.
Equidistributed points. The simplest method is to choose points x_i = tan(φ_i) with
angles φ_i equidistributed in (-π/2, π/2). (7)
If the density function has bounded domain, (7) has to be modified so that the angles
are equidistributed in (φ_l, φ_r), where tan(φ_l) and tan(φ_r) are the left and right boundary of the domain (see also
section 2).
Numerical simulations with several density functions have shown that this is an
acceptably good choice of construction points for several distributions where the
ratio of length and width of the minimal bounding rectangle is not too far from
one.
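A possible realization of this rule is sketched below in C (our code); the spacing of the angles follows the
assumption stated with (7) and might differ slightly from the choice used in the reported simulations.

#include <math.h>

/* n construction points x_i = tan(phi_i), angles equidistributed in (phi_l, phi_r);
   use phi_l = -pi/2, phi_r = pi/2 for an unbounded domain. */
void equidistributed_points(double phi_l, double phi_r, int n, double *x)
{
    int i;
    for (i = 0; i < n; i++) {
        double phi = phi_l + (phi_r - phi_l) * (i + 1) / (double)(n + 1);
        x[i] = tan(phi);
    }
}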
Adaptive rejection sampling. Gilks and Wild (1992) introduces the ingenious
concept of adaptive rejection sampling for the problem of finding appropriate construction
points for the tangents for the transformed density rejection method.
Adopted to our situation it works in the following way: Start with (at least) two
points on both sides of the mode and sample points from the enveloping polygon
P_e. Add a new construction point at x = v/u whenever a sampled point (v, u) falls
outside the squeeze P_s, until a stopping criterion is fulfilled, e.g. the maximal number of
construction points or the aimed ratio |P_s|/|P_e| is reached. To ensure that the
starting polygon P e is bounded, a construction point at (or at least close to) the
mode should be used as a third starting point.
Sampling a point in the domain P e n P s is much more expensive than sampling
from the squeeze region. Firstly the generation of a random point requires more
random numbers and multiplications; secondly we have to evaluate the density
and check the acceptance condition. Thus we have to minimize the ratio
ρ = |P_e \ P_s| / |P_e|, which is done perfectly well by adaptive rejection sampling, since by
this method the region A is automatically approximated by envelope and squeeze
polygon. The probability for adding a new point in a segment S_i depends on
the ratio |S_i^o| / |P_e|, i.e. on the probability to fall into S_i^o. Hence the adaptive
algorithm tends to insert a new construction point where it is "more necessary".
Obviously the ratio ρ_n is a random variable that converges to 0 almost surely
when the number of construction points n tends to infinity; a simple consideration
supports this (Leydold and Hörmann 1998). Figure 4 shows the result of a
simulation for the standard normal distribution with (non optimal) starting points
(50 000 samples). ρ_n is plotted against the number n of construction
points. The range of ρ_n is given by the light shaded area, 90%- and 50%-percentiles
are given by dark shaded areas, the median by the solid line.
Figure
4. Convergence of the ratio ρ_n for the
standard normal distribution with the starting points given in the text
(50 000 samples).
We have run simulations with other distributions and starting values and have
made the observation that convergence is even faster for other (non-normal) dis-
tributions. However, analytical investigations would be interesting; upper bounds for the
expected value of ρ_n are an open problem.
Optimal construction points. By theorem 2 the area between hat and squeeze
of the transformed density rejection method is mapped one-to-one onto the region
P_e \ P_s between envelope and squeeze of the new method.
Thus we can use methods for computing optimal construction points for
transformed density rejection for finding optimal envelopes for the new algorithm.
If only three construction points are used, see Hörmann (1995). If more points are
required, Derflinger and Hörmann (1998) describe a very efficient method. However,
some modifications are necessary. Improvements over adaptive rejection sampling
are rather small and can be seen in figure 4 (the lower boundary of the range gives
a good estimate for the optimal choice of construction points).
5. The Algorithm
Algorithm arou consists of three main parts:
(1) Construct the starting enveloping polygon P e and squeeze polygon P s in routine
arou start. Here we have to take care about a possibly bounded domain
and the two cases described in section 2. The starting points must be provided
(e.g. by using equidistributed points as described in section 4).
(2) Sample from the given distribution in routine arou sample.
(3) Add a new construction point with routine arou add whenever we fall into P_e \ P_s.
We store the envelope in a list of segments (object 1). When using this algorithm
we first have to initialize the generator by calling arou start. Then sampling can
be done by calling arou sample.
object 1 segment
parameter variable definition / remark
left construction point c i
right construction point c i+1 pointer, stored in next segment
tangent at left point a
tangent at right point a i+1 pointer, stored in next segment
intersection point m i
area inside/outside squeeze A in
, A out
accumulated area A cum
fast inversion
Algorithm arou start
Require: density f(x), derivative f 0 (x);
domain
2: a 0 / (cos(arctan(x tangent line for So
3: a k+1 / (cos(arctan(x k tangent line for S k
4: for
5: if f(x
7: a i;v / \Gammaf 0
8: add S i to list of segments.
cannot be used as construction point
9: for all segments S i do
10: insert c i+1 and a i+1 . = already stored in next segment in list
12: compute A in
i and A cum
13: check if polygon P e is bounded.
14: return list of segments.
Algorithm arou sample
Require: density f(x), list of segments S i .
1: loop
2: generate R - U(0; 1).
3: find smallest i such that A cum
use guide table
4: R / A cum
5: if R - A in
return
7:
9: generate R 2 - U(0; 1).
10: if R 1
14: if number of segments ! maximum then
15: call arou add with X , S i .
17: return X .
Algorithm arou add
Require: density f(x), derivative f 0 (x); new construction point xn ; segment S r .
1: if f(xn
cannot add this point
2: return
3: c n;2 /
4: a n;v / \Gammaf 0
5: insert Sn into list of segments. = Take care about c i+1 and a
remove old segment S r from list.
7: compute mn .
8: compute A in
and A out
9: for all segments S i do
10: compute A cum
11: return new list of segments.
To implement this algorithm, a linked list of segments is necessary. Whenever
the accumulated areas A^cum_i
are (re-)calculated, a guide table has to be made. Using linear search might
be a good method for finding S i when only a few random variates are sampled.
Special care is necessary when m i is computed in arou start and arou add.
There are three possible cases for numerical problems when solving the corresponding
linear equation:
(1) The vertices c i and c i+1 are very close and (consequently) jS i j is very small.
Here we simply reject c i+1 as new construction point.
are very close to c very small.
(3) The boundary of A between c i and c i+1 is almost a straight line and A out
is
(almost) 0. In this case we set m
A possible way to define "very small" is to compare such numbers with the smallest
positive " with (M in the used programming language. M denotes the
magnitude of the maximum of the density function. (In ANSI C for
defined by the macro DBL EPSILON.)
It is important to check whether m i is on the outer side of the secant through
c i and c i+1 . This condition is violated in arou start when the polygon P e is
unbounded. It may be violated in arou start and arou add when A is not convex.
A C implementation. A test version of algorithm arou is coded in C and available
by anonymous ftp (Leydold 1999) or by email request from the author.
6. A note on the quality of random numbers
The new algorithm is a composition method, similar to the acceptance-complement
method (see Devroye (1986, § II.5)). We have f(x) = (|P_s|/|A|) g_s(x) + (1 - |P_s|/|A|) g_r(x),
where g_s(x) is the distribution defined by the squeeze region and g_r(x) the distribution defined by the remaining region A \ P_s.
By theorem 1 the algorithm is exact, i.e. the generated random variates have the
required distribution. However defects in underlying uniform random number generators
may result in poor quality of the non-uniform random variate. Moreover
the transformation into the non-uniform random variate itself may cause further
deficiencies.
Although there is only little literature on this topic, the ratio-of-uniforms method
in combination with any linear congruential generator (LCG) was reported to have
defects (Hörmann 1994a; Hörmann 1994b). Due to the lattice structure of random
pairs generated by an LCG there is always a hole without a point, with a probability
of order 1/sqrt(m), where m
is the modulus of the LCG.
Random variates generated by the inversion inherit the structure of the underlying
uniform random numbers and consequently their quality. We consider this
as a great advantage of this method, since generators whose structural properties
are well understood and precisely described may look less random, but those that
are more complicated and less understood are not necessarily better. They may
hide strong correlations or other important defects. "One should avoid generators
without convincing theoretical support." This statement by L'Ecuyer (1998) on
building uniform random number generators is also valid for non-uniform distribu-
tions. Other methods may have some hidden inferences, which make a prediction
of the quality of the resulting non-uniform random numbers impossible (Leydold,
Leeb, and H-ormann 1999).
Notice that a random variate with density g s (x) is generated by inversion. Thus
as the ratio ρ tends to 0, most of the random variates are generated by inversion by the
new algorithm. As an immediate consequence, for small ρ the new generator avoids
the defects of the basic ratio-of-uniforms method. Figure 5 shows scatter plots
of all overlapping tuples using the "baby" generator
(a) shows the underlying generator. (b)-(f) show
the tuples (\Phi(u 0 different number
of construction points using the equidistribution method (\Phi denotes the cumulated
distribution function of the standard normal distribution).
We have made an empirical investigation using M-Tupel tests (Good 1953;
Marsaglia 1985) in the setup of Leydold, Leeb, and H-ormann (1999) with the standard
normal distribution and various numbers of construction points. We have
used a linear congruential generator fish by Fishman and Moore (1986), an explicit
inversive congruential generator (Eichenauer-Herrmann 1993), and a twisted
GFSR generator (tt800 by Matsumoto and Kurita (1994)); at last the infamous
randu (again an LCG) as an example of a generator with bad lattice structure (see
Park and Miller (1988)). These tests have demonstrated that for a small ratio ρ,
the quality of the normal generators is strongly correlated with the quality of the
underlying uniform random number generator. In particular, using randu results in a
normal generator of bad quality. Notice however that this correlation does not exist
if ρ is not close to 0. Indeed, using only 2 or 4 construction points results in a normal
generator which might be better (e.g. fish in our tests) or worse (e.g. randu)
than the underlying generator.
(a) uniform (b)
(c) r
Figure
5. Scatter plots of "baby" generator
mod 1024 (a) and of normal variates using algorithm arou with 2,
4, 6, 29 and 75 equidistributed construction points (b-f).
7. Possible Variants
Non-convex region. The algorithm can be modified to work with non-convex
region A f . Adapting the idea of Evans and Swartz (1998) we have to partition A f
into segments using the inflection points of the transformed density with transformation
T(x) = -1/sqrt(x). In each segment of A_f where T(f(x)) is not concave, we
have to use secants for the boundary of the enveloping polygon P e and tangents for
the squeeze P s . Notice that the squeeze region in such a segment is a quadrangle
and has to be triangulated.
Multivariate distributions. Wakefield, Gelfand, and Smith (1991) and Stef-anescu
and V-aduva (1987) have generalized the ratio-of-uniforms method to multivariate
distributions. Both use rejection from an enclosing multidimensional rectangle.
However the acceptance probability decreases very fast for higher dimension. For
the multivariate normal distribution in four dimensions it is below 1%. Using polyhedral
envelopes similar to Leydold and Hörmann (1998) or Leydold (1998) is possible
and increases the acceptance probability. However, this requires some additional research.
Acknowledgements
The author wishes to thank Hannes Leeb for helpful discussions on the quality
of random number generators.
--R
Computer methods for efficient sampling from largely arbitrary statistical distributions.
Random variate generation by using the ratio-of-uniforms method
A Guide to Simulation.
On generating random variates from an empirical distribution.
Some simple gamma variate generators.
Principles of Random Variate Generation.
The optimal selection of hat functions for rejection algorithms.
A simple algorithm for generating random variates with a log-concave density
Mathematical aspects of various methods for sampling from classical distributions.
Statistical independence of a new class of inversive congruential pseudorandom numbers.
Random variable generation using concavity properties of transformed densities.
see erratum
Adaptive rejection sampling for Gibbs sampling.
Applied Statistics
The serial test for sampling numbers and other tests for randomness.
A rejection technique for sampling from T-concave distri- butions
private communication.
Transformations and random variate gen- eration: Generalised ratio-of-uniforms methods
Computer generation of random variables using the ratio of uniform deviates.
Random number generation.
A fast normal random number generator.
A rejection technique for sampling from log-concave multi-variate distributions
AROU user manual.
A sweep-plane algorithm for generating random tuples in simple polytopes
Higher dimensional properties of non-uniform pseudo-random variates
A current view of random number generators.
Twisted GFSR generators II.
Random number generators: good ones are hard to find.
Random number generation for the normal and gamma distributions using the ratio of uniforms method.
The ratio of uniforms approach for generating discrete random variates.
On computer generation of random vectors by transformations of uniformly distributed vectors.
Efficient generation of random variates via the ratio-of-uniforms method
University of Economics and Business Administration
--TR
An exhaustive analysis of multiplicative congruential random number generators with modulus 2<supscrpt>31</>-1
A guide to simulation (2nd ed.)
On computer generation of random vectors by transformation of uniformly distributed vectors
Random number generators: good ones are hard to find
Mathematical aspects of various methods for sampling from classical distributions
The ratio of uniforms approach for generating discrete random variates
A fast normal random number generator
A note on the quality of random variates generated by the ratio of uniforms method
Twisted GFSR generators II
A rejection technique for sampling from <italic>T</italic>-concave distributions
A rejection technique for sampling from log-concave multivariate distributions
A sweep-plane algorithm for generating random tuples in simple polytopes
Computer Generation of Random Variables Using the Ratio of Uniform Deviates
--CTR
Leydold , Gerhard Derflinger , Gnter Tirler , Wolfgang Hrmann, An automatic code generator for nonuniform random variate generation, Mathematics and Computers in Simulation, v.62 n.3-6, p.405-412, 3 March
John T. Kent , Patrick D. L. Constable , Fikret Er, Simulation for the complex Bingham distribution, Statistics and Computing, v.14 n.1, p.53-57, January 2004
Josef Leydold, Short universal generators via generalized ratio-of-uniforms method, Mathematics of Computation, v.72 n.243, p.1453-1471, July
Josef Leydold, A simple universal generator for continuous and discrete univariate T-concave distributions, ACM Transactions on Mathematical Software (TOMS), v.27 n.1, p.66-82, March 2001
Wolfgang Hrmann , Josef Leydold, Continuous random variate generation by fast numerical inversion, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.13 n.4, p.347-362, October
W. Hrmann , J. Leydold, Random-number and random-variate generation: automatic random variate generation for simulation input, Proceedings of the 32nd conference on Winter simulation, December 10-13, 2000, Orlando, Florida
Dong-U Lee , Wayne Luk , John D. Villasenor , Peter Y. K. Cheung, A Gaussian Noise Generator for Hardware-Based Simulations, IEEE Transactions on Computers, v.53 n.12, p.1523-1534, December 2004 | t-concave;rejection method;ratio of uniforms;universal method;nonuniform;adaptive method;random-number generation;log-concave |
348028 | Power optimization of technology-dependent circuits based on symbolic computation of logic implications. | This paper presents a novel approach to the problem of optimizing combinational circuits for low power. The method is inspired by the fact that power analysis performed on a technology mapped network gives more realistic estimates than it would at the technology-independent level. After each node's switching activity in the circuit is determined, high-power nodes are eliminated through redundancy addition and removal. To do so, the nodes are sorted according to their switching activity, they are considered one at a time, and learning is used to identify direct and indirect logic implications inside the network. These logic implications are exploited to add gates and connections to the circuit; this may help in eliminating high-power dissipating nodes, thus reducing the total switching activity and power dissipation of the entire circuit. The process is iterative; each iteration starts with a different target node. The end result is a circuit with a decreased switching power. Besides the general optimization algorithm, we propose a new BDD-based method for computing satisfiability and observability implications in a logic network; futhermore, we present heuristic techniques to add and remove redundancy at the technology-dependent level, that is, restructure the logic in selected places without destroying the topology of the mapped circuit. Experimental results show the effectiveness of the proposed technique. On average, power is reduced by 34%, and up to a 64% reduction of power is possible, with a negligible increase in the circuit delay. | INTRODUCTION
Excessive power dissipation in electronic circuits reduces reliability and battery
life. The severity of the problem increases with the level of transistor integra-
tion. Therefore, much work has been done on power optimization techniques at
all stages of the design process. During high-level design, power dissipation can
be reduced through algorithmic transformations [Chandrakasan et al. 1995], architectural
choices [Chandrakasan and Brodersen 1995], and proper selection of the
high-level synthesis tools [Macii et al. 1997]. At the logic level-the focus of this
paper-the main objective of low-power synthesis algorithms is the reduction of
the switching activity of the logic, weighted by the capacitive load. Logic optimization
may occur at both the technology-independent and the technology-dependent
stages of the synthesis flow. At the technology-independent stage, combinational
circuits are optimized by two-level minimization [Bahar and Somenzi 1995; Iman
and Pedram 1995b], don't care based minimization [Shen et al. 1992; Iman and
Pedram 1994], logic extraction [Roy and Prasad 1992; Iman and Pedram 1995a],
and selective collapsing [Shen et al. 1992]. At the technology-dependent stage, technology
decomposition [Tsui et al. 1993] and technology mapping [Tsui et al. 1993;
Tiwari et al. 1993; Lin and de Man 1993] methods have been proposed. Finally,
after an implementation of the circuit is available, power can still be reduced by
applying technology re-mapping [Vuillod et al. 1997] and gate resizing [Bahar et al.
1994; Coudert et al. 1996].
It is difficult to measure the power dissipation of technology-independent circuits
with a dependable level of accuracy. Therefore, we propose a method that can be
applied to technology mapped circuits, and that is based on the idea of reducing the
total switching activity of the network through redundancy addition and removal.
Previous work on this subject includes the methods proposed in [Cheng and
Entrena 1993; Entrena and Cheng 1993; Chang and Marek-Sadowska 1994], in
which a set of mandatory assignments is generated for a given target wire. A set of
candidate connections is then identified. Each candidate connection, when added
to the circuit, causes the target fault to become untestable and therefore the faulty
connection to become redundant. However, since the additional connection may
change the circuit's behavior, a redundancy check is needed to verify that the new
connection itself is redundant before it may be added to the circuit.
Another ATPG-based approach was proposed in [Rohfleish et al. 1996]. That
technique uses an analysis tool introduced in [Rohfleish et al. 1995] to identify permissible
transformations on the network that may reduce power dissipation. The
method for finding permissible transformations is simulation-based; implications
are classified into C1-, C2-, and C3-clauses and bit-parallel fault simulation is performed
to eliminate most of the clauses that are invalid. The remaining potentially
valid clauses are combined to create different clause combinations, each of which
is checked for validity using ATPG. As more complex clauses are included, the
number of combinations to consider can increase dramatically.
Other work in the area of redundancy addition and removal uses recursive learning
to guide in the process. For instance, the work proposed in [Kunz and Menon
1994] introduces an ATPG-based method for identifying indirect implications, which
may indicate useful transformations of a circuit. Once an implication is identified,
it may be used to add a redundant connection, which is guaranteed not to change
the behavior of the circuit. An additional redundancy elimination step is required
to identify what other redundant connections, if any, have been created by adding
this new connection.
In our method, we start from a circuit that is already implemented in gates from
a technology library, and we perform power analysis on it, so as to identify its
high and low-power dissipating nodes. We use a sophisticated learning mechanism
(related to those of [Trevillyan et al. 1986; Kunz and Pradhan 1992; Kunz 1993;
Jain et al. 1995]) to find satisfiability and observability implications in the circuit
in the neighborhood of the target nodes. Such implications are used to identify
network transformations that add and remove connections in the circuit (as done
in [Kunz and Menon 1994]), with the objective of eliminating the high-power nodes
or connections from them. The method is innovative in two main aspects; first,
it uses a powerful learning procedure based on symbolic calculations rather than
ATPG-based methods. This approach allows the identification of very general
forms of logic implications; second, it operates at the technology-dependent level;
this allows more accurate power estimates to drive the overall re-synthesis process.
Experimental results, obtained on a sample of the Mcnc'91 benchmarks [Yang 1991],
show the viability and the effectiveness of the proposed approach.
The rest of this manuscript is organized as follows. Section 2 gives definitions for
subsequent usage. In Section 3 we propose a symbolic procedure to compute logic
implications using learning. Section 4 describes the power optimization procedure
based on redundancy addition and removal. Section 5 is dedicated to experimental
results, and finally, Section 6 gives conclusions and directions for future work.
2. BACKGROUND
In this section we provide definitions of terms and introduce concepts to be used
in the rest of the paper. We first define characteristic functions and relations;
next, we discuss the concept of untestable faults and show how these faults may be
eliminated through redundancy removal. Finally, we show how logic implications
may be used to create new untestable faults in a circuit that may be subsequently
removed to create an overall better optimized circuit.
2.1 Characteristic Functions and Relations
Given a set of points S in the Boolean space B^n, it is possible to define
a function, called the characteristic function of S, that evaluates to
1 exactly for the points of B^n that belong to S. Formally: χ_S(x) = 1 if x ∈ S, and χ_S(x) = 0 otherwise.
This definition can be extended to arbitrary finite sets, provided that the objects
in the set are properly encoded with binary symbols.
Since characteristic functions are Boolean functions, they can be represented
and manipulated very efficiently through binary decision diagrams (BDDs) [Bryant
1986]. As a consequence, it is usually possible to handle much larger sets if the
BDDs of their characteristic functions are used instead of an explicit enumeration
of all the elements in the sets.
In this paper, we restrict our attention to relations, that is, to sets which are
subsets of some Cartesian product. Let S and Q be two sets, and let R ' S \Theta Q
be a binary relation (i.e., the elements of R are pairs of elements from S and Q).
Using different sets of variables, x to encode
the elements of S and y to encode the elements of Q, we can represent this relation through its characteristic
function χ_R(x, y). As an example, consider sets S = {grey, red, orange, green} and
Q = {GREY, RED, ORANGE}, and relation R = {(s, q) : s = lower_case(q)}. Let us
encode the elements of S using variables x_1, x_2, and
similarly encode the elements of Q using variables y_1, y_2.
The characteristic function
of relation R is then the Boolean function χ_R(x_1, x_2, y_1, y_2) that evaluates to 1 exactly for the encoded pairs of R.
That is, R = {(grey, GREY), (red, RED), (orange, ORANGE)}. Obviously, the definition of binary relation given
above can be easily extended to the case of n-ary relations, that is, relations which
are subsets of Cartesian products of order n.
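As an illustration, the characteristic function of the example relation could be evaluated as in the C sketch
below; in practice it would be stored as a BDD [Bryant 1986]. The binary encodings used here (grey = 0,
red = 1, orange = 2, green = 3; GREY = 0, RED = 1, ORANGE = 2) are our own assumption, chosen only to
make the sketch self-contained.

#include <stdio.h>

enum { GREY = 0, RED = 1, ORANGE = 2 };               /* codes for Q */
enum { grey = 0, red = 1, orange = 2, green = 3 };    /* codes for S */

/* chi_R(s, q) = 1 iff (s, q) is in R = {(s, q) : s = lower_case(q)}. */
int chi_R(int s, int q)
{
    return (s == grey && q == GREY) ||
           (s == red && q == RED) ||
           (s == orange && q == ORANGE);
}

int main(void)
{
    int s, q;
    for (s = 0; s <= green; s++)
        for (q = 0; q <= ORANGE; q++)
            printf("chi_R(%d, %d) = %d\n", s, q, chi_R(s, q));
    return 0;
}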
2.2 Circuits and Faults
A combinational circuit, C, is an acyclic network of combinational logic gates. If
the output of a gate, g_i, is connected to an input of a gate, g_j, then g_i is a fanin of
gate g_j and gate g_j is a fanout of gate g_i.
A combinational circuit may have a failure due to a wire being shorted to the
power source or ground. Such a failure may be observed as a stuck-at fault. That
is, under the failure, the circuit behaves as if the wire were permanently stuck-at-1
or stuck-at-0. We assume that single stuck-at faults are used to model failures in a
circuit. Let C be a combinational circuit, and let C f be the same circuit in which
fault f is present. Fault f is untestable if and only if the output behaviors of C
and C f are identical for any input vector applied to both C and C f .
2.3 Redundancy Addition and Removal
Any automatic test pattern generation program may be used to detect untestable
faults (e.g., [Sentovich et al. 1992]). The computed information may be used
to simplify the network by propagating the constant values (zero or one), due to
untestable stuck-at connections, throughout the circuit.
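The constant propagation step can be pictured with the following C sketch (illustrative data structures,
not the actual tool): once a connection is proven untestable for stuck-at-v, it is treated as the constant v,
and each fanout gate is either simplified or its output folded to a constant, which is then propagated in the
same way.

typedef enum { GATE_AND, GATE_OR } GateType;

typedef struct {
    GateType type;
    int      n_inputs;
    int     *inputs;        /* indices of fanin signals             */
    int      const_output;  /* -1 = not constant, otherwise 0 or 1  */
} Gate;

/* Input `pin` of gate g is tied to the constant `value` (0 or 1). */
void propagate_constant(Gate *g, int pin, int value)
{
    int controlling = (g->type == GATE_AND) ? 0 : 1;
    if (value == controlling) {
        g->const_output = controlling;          /* output becomes a constant   */
    } else {
        /* non-controlling value: drop the input, the gate shrinks by one pin */
        g->inputs[pin] = g->inputs[g->n_inputs - 1];
        g->n_inputs--;
    }
}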
Redundancy removal is one of the most successful approaches to logic optimization
(e.g., [Cho et al. 1993]). However, its effectiveness greatly depends on the
number of redundancies: For circuits that are 100% testable, redundancy removal
does not help. For this reason, techniques based on redundancy addition and removal
have been proposed.
The concept of redundancy addition and removal is best explained through an
example. In Figure 1(a) all stuck-at faults are testable. Therefore, no simplification
through redundancy removal is possible on the network as is. If gate G10 is transformed
into a 2-input NOR gate and the additional input is connected from the
inverted output of gate G6 (see shaded logic in Figure 1(b)), then the behavior of
the circuit at its primary outputs will remain unaltered; however, three previously
testable faults (shown with X's in Figure 1(b)) now become untestable. Through
redundancy removal, gate simplification, inverter chain collapsing, and DeMorgan
transformations, the circuit can be simplified as shown in Figure 1(e).
Fig. 1. Example of Redundancy Addition and Removal.
Adding redundant gates and connections to a circuit may increase area, delay,
and power consumption by an amount that may not be recoverable by the subsequent
step of redundancy removal. Furthermore, not all redundancies in a network
are necessarily due to sub-optimal design; automatic synthesis and technology mapping
tools sometimes resort to redundant gates insertion to increase the speed of
a digital design [Keutzer et al. 1991]. As a consequence, redundancy addition and
removal are delicate operations that should be performed within the constraints of
the objective function being minimized.
2.4 Logic Implications
In general, there may be a large variety of choices available in selecting the new
connections (and logic gates) that may be added to the original circuit so as to
introduce redundancies. Kunz and Menon have proposed an effective solution for
selecting these connections through a method derived from recursive learning [Kunz
and Menon 1994]. Recursive learning is the process of determining all value assignments
necessary for detection of a single stuck-at fault in a combinational circuit.
This process is equivalent to finding direct and indirect implications in the circuit,
that is, finding all the value assignments necessary for a given signal to take on
a specific value (satisfiability implications) or to make a given signal observable
(observability implications).
2.4.1 Direct Implications. If a value assignment can be determined by simple
propagation of other signal values through a circuit, then this is known as a direct
satisfiability implication. For example, consider the circuit in Figure 2, where the
signal assignment G9 = 1 has been made. This assignment directly implies specific value assignments for G7 and G8, as these are the only assignments for G7 and G8 that will justify the output of gate G9 to a value of 1.
[Figure 2 shows a circuit with inputs a, c, d, e, internal gates including G4, G6, G8, G9, and G10, and outputs x and y.]
Fig. 2. Example Circuit with Satisfiability Implications.
Similarly, if a value assignment such that a given node is observable can be
determined by simple propagation of other signal values through a circuit, then
this is known as a direct observability implication. For example, consider the circuit in Figure 3. For G2 to be observable, specific values must be assigned to G1 and G6, since these are the only assignments for G1 and G6 that will make gate G2 observable. That is, the observability of G2 directly implies those value assignments.
[Figure 3 shows a circuit with inputs a, c, d, e, internal gates including G3, G4, and G6, and output y.]
Fig. 3. Example Circuit with Observability Implications.
2.4.2 Indirect Implications. It has been shown in [Kunz and Menon 1994] that
only resorting to direct implications to perform redundancy addition and removal
may not provide enough options to achieve significant improvements on the circuit
being optimized. It is thus of interest to look at another type of implication, namely indirect implications.
To illustrate the concept of indirect implication, consider again the circuit of
Figure
2, and suppose that the signal assignment G7 = 1 has been made. We may optionally assign either of the inputs of G7 to justify this assignment; however, neither assignment is essential. We therefore conclude that there are no essential direct implications we can make from this assignment. However, upon closer inspection, we can determine that the value assignment G7 = 1 indirectly implies a further value assignment. If we temporarily make the first justifying assignment, certain additional assignments are both essential to satisfy G7 = 1. Likewise, by temporarily making the second justifying assignment, we find that other assignments are essential. In either case, one particular assignment is essential in both branches; we can conclude that it is an essential assignment for, and is indirectly implied by, G7 = 1.
The indirect implication shown in the previous example falls in the category of
satisfiability implications. As an example of an indirect observability implication,
consider again the circuit in Figure 3. For G3 to be observable, a particular signal condition must be true (and vice versa). Therefore, the observability of G3 indirectly implies that condition.
Satisfiability implications have a bi-directional property, which does not apply
to observability implications. For example, for the circuit shown in Figure 2 it can be shown that a satisfiability implication can be reversed and complemented (that is, replaced by its contrapositive) and still hold; this is a general property of satisfiability implications. However, this property does not hold for observability implications. Referring to the previous example of Figure 3, although the observability of G2 implies certain value assignments (and, with them, the observability of G1), the reverse does not hold: the observability of G1 does not imply the observability of G2. This is because it cannot be assumed that G1 and G2 are always
observable under the same conditions. This lack of bi-directionality makes adding
redundant logic to the circuit more constrained when observability rather than
satisfiability implications are used. For this reason, implications of the two types
must be handled separately when applying optimizations based on redundancy
addition and removal.
In [Kunz and Menon 1994], Kunz and Menon have observed that the presence
of indirect implications is a good indication of sub-optimality of a circuit. This is
especially true for satisfiability implications. We have taken inspiration from this
approach to implement the power optimization algorithm we propose in this paper.
However, certain implications should not be eliminated from consideration simply
because they were found through direct propagation of logic values. In fact, most
observability implications are found "directly", since they are determined primarily
through forward propagation of implications. In the next section we discuss how
direct, as well as indirect implications can be computed symbolically using BDD-based
data structures. Then, in Section 4, we outline the overall optimization
procedure.
3. COMPUTING IMPLICATIONS SYMBOLICALLY
In this section, we introduce our symbolic procedure to compute logic implications
through recursive learning. In what follows, a literal is either a variable or its
complement; a cube is a product of literals.
We start with a set of relations T = {T_1, T_2, ..., T_m} defined over a universe of n Boolean variables y = (y_1, ..., y_n). Each T_j can be thought of as a characteristic function for the gate whose output variable is y_j, describing its functional behavior. For example, if j is a NAND gate with inputs y_i and y_k, then T_j = y_j \oplus (y_i y_k); if j is a NOR gate, then T_j = y_j \oplus (y_i + y_k). If the Boolean variables are assigned in such a way that T_j evaluates to 0, then the variable assignments violate the required behavior of the gate and are invalid. In this way, an entire network may be described within this set {T_j}.
T_j may also be used to express the observability relation for a gate. For instance, consider gate G4 in Figure 2. For the signal at the output of gate G4 to be observable at either primary output x or y, both its observability relation T^{Obs}_{G4} and the satisfiability relations T^{Sat} of the gates along the propagation paths must hold; in this circuit the relevant condition involves the signals c and e and the output of G4 (an expression of the form ce + G4(ce)').
Now, if we are given an initial assignment (i.e., assertion A(y)), we may compute its implications by applying A(y) to the set {T_j}:

I(y) = A(y) · ∏_j T_j(y).     (3)

Furthermore, we may wish to extract only the necessary, or essential, literals from the implication I(y):

c(y) = Essential(I(y)), the product of all literals l such that I(y) ⇒ l.     (4)

For example, if the assertion applied to the characteristic function yields the implication I = y_3(y_1 + y_2), then, in order to satisfy the characteristic function given this assertion, only y_3 is strictly necessary (either y_1 or y_2 may satisfy the remaining factor). Therefore, the implication y_3 is stored separately in c(y).
In this way, given an initial assertion A(y), c(y) is a list of essential gate assignments
over the entire network. Implemented using BDDs, c(y) is represented as a
single cube. As will be shown later, essential literals require simple redundant
logic to be added to the network, and therefore it may be beneficial to store them
separately.
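The following Python sketch illustrates Equations 3 and 4 on a small invented two-gate network; consistent assignments are enumerated explicitly rather than manipulated as BDDs, so it models the computation, not the implementation.

from itertools import product

# A two-gate network invented for illustration: y3 = NAND(y1, y2), y4 = NOR(y1, y2).
VARS = ("y1", "y2", "y3", "y4")

def T_nand(out, i1, i2):          # 1 iff out is consistent with NAND(i1, i2)
    return out == 1 - (i1 & i2)

def T_nor(out, i1, i2):           # 1 iff out is consistent with NOR(i1, i2)
    return out == 1 - (i1 | i2)

def implications(assertion):
    """I(y) = A(y) * prod_j T_j(y), represented as the set of consistent assignments."""
    sols = []
    for bits in product((0, 1), repeat=len(VARS)):
        v = dict(zip(VARS, bits))
        consistent = (T_nand(v["y3"], v["y1"], v["y2"]) and
                      T_nor(v["y4"], v["y1"], v["y2"]))
        if consistent and all(v[x] == val for x, val in assertion.items()):
            sols.append(v)
    return sols

def essential(sols):
    """c(y): the cube of literals that hold in every consistent assignment (Eq. 4)."""
    if not sols:
        return None                            # assertion inconsistent with the network
    return {x: sols[0][x] for x in VARS if all(s[x] == sols[0][x] for s in sols)}

# Asserting y3 = 0 forces y1 = 1 and y2 = 1 (the only way a NAND output is 0),
# and therefore y4 = 0 as well; all four literals are essential.
print(essential(implications({"y3": 0})))     # {'y1': 1, 'y2': 1, 'y3': 0, 'y4': 0}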
3.1 Direct Satisfiability Implications
Given a set of characteristic functions, T^{Sat} = {T^{Sat}_j}, and an initial assertion, A,
we compute all direct satisfiability implications using the procedure impSatDirect
shown in Figure 4. Recall that a direct implication is one that can be found by propagating
the immediate effects of the logic assertion forward and backward through
the specified set of relations T , without case analysis. The essential implications
are returned separately in the cube imp. In our implementation, T Sat , A, and imp
are all represented as BDDs.
The direct implications are computed as follows. First, we initialize the list of
implications to be the cube-free, or essential, part of A. Next, we consider all
possible direct implications, one at a time, on the gates from the set Q (line 2).
procedure impSatDirect(A, T^{Sat}) {
 1.   imp = Essential(A);
 2.   Q = {k | x_k is a fanout of some x_j in imp} ∪ {j | x_j is in imp};
 3.   while (Q ≠ ∅) {
 4.       k = select_and_remove_one(Q);
 5.       T^{Sat}_k = T^{Sat}_k |_imp ;
 6.       if (T^{Sat}_k == zero) return(zero, T^{Sat});
 7.       t = Essential(T^{Sat}_k);
 8.       if (t == one) continue;
 9.       imp = imp · t;
10.       Q = Q ∪ {k | x_k is a fanout of some x_j in t} ∪ {j | x_j is in t};
      }
      return(imp, T^{Sat});
}
Fig. 4. Procedure impSatDirect.
The set Q contains all fanouts of variables in the support of the cube imp, plus all
the variables in imp itself.
The first step of the while loop selects one gate from Q, and removes it from
the set. Inside the loop (lines 4 to 10), procedure Essential is called, according to
Equation 4, to discover any new essential implication. We exit the loop and return (line 6) if T^{Sat}_k reduces to zero, implying that the assertion A is logically inconsistent with one of the relations in the set of sub-relations {T^{Sat}_j}.
The literal function t (line 7) represents newly discovered essential implications.
On each pass through the while loop, these new implications are added to the
global implications cube imp (line 9), thereby accumulating all the implications of
the original assertion A. As these implication variables are found, they are also
appended to the set Q (line 10) in order to evaluate their effect on the rest of the
network.
In addition to the cube imp, the procedure impSatDirect also returns T^{Sat}, which has now become a reduced set of relations comprised of the cofactor of each original relation with respect to the essential implications imp (that is, T^{Sat}|_imp). Notice that T^{Sat}|_imp is itself a set of implications, albeit non-essential ones. These
more general implications may also be useful for optimizing a network, though they
are not as straight-forward to apply to the network. This will be discussed in more
detail later in the paper.
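A rough Python model of the fixpoint computation performed by impSatDirect is sketched below. The three-gate network is invented, each relation is kept as an explicit set of locally consistent rows rather than a BDD, and the revisit policy is simplified (every gate is re-queued when new literals are learned, instead of only the affected fanouts).

from itertools import product

# A small invented network: g4 = AND(a, b), g5 = AND(a, c), g6 = OR(g4, g5).
GATES = {"g4": ("AND", ("a", "b")), "g5": ("AND", ("a", "c")), "g6": ("OR", ("g4", "g5"))}
OP = {"AND": lambda p, q: p & q, "OR": lambda p, q: p | q}

def relation(g):
    """T_g as a list of locally consistent assignments {variable: value}."""
    kind, ins = GATES[g]
    return [dict(zip(ins, bits), **{g: OP[kind](*bits)})
            for bits in product((0, 1), repeat=len(ins))]

def cofactor(rows, imp):
    """Keep only the rows consistent with the current implication cube."""
    return [r for r in rows if all(r.get(v, val) == val for v, val in imp.items())]

def essential(rows):
    """Variables fixed to a single value across all remaining rows."""
    return {v: rows[0][v] for v in rows[0] if len({r[v] for r in rows}) == 1}

def imp_sat_direct(assertion):
    T = {g: relation(g) for g in GATES}
    imp = dict(assertion)
    queue = list(GATES)
    while queue:
        g = queue.pop(0)
        T[g] = cofactor(T[g], imp)
        if not T[g]:
            return None, T                                  # assertion inconsistent
        new = {v: b for v, b in essential(T[g]).items() if v not in imp}
        if not new:
            continue
        imp.update(new)
        queue += [k for k in GATES if k not in queue]       # re-visit affected gates
    return imp, T

imp, _ = imp_sat_direct({"g4": 1})
print(imp)   # {'g4': 1, 'a': 1, 'b': 1, 'g6': 1}: backward and forward propagation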
3.2 Direct Observability Implications
If observability implications are also to be used in the evaluation of direct im-
plications, procedure impSatDirect may be expanded such that relation T^{Obs}_j is evaluated along with T^{Sat}_j whenever signal j is on the observability frontier. The
procedure is now renamed impDirect and shown in Figure 5.
Lines 1 to 10 are almost identical to those of procedure impSatDirect. The
observability implications are computed beginning on line 12. If the gate k is on
the frontier, we find new implications in the same way as in impSatDirect, only
this time using the observability characteristic functions, T^{Obs}. What makes the
procedure impDirect(A, T^{Sat}, T^{Obs}) {
 1.   imp = Essential(A);
 2.   Q = {k | x_k is a fanout of some x_j in imp} ∪ {j | x_j is in imp};
 3.   frontier = {k | x_k is on frontier and has same fanout as some x_j in imp};
 4.   while (Q ≠ ∅) {
 5.       k = select_and_remove_one(Q);
 6.       T^{Sat}_k = T^{Sat}_k |_imp ;
 7.       if (T^{Sat}_k == zero) return(zero, T^{Sat}, T^{Obs});
 8.       t = Essential(T^{Sat}_k);
 9.       if (t == one) continue;
10.       imp = imp · t;
11.       Q = Q ∪ {k | x_k is a fanout of some x_j in t};
          frontier = frontier ∪ {k | x_k is on frontier and has same fanout as some x_j in t};
          /* Begin Observability Calculations. */
12.       if (k is on frontier) {
13.           T^{Obs}_k = T^{Obs}_k |_imp ;
14.           if (T^{Obs}_k == zero) {
15.               frontier = frontier - {k};
16.               if (frontier == ∅) return(zero, T^{Sat}, T^{Obs});
17.           } else {
18.               t = Essential(T^{Obs}_k);
19.               imp = imp · t;
20.               Q = Q ∪ {j | x_j is in t};
21.               if (T^{Obs}_k == one) {
                      /* Frontier is pushed forward */
22.                   foreach (s in fanout(k))
23.                       if (k is observable from s)
24.                           frontier = frontier ∪ {s};
25.               }
26.               if (all fanouts of k have been implied)
27.                   frontier = frontier - {k};
              }
          }
      }
      return(imp, T^{Sat}, T^{Obs});
}
Fig. 5. Procedure impDirect.
code more complicated is in updating the frontier.
If the reduced observability relation T^{Obs}_k evaluates to 1, then the fault is observable
through at least one fanout of k and the frontier should be pushed forward to
include these fanout gates (line 24). Furthermore, if all the fanouts of gate k have
been implied, then k is removed from the frontier (line 27). Finally, if the frontier
ever becomes 0 (i.e., empty), then the assertion is not observable, and never will
be. This case is checked in line 16.
3.3 Indirect (Recursively Learned) Implications
Using the symbolic direct implications procedure from the previous section, we can
find indirect, or recursively learned, implications by setting temporary orthogonal
constraints on the initial assertion, finding implications based on these constraints,
and extracting the common implications as the full set of implications. We define orthogonal constraints as a set of functions {f_1(y), ..., f_m(y)} such that f_i(y) · f_j(y) = 0 for i ≠ j. Although we may use any orthogonal set of functions, to simplify the recursive implication procedure, we use the orthogonal constraints f(y) and f'(y).
Say that we extract a function f(y) from the network, and add it to the original assertion A(y) such that:

A_1(y) = A(y) · f(y),     A_0(y) = A(y) · f'(y).

We apply each assertion A_1(y) and A_0(y) separately to the network using Equation 3 to get two different sets of implications, I_1(y) and I_0(y), respectively. For each variable y_j we keep a literal l_j only if both I_1(y) ⇒ l_j and I_0(y) ⇒ l_j, and take the product of the retained literals to obtain the new set of implications (both direct and indirect). That is, we retain
only the implications that are common to both I_1(y) and I_0(y). In addition, we may
choose to save the essential implications separately by applying Equation 4 to the
new set of implications. Notice that we may recursively apply new orthogonal constraints
to the set of transition relations to potentially find even more implications.
This is handled easily within the BDD environment, as shown in the pseudo-code
of
Figure
6.
The procedure indirectImps takes, as inputs, the assertion A and the relation sets T^{Sat} and T^{Obs}. In addition, it takes an input, level, representing the
recursion level of the recursive call, initially set to 0. When level exceeds a specified
limit maxlevel, the search for further implications is abandoned. Procedure
indirectImps returns a cube, impCube, representing the set of all variables (direct
and indirect) that are implied to constant values.
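A corresponding sketch of the recursive case split, reusing imp_sat_direct and the GATES network from the sketch in Section 3.1, is given below; it splits on a single unassigned variable (the orthogonal constraints y and y') and keeps only the implications common to both branches.

def indirect_imps(assertion, level=0, maxlevel=1):
    """Recursive-learning sketch built on imp_sat_direct / GATES from the earlier
    sketch; not the BDD-based procedure of Figure 6, only a model of its behavior."""
    direct, _ = imp_sat_direct(assertion)
    if direct is None or level >= maxlevel:
        return direct
    all_vars = set(GATES) | {v for _, ins in GATES.values() for v in ins}
    unassigned = sorted(all_vars - set(direct))
    if not unassigned:
        return direct
    y = unassigned[0]
    i1 = indirect_imps(dict(direct, **{y: 1}), level + 1, maxlevel)
    i0 = indirect_imps(dict(direct, **{y: 0}), level + 1, maxlevel)
    if i1 is None:
        return i0                   # the y = 1 branch is inconsistent: keep the other one
    if i0 is None:
        return i1
    common = {v: b for v, b in i1.items() if v != y and i0.get(v) == b}
    return dict(direct, **common)

print(indirect_imps({"g6": 1}))     # {'g6': 1, 'a': 1}: a = 1 is learned only indirectly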
It should be noted that an indirect implication discovered by propagating implications backwards is often found to be a direct implication by propagating implications forward. For example, the indirect satisfiability implication found in Figure 2 can also be found as a direct implication by reversing and complementing it; this
procedure indirectImps(A, T^{Sat}, T^{Obs}, level) {
 1.   (directImps, T^{Sat}, T^{Obs}) = impDirect(A, T^{Sat}, T^{Obs});
 2.   if (∀i, (T^{Sat}_i == one)) return(directImps);   /* nothing more to learn */
 3.   if (level >= maxlevel) return(directImps);
 4.   y = select_splitting_variable();
 5.   imps1 = indirectImps(directImps · y,  T^{Sat}, T^{Obs}, level + 1);
 6.   imps0 = indirectImps(directImps · y', T^{Sat}, T^{Obs}, level + 1);
 7.   impCube = directImps · (literals common to imps1 and imps0);
 8.   return(impCube);
}
Fig. 6. Procedure indirectImps.
is the same implication by the law of contraposition. As previously mentioned,
we make the distinction between indirect and direct implications only because it is
often a good way of sorting out the more promising implications. Indeed, we may
not want to eliminate an implication simply because it is not "indirectly obtained".
If implications are found only by backward propagation, this may be a reasonable
filter to use. However, if we are interested in observability-based implications as
well, these can only be found in the forward direction, so using such a filter may
not be a good solution. This point is discussed further in Section 5.
Using BDDs to compute and store indirect implications may seem inefficient
compared to doing a simple analysis of the topology of a circuit. This may in
fact be true if all we are interested in are single-variable implications derived from
satisfiability assignments for single-literal assertions (for example, (y_i = a) ⇒ (y_j = b) for a, b ∈ {0, 1}). However, by computing the implications symbolically, we
are better suited for finding more general implications. That is, our procedure
can store, manipulate, and compute general (i.e., more complex) expressions with complexity similar to what would be required if expressions were restricted to simple cubes (in fact, by
separating the essential implications from the non-essential ones, we have available
both).
4. POWER OPTIMIZATION PROCEDURE
We now describe our implication-based optimization procedure for reducing power
dissipation. The procedure consists of four main steps, described in detail in the
following Sections 4.1 to 4.4.
4.1 Selecting an Assertion Function and Finding Its Implications
Computing all the indirect implications of a large network, as shown in Section 3,
can be computationally expensive. Therefore, it is important to prune the search
for implications by limiting the recursion level and carefully selecting the assertion
function A(y) upon which the implications are found. To reduce the cost even
further, in [Bahar et al. 1996] we have proposed to extract a sub-network and find
the implications only within the confines of this sub-network. Note that, although
the implications can be found only within the boundaries of the sub-network, all
implications must hold in the context of the entire network.
Indirect implications are often present specifically in circuits containing recon-
vergent fanout. Reconvergent fanout is the presence of two or more distinct paths
with a common input gate (or fanout stem), leading to a common output gate,
and with no other gate in common. The gate where the paths reconnect is called
the reconvergence gate. An example of a sub-network with reconvergent fanout is
shown in Figure 7. Two distinct paths from inputs c and d reconverge at gate G9.
The experiments in [Bahar et al. 1996] suggested that, in order to invest the time
finding implications only where it is most useful, the search for indirect implications
should be limited to sub-networks containing reconvergent fanout, where the
reconvergence gate itself is used as the initial assertion A(y). In this way, implications
may be found through (predominantly) backward propagation of signal values
toward the primary inputs.
[Figure 7 shows the circuit of Figure 2 (inputs a, c, d, e; gates G4, G6, G8, G9, G10; outputs x and y) with the reconvergence gate marked and the extracted sub-network shaded.]
Fig. 7. A Network of Gates with an Extracted Sub-Network (Shown in Grey).
While the above approach may work well if one is concerned only with satisfiability
implications, it may be too limiting if observability implications are also
to be exploited. Furthermore, as mentioned in Section 3.3, distinguishing between
indirect and direct implications becomes a less useful filter in sorting out the more
promising implications, since many of the observability ones are found through
direct forward propagation of signal values.
Instead of asserting the reconvergent gate, our new strategy selects a gate with
relatively low power dissipation (due to either low switching activity, low capacitive
load, or both). Once a suitable implication is found, the low-power assertion gate
is included in the added redundant logic. Using a low-power assertion gate has
a minimal impact on potentially increasing its own power dissipation (switching
activity and capacitive load are already low). In addition, using this signal as an
input to other gates may have a dampening effect on the switching activity of other
gates. For example, if an AND gate has a high switching activity, then connecting
a signal which tends toward a 0 value most of the time (and switches infrequently)
may prevent the output of the AND gate from switching as frequently. Moreover,
these additions may allow the removal of other high-power connections or gates.
Therefore, although the assertion gate's power is increased, the net result is an
overall decrease in power consumption.
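The selection step can be sketched as a simple ranking, as below; the switching-activity and capacitance figures are invented, and the product used as the power estimate is only one plausible cost function.

# Rank candidate assertion gates by an estimated power measure and try the
# lowest-power candidates first (gate names and numbers are made up).
gates = {
    "G4": {"activity": 0.38, "cap": 0.030},
    "G6": {"activity": 0.05, "cap": 0.012},
    "G8": {"activity": 0.27, "cap": 0.021},
}

def est_power(g):
    return gates[g]["activity"] * gates[g]["cap"]

candidates = sorted(gates, key=est_power)
print(candidates)       # ['G6', 'G8', 'G4'] -> try asserting G6 = 0 and G6 = 1 first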
Once an assertion gate is selected, its output value is alternately set to both 0
and 1, and the implication procedure finds any relation which exists given one of
the assertions. Since the assertion gate may exist anywhere within the network (or
sub-network), values will be propagated both "backward" and "forward" through
the logic.
4.2 Finding the Right Addition
Once we have found the implications for the given assertions on the selected gate,
we can use this information to add gates and/or connections to the circuit while
retaining the behavior of the original one at the primary outputs. We use a method
similar to data flow analysis [Trevillyan et al. 1986] to determine what these modifications are for a given assertion on gate x and its implication on gate y, where the implication gate y is in the transitive fanin of the assertion gate x.
Consider first the case where (x = 0) ⇒ (y = 0). This implication can also be expressed as x' ⇒ y'; equivalently, the combination x'y can never occur. Given the function F(x) realized at the output of gate x, the implication can therefore be expressed as a don't care condition, F(x)_{DC} = F(x)'y, for F(x). We may transform F to ~F by adding this don't care term to the output of F without changing the behavior at the primary outputs of the circuit:

~F(x) = F(x) + F(x)'y = F(x) + y.

In other words, the original circuit is modified by ORing the don't care term (i.e., the implicant gate y) with the output of gate x. Similarly, for the case (x = 0) ⇒ (y = 1):

~F(x) = F(x) + y'.

For the case (x = 1) ⇒ (y = 1), instead of using the don't care expression F(x)'y we use the analogous expression F(x)y', and transform F to ~F as:

~F(x) = F(x) · (F(x)y')' = F(x) · y.

In other words, the circuit is modified by ANDing the implicant gate y with the output of gate x.
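The behavior-preserving nature of these transformations is easy to check exhaustively on a toy circuit. In the Python sketch below, the circuit, the implication (x = 1) ⇒ (y = 1), and the signal names are all invented for the illustration; note that x is not in the transitive fanin of y, so the added connection cannot create a cycle.

from itertools import product

# Toy circuit: y = a OR b, x = a AND b, primary output z = NOT x.
# Here (x = 1) implies (y = 1), so y may be ANDed onto the output of x.
def z_original(a, b):
    y = a | b
    x = a & b
    return 1 - x

def z_modified(a, b):
    y = a | b
    x = (a & b) & y          # redundancy addition: AND the implicant y onto x
    return 1 - x

assert all(z_original(a, b) == z_modified(a, b) for a, b in product((0, 1), repeat=2))
print("primary outputs unchanged after ANDing the implicant onto x")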
As an example of how this method is applied, refer back to the circuit in Figures 1(a) and (b). We show the additional connection added due to the implication found for this circuit. According to the implication, we can modify the function at the output of gate G9 without changing the behavior of the circuit by inserting an OR gate fed by the implicant signal in the circuit. Notice that this OR gate added to the network is "absorbed" by the inverter G11, which now becomes a 2-input
gate. Note that it is essential that the assertion gate not be in the transitive
fanin of the implication gate, since replacing the function F(x) with ~F(x) would create
a cyclic network.
4.3 Finding and Removing the Redundancies
Once the redundant circuitry is added, we use the automatic test pattern generation
procedure implemented in SIS [Sentovich et al. 1992] to find the new
redundancies created in the network. Whether implications are found using the
entire network or only within the boundaries of a sub-network, finding and removing
redundancies should be done on the entire network.
We generate a list of possibly redundant connections. Since the newly added gates
are themselves redundant, we need to make sure that they are not included in the
list. The result of redundancy removal is order dependent; removing a redundant
connection from a network may create new redundancies, and/or make existing ones
no longer redundant. Since our primary objective is reducing power dissipation, we
sort the redundant connections in order of decreasing power dissipation and remove
them starting from the top of the list.
After ATPG, the identified redundant faults can be removed with the ultimate
goal of eliminating fanout connections from the targeted high power dissipating
node. Redundancy removal procedures such as the one implemented in SIS cannot
be used for this purpose for two main reasons. First, optimization occurs through
restructuring of the Boolean network. As a consequence, even if redundancy removal
operates on a technology mapped design, the end result of the optimization
is a technology independent description that requires re-mapping onto the target
gate library. This may lead to significant changes in the structure of the original
network. This is undesirable in the context of low-power re-synthesis, since the
network transformations made during re-synthesis are based on the original circuit
implementation. Second, redundancy removal usually targets area minimization,
and this may obviously affect circuit performance.
We have implemented our own redundancy removal algorithm, which resembles
the Sweep procedure implemented in SIS, but operates on the gates of a circuit
rather than on the nodes of a Boolean network. In addition, it performs a limited
number of transformations. Namely, the procedure (a) simplifies gates whose inputs
are constant, and (b) collapses inverter chains only when the original circuit
structure and performance are preserved.
Gate Simplification. Three simplifications are applicable to a given gate, G,
when one of its inputs is constant:
(1) If the constant value is a controlling value for G, then G is replaced by a
connection to either V dd or Ground, depending on the function of the gate.
(2) If the constant value is a non-controlling value for G, and G has more than two
inputs, then G is replaced by a gate, ~G, taken from the library and implementing
the same logic function as G but with one less input.
(3) If the constant value is a non-controlling value for G, and G is a two-input gate,
then G is replaced by an inverter or buffer.
Usually, cell libraries contain several gates implementing the same function, but
differing by their sizes and, therefore, by their delays, loads, and driving capabilities.
We select, as replacement gate ~G, the gate that has approximately the same driving
strength as the original gate G.
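A compact sketch of the three simplification rules is given below; the gate-type tables and the returned tuples are assumptions made for the example and do not reflect the tool's actual data structures.

CONTROLLING = {"AND": 0, "NAND": 0, "OR": 1, "NOR": 1}
OUTPUT_ON_CONTROL = {"AND": 0, "NAND": 1, "OR": 1, "NOR": 0}
INVERTING = {"NAND", "NOR"}

def simplify(kind, inputs):
    """inputs is a list of constants (0/1) and signal names; assumes at least one
    non-constant input remains when no controlling constant is present."""
    consts = [i for i in inputs if i in (0, 1)]
    live = [i for i in inputs if i not in (0, 1)]
    if not consts:
        return (kind, inputs)                       # nothing to simplify
    if CONTROLLING[kind] in consts:
        # Rule 1: controlling constant -> tie the output to Vdd or Ground.
        return ("CONST", OUTPUT_ON_CONTROL[kind])
    if len(live) >= 2:
        # Rule 2: non-controlling constant, more than two inputs -> same function,
        # one less input (a smaller gate from the library).
        return (kind, live)
    # Rule 3: non-controlling constant on a two-input gate -> inverter or buffer.
    return ("INV" if kind in INVERTING else "BUF", live)

print(simplify("NAND", ["a", 0]))       # ('CONST', 1)   -> tie to Vdd
print(simplify("NOR", ["a", "b", 0]))   # ('NOR', ['a', 'b'])
print(simplify("AND", ["a", 1]))        # ('BUF', ['a'])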
Inverter Chain Collapsing. Inverter chains are commonly encountered in cir-
cuits, especially in the cases where speed is critical. Collapsing inverters belonging
to these "speed-up" chains, though advantageous from the point of view of area
and, possibly, power, may have a detrimental effect on the performance of the cir-
cuit. On the other hand, the simplification of gates due to redundancy removal may
produce inverter chains that may be easily eliminated without slowing down the
network. We eliminate inverter chains only in the cases where the transformation
does not increase the critical delay of the original circuit. For each inverter, ~G, obtained through simplification of a more complex gate, we first check if ~G belongs to a chain which can be eliminated. If certain constraints are satisfied, both the inverter ~G and the companion inverter in the chain (i.e., the inverter feeding ~G or the inverter fed by ~G) are removed. In particular, in order to safely remove the
inverter chain:
(1) The first inverter cannot have multiple fanouts.
(2) The load at the output of the inverter chain must not be greater than the load
currently seen on the gate preceding the inverter chain.
The first restriction may be unnecessarily conservative; however, removing it implies
that sometimes extra inverters need to be inserted on some of the fanout branches of
the first inverter, thereby possibly introducing area, power, and delay degradation.
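The two safety conditions can be expressed as a simple predicate, as sketched below; the netlist accessors and the load figures are assumptions made for the example.

def can_collapse(inv1, inv2, fanouts, output_load, preceding_gate_load):
    # (1) The first inverter must not have multiple fanouts.
    if len(fanouts[inv1]) != 1:
        return False
    # (2) The load at the output of the chain must not be greater than the load
    #     currently seen on the gate preceding the chain.
    return output_load[inv2] <= preceding_gate_load

fanouts = {"INV1": ["INV2"]}
output_load = {"INV2": 0.12}
print(can_collapse("INV1", "INV2", fanouts, output_load, preceding_gate_load=0.15))  # True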
4.4 Choosing the Best Network
Adding redundant gates and connections to a circuit may increase area, delay, and
power consumption by an amount that may not be recoverable by the subsequent
step of redundancy removal. Given an assertion, a network is created for each literal
in the implication cube that we elected to save while running the implication
procedure. (We may choose to eliminate an implication from the list of possible
candidates because it will create a cyclic network or may add connections to an
already high-dissipating node.) This new network is obtained by adding the appropriate
redundant logic according to the chosen implication (Section 4.2) and
finding and removing the newly created redundancies (Section 4.3). Power and
delay estimations are then run on each new network. The best network is selected
from them, and used to replace the existing network. The criteria we have used
to carry out the network selection are based on a combination of delay and power
consumption and are discussed in detail in Section 5.
5. EXPERIMENTAL RESULTS
In this section we present the results obtained by applying our optimization procedure
to some combinational circuits from the Mcnc'91 benchmark suite. Experiments
were run within the SIS environment on a SUN UltraSparc 170 workstation
with 300 MB of memory.
The circuits are initially optimized using the SIS script script.rugged and
mapped for either area (using map) or for delay (using map -n 1 -AFG). The library
used to map the circuits contains NAND, NOR, and inverter gates, each of
which allows up to 4 inputs and 5 drive options. In general, gates with larger drive
strength have larger cell area, however these two values do not increase at the same
rate. After mapping, the method of [Bahar et al. 1994] is used to resize gates with
smaller gates where no circuit delay penalty is incurred. This ensures that any gain
made during the experiment is the result of our optimization procedure and not of
an improperly sized gate. The statistics for these circuits are reported in Table 1.
In particular, the number of gates, the area (in \mu m^2), the delay (in nsec), and the power consumption (in \mu W) are shown. Power dissipation is estimated using the
simulation method of [Ghosh et al. 1992].
For each set of experiments our optimization procedure was iteratively applied
to the circuits to find implications to be used for redundancy addition and removal.
After this step, gates in the circuit are again resized without increasing the critical
delay.
Table 1. Circuit Statistics Before Optimization.

Circuit   Initial Statistics (Mapping for Area)    Initial Statistics (Mapping for Speed)
          Gates   Area      Delay   Power          Gates   Area      Delay   Power
9sym      159     236176    17.79   629            276     404840    10.95   1743
clip      104     146624    17.43   427            167     249632    12.29   1205
inc
misex1    50      66352     13.79   192            76      111244    9.94    525
alu4      169     234784    19.93   628            247     344056    14.06   1391
cordic    67      90944     11.64   280            85      123192    9.53    474
cps       897     1286208   40.09   1932           1272    1694412   13.68   2772
5.1 Setting Delay and Power Constraints
The first set of experiments were run to determine how to choose a new network
among a choice of several. That is, how to choose the best implication to apply
to the network. As it was done in [Bahar et al. 1996], network choices may be
based solely on which one has the lowest power dissipation with the constraint that
the delay of the new network has not increased by more than a fixed percentage
(usually 5%). A more robust approach may use a combination of delay and power
to select the best network, or may temporarily allow power to increase so that more
powerful implications may be subsequently applied, thus having a greater impact
on reducing power dissipation. These experiments are discussed in the following
sections.
5.1.1 Power Threshold. One optimization already mentioned in Section 4.4 is the
addition of a power threshold. The basis of a power threshold is the observation
that if the difference in power of two networks is small, better results are obtained
by choosing the network with the smaller delay. Then, as a fall back, if the delays
of the two networks are equal, the difference in power (no matter how small) is
used to make the determination. This heuristic takes advantage of the fact that
the final network will be resized based on the delay of the original network. A large
improvement in the delay gives the resizing algorithm the ability to make significant
additional power gains in the layout of the transistors. It also allows transformations
which may not have been accepted previously because they increased the delay too
much.
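The resulting selection rule can be sketched as follows; the candidate records, units, and default threshold are assumptions for the example.

def better(net_a, net_b, threshold=2.0):        # threshold in the same units as power
    pa, da = net_a["power"], net_a["delay"]
    pb, db = net_b["power"], net_b["delay"]
    if abs(pa - pb) < threshold and da != db:
        return net_a if da < db else net_b      # close in power: take the faster network
    return net_a if pa <= pb else net_b         # otherwise (or equal delay): lower power

cand1 = {"power": 628, "delay": 17.8}
cand2 = {"power": 629, "delay": 16.9}
print(better(cand1, cand2))                 # picks cand2: within the threshold, so delay decides
print(better(cand1, cand2, threshold=0))    # picks cand1: a purely power-based choice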
A set of experiments was done to determine an optimal threshold value. Tests were done with values of 1.0 to 2.75 \mu W at intervals of 0.25 \mu W on all tested circuits for an area mapping. A run was also done at a threshold of 0, which is a run based on power alone. Figure 8 shows an optimal threshold value of 2 \mu W.
This result is reasonable because, as discussed earlier in the paper, the final power
is affected by two variables, power and delay. Also, allowing the power to increase
slightly may create new implications that lead to even greater decreases in power
dissipation.
[Figure 8 plots total power against the threshold value, for thresholds from 0.00 to 2.75.]
Fig. 8. Power Dissipation for Given Threshold Value.
5.1.2 Delay Tolerance. Another parameter that can be varied is the delay tol-
erance, which is defined as the allowable percent increase in the delay of the final
circuit from the original circuit. The interesting observation is that raising the delay
tolerance does not always result in a slower final network. This is because there
are often many transformation choices made before the final network is obtained.
Transformations that increase the delay are often offset by other transformations
that decrease it. Yet, the increase in the delay tolerance increases the number of
networks to choose from. In other words, like the power threshold, increasing the
delay tolerance increases the probability that a power saving implication will be
found.
There are several observations which can be made from the data. First, increasing
the delay tolerance does, on average, have the effect of increasing the delay. How-
ever, depending on the circuit, allowing more flexibility with delay per iteration
can allow one to obtain a final circuit that is both lower in power and faster than
the original circuit. Second, greater success was found when testing delay-mapped
circuits compared to the area-mapped circuits. This is reasonable, since the delay
mapped circuits are by definition designed to achieve a minimum delay. From this
extreme, a small sacrifice in delay produces a relatively large power savings over
area mapped circuits.
The results of the experiments are shown in Figures 9 and 10. It can be seen
that for an increase in delay tolerance, the average delay does go up. Also, after
a certain delay tolerance level, it can be seen that further decreases in power are
small to insignificant (on the graphs a delay tolerance of 200 can be interpreted as
infinite.)
[Figure 9 plots total power and normalized delay against delay tolerance.]
Fig. 9. Power/Delay Tradeoff Curves for Varying Delay Tolerance (Circuits Mapped for Area).
[Figure 10 plots total power and normalized delay against delay tolerance (100 to 200).]
Fig. 10. Power/Delay Tradeoff Curves for Varying Delay Tolerance (Circuits Mapped for Speed).
Table 2. Statistics After Redundancy Addition and Removal (Circuits Mapped for Area).

Circuit   Gates   Area     Delay   Power   Imps   Obs   \DeltaP   \DeltaD   \DeltaA
9sym      178     241744   18.34   483     98     48    0.77      1.03      1.02
clip      88      117392   15.48   274     43     14    0.64      0.89      0.80
inc       97      127136   22.06   259     67     24    0.73      0.92      1.08
alu4      126     163792   19.38   354     74     28    0.56      0.97      0.70
cordic
Average                                                 0.72      0.96      0.93
Table 3. Statistics After Resizing Gates from Table 2. Changes in power, delay, and area are given relative to those shown in Table 1, columns 2-5.

Circuit   Area     Delay   Power   \DeltaP   \DeltaD   \DeltaA
9sym      241744   18.06   470     0.75      1.02      1.02
clip      117392   15.44   255     0.60      0.89      0.80
inc       127136   22.32   236     0.66      0.93      1.08
rd53      40832    10.93   111     0.74      1.04      0.93
cordic    79344    12.12   197     0.70      1.04      0.87
Average                            0.68      0.96      0.93
5.2 Individual Experiments
From the results obtained in the first set of experiments, we now show the individual
power, delay, and area characteristics for each circuit after redundancy addition and
removal and resizing is complete. For all these experiments, the recursion level for
finding implications was limited to 1 (i.e., maxlevel = 1 in Figure 6). Implications
were applied using a power threshold of 2 \mu W and a delay tolerance of 5% above
the original circuit delay. As before, after redundancy addition and removal, gates
in the circuit are resized without increasing the critical delay. That is, the critical
delay of the final circuits was never greater than 5% above that reported in Table 1.
Tables 2 through 5 give the final statistics for the circuits. In Tables 2 and 4 we report the results after applying redundancy addition and removal (Table 2 starts with
Table 4. Statistics After Redundancy Addition and Removal (Circuits Mapped for Speed).

Circuit   Gates   Area     Delay   Power   Imps   Obs   \DeltaP   \DeltaD   \DeltaA
clip      134     195692   12.50   828     71     24    0.69      1.02      0.78
inc       115     158224   16.09   450     71
misex1    68      94656    10.38   370     43     19    0.70      1.04      0.85
cordic
Average                                                 0.73      1.03      0.83
Table 5. Statistics After Resizing Gates from Table 4. Changes in power, delay, and area are given relative to those shown in Table 1, columns 6-9.

Circuit   Area     Delay   Power   \DeltaP   \DeltaD   \DeltaA
9sym      354844   11.48   1351    0.78      1.05      0.88
clip      170752   12.21   496     0.41      0.99      0.68
inc       146160   16.37   295     0.46      1.00      0.82
misex1    92336    10.27   331     0.63      1.03      0.83
alu4      228288   13.99   690     0.50      1.00      0.66
cordic    109040   9.89    420     0.89      1.04      0.89
Average                            0.64      1.02      0.80
circuits mapped for area and Table 4 starts with circuits mapped for speed). The
number of accepted implications is shown in the column labeled Imps. Of these, the column labeled Obs shows how many of them are observability implications. The relative changes in power, delay, and area are shown in the columns labeled \DeltaP, \DeltaD, and \DeltaA (e.g., a 0.75 in the \DeltaP column indicates a 25% reduction in power compared
to that given in Table 1). The results after the final step of gate resizing are shown
in
Tables
3 and 5 respectively. Changes in power, delay, and area are given relative
to those shown in Table 1.
Notice that for circuits shown in Tables 2 and 3, the change in area remains
the same before and after gate resizing. Since these circuits are originally mapped
for area optimization, most of the gates were already at or near minimum size.
Therefore, even after resizing, no additional saving in area is possible. However, a slight improvement in power and delay is still possible with resizing since, in our library, the cell area for some gates is the same for different drive strengths.
The effectiveness of our method is shown by the presented results. For example,
in the case of mapping for area, a 49% power reduction was obtained for circuit
rd84. On the other hand, in the case of circuits mapped for speed, a 64% power
savings was obtained for benchmark bw; on average a power savings of 34% was
obtained for area and speed mapped circuits combined.
It is interesting to point out the relationship between power reduction and circuit
delay and area. While the circuits averaged a 36% reduction in power, delay increased
by only 2% for speed mapped circuits and decreased by 4% for area mapped
circuits. However, for a few examples, delay decreased significantly. For instance,
in
Table
3, circuit bw mapped for area showed a 31% decrease in power along with a
22% decrease in delay. These results help emphasize that low power does not always need to come at the expense of reduced performance. In addition, it is not always
the case that smaller devices must be used to obtain lower-power dissipation. For
example, for circuit C432 in Table 3, area increased by 1%, but power dissipation
decreased by 37%. This result emphasizes the need to consider switching activity
when optimizing for power.
6. CONCLUSIONS AND FUTURE WORK
To be successful, power optimization needs to be driven by power analysis. By
performing power analysis directly on a technology-mapped circuit, we can selectively
target the search for implications to areas (i.e., assertion functions) that
indicate promise in reducing power dissipation. Starting from an appropriate assertion
gate, low-power minimization is achieved through network transformations,
which are based on implications obtained using symbolic, BDD-based computation.
Our method has been shown to obtain up to a 64% decrease in power dissipation
and an average of 34% power reduction for all circuits.
Although all circuits presented in the results were of small to moderate size,
this method can be expanded to larger circuits as well. With larger sized circuits,
however, execution time may increase significantly. The execution bottleneck is not
the BDD-based algorithm to find the implications, but rather the ATPG algorithm
within SIS used to identify redundant connections. Other redundancy identification
and removal techniques, such as those presented in [Iyer and Abramovici 1994] may
be used to alleviate this bottleneck.
As future work, we would like to take advantage of the more general implications
mentioned in our symbolic algorithm. This enhancement should allow us to reduce
power dissipation further. In addition, we are working on identifying more powerful
transformations (other than simple inverter chain collapsing) which may be applied
to the circuit in order to further reduce area and power at no delay cost. These
transformations may include application of DeMorgan's Law or expanding the types
of gates included in our library. For instance, it may be advantageous to collapse,
say, a NAND and an inverter into an AND, or, NAND/inverter clusters into complex
gates. Finally, we are working on refining our method of selecting an assertion gate
and finding a suitable sub-network over which to search for implications.
ACKNOWLEDGMENTS
We would like to thank Fabio Somenzi and Gary Hachtel for their many helpful
comments and suggestions made on the first draft.
--R
Symbolic computation of logic implications for technology-dependent low-power synthesis
In IEEE International Symposium on Low Power Electronics and Design (August
A symbolic method to reduce power consumption of circuits containing false paths.
Boolean techniques for low-power driven re-synthesis
In IEEE/ACM International Conference on Computer Aided Design (November
Minimizing power consumption in digital cmos circuits.
Optimizing power using transformations.
Perturb and simplify: Multi-level boolean network optimizer
Redundancy identification/removal and test generation for sequential circuits using implicit state enumeration.
New algorithms for gate sizing: A comparative study.
Sequential logic optimization by redundancy addition and removal.
Estimation of average switching activity in combinational and sequential circuits.
Logic extraction and factorization for low power.
Advanced verification techniques based on learning.
Is redundancy necessary to reduce
Hannibal: An efficient tool for logic verification based on recursive learning.
In IEEE/ACM International Conference on Computer Aided Design (November
In IEEE/ACM International Conference on Computer Aided Design (November
Recursive learning: An attractive alternative to the decision tree for test generation in digital circuits.
Reducing power dissipation after technology mapping by structural transformations.
Logic clause analysis for delay opti- mization
Syclop: Synthesis of CMOS logic for low power ap- plications
Sequential circuits design using synthesis and optimization.
On average power dissipation and random pattern testability of cmos combinational logic networks.
Technology mapping for low power.
Global flow analysis in automatic logic design.
Technology decomposition and mapping targeting low power dissipation.
Logic synthesis and optimization benchmarks user guide version 3.0.
--TR
Graph-based algorithms for Boolean function manipulation
Is redundancy necessary to reduce delay
Estimation of average switching activity in combinational and sequential circuits
Technology decomposition and mapping targeting low power dissipation
Technology mapping for lower power
Perturb and simplify
Multi-level logic optimization by implication analysis
A symbolic method to reduce power consumption of circuits containing false paths
Multi-level network optimization for low power
Logic extraction and factorization for low power
Advanced verification techniques based on learning
Logic clause analysis for delay optimization
Boolean techniques for low power driven re-synthesis
Two-level logic minimization for low power
New algorithms for gate sizing
Reducing power dissipation after technology mapping by structural transformations
Symbolic computation of logic implications for technology-dependent low-power synthesis
Sequential logic optimization by redundancy addition and removal
Re-mapping for low power under tight timing constraints
High-level power modeling, estimation, and optimization
On average power dissipation and random pattern testability of CMOS combinational logic networks
Sequential Circuit Design Using Synthesis and Optimization
Recursive Learning
--CTR
Luca Benini , Giovanni De Micheli, Logic synthesis for low power, Logic Synthesis and Verification, Kluwer Academic Publishers, Norwell, MA, 2001
L. E. M. Brackenbury , W. Shao, Lowering power in an experimental RISC processor, Microprocessors & Microsystems, v.31 n.5, p.360-368, August, 2007 | automation;logic design;design synthesis |
348058 | Efficient optimal design space characterization methodologies. | One of the primary advantages of a high-level synthesis system is its ability to explore the design space. This paper presents several methodologies for design space exploration that compute all optimal tradeoff points for the combined problem of scheduling, clock-length determination, and module selection. We discuss how each methodology takes advantage of the structure within the design space itself as well as the structure of, and interactions among, each of the three subproblems. (CAD) | INTRODUCTION
For many years, one of the most compelling reasons for developing high-level synthesis
systems [Gajski et al. 1994] [De Micheli 1994] has been the desire to quickly
explore a wide range of designs for the same behavioral description. Given a set
of designs, two metrics are commonly used to evaluate their quality: area (ideally
total area, but often only functional unit area), and time (the schedule length, or
latency). Finding the optimal tradeoff curve between these two metrics is called
design space exploration.
Design space exploration is generally considered too difficult to solve optimally
in a reasonable amount of time, so the problem is usually limited to computing
either lower bounds [Timmer et al. 1993] or estimates [Chen and Jeng 1991] on the
optimal tradeoff curve for some set of time or resource constraints. Moreover, the
design space is usually determined by solving only the scheduling and functional
unit allocation subproblems.
The design space exploration methodology described here goes beyond traditional
design space exploration in several ways. First, all optimal tradeoff points
are computed so that the design space is completely characterized. Second, these
optimal tradeoff points represent optimal solutions to the time-constrained scheduling
(TCS) and resource-constrained scheduling (RCS) problems, rather than lower
bounds or estimates. Third, the tradeoff points are computed in a manner that
supports more realistic module libraries by incorporating clock length determination
and module selection into the methodology. Finally, these tradeoff points are
This material is based upon work supported by the National Science Foundation under Grant
No. MIP-9423953.
addresses: Stephen A. Blythe, Department of Computer Science, Rensselaer Polytechnic
Institute, Troy, NY 12180; Robert A. Walker, Department of Mathematics and Computer Science,
Kent State University, Kent, OH 44242
[Figure 1 plots area against latency for an example design space; the Pareto points are marked, and A_min denotes the minimum-area design.]
Fig. 1. Example design space showing Pareto points. The shaded regions show the two distinct clusters of Pareto points that many tradeoff curves exhibit.
computed in an efficient manner through careful pruning of the search space during
the design cycle. The resulting methodology can also be extended to include
additional subproblems.
1.1 The Design Space
The process of exploring the design space can be viewed as solving either the time-constrained
scheduling (TCS) problem (minimizing the functional unit area) for a
range of time constraints, or the resource-constrained scheduling (RCS) problem
(minimizing the latency) for a range of resource constraints. Although there is a
tradeoff between latency and area, the tradeoff curve is not smooth due to the finite
combinations of the library modules available [McFarland 1987].
Consider the design space shown in Figure 1 - this curve can be described by the set of points {(T, f(T))}, where f(T) is the minimum area required for a given time constraint T (i.e., the optimal solution to that TCS problem). To ensure that this curve is completely characterized, one could exhaustively solve the TCS problem optimally for every time constraint T from T_min (the critical path length) to T_max (the time constraint corresponding to the module selection / allocation with the minimum area). However, this brute-force approach is not very efficient.
Fortunately, the number of points needed to fully characterize the optimal trade-off curve is much smaller. The curve can be completely characterized by the subset of optimal tradeoff points (shown by black dots in Figure 1) - those points for which there is no design with a smaller latency and the same area, and no design with a smaller area and the same latency. Such points are called Pareto points [De Micheli 1994] [Brayton and Spence 1984], and can be formally defined as follows: a point (T, f(T)) is a Pareto point if and only if there is no other achievable design point (T', A') with T' <= T and A' <= f(T).
Therefore, the design space exploration problem can be solved more efficiently by
Efficient Optimal Design Space Characterization Methodologies ffl 3
finding only the Pareto points in the design space. Furthermore, many optimal
tradeoff curves contain two distinct clusters of such Pareto points, as shown by the
shaded regions in Figure 1: one where the latency is small and the area is large,
and another where the area is small and the latency is large.
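Identifying the Pareto points of a finite set of design points is straightforward, as the Python sketch below illustrates; the (latency, area) values are invented and do not correspond to a particular benchmark.

def pareto(points):
    """Keep (T, A) only if no other point is at least as good in both dimensions."""
    keep = []
    for (t, a) in points:
        dominated = any((t2 <= t and a2 <= a) and (t2, a2) != (t, a) for (t2, a2) in points)
        if not dominated:
            keep.append((t, a))
    return sorted(keep)

designs = [(600, 4480), (700, 3200), (800, 1760), (900, 1760), (1300, 1600)]
print(pareto(designs))   # [(600, 4480), (700, 3200), (800, 1760), (1300, 1600)]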
This paper explores several approaches to find all the Pareto points in an efficient
manner. First, Section 2 describes a basic methodology that explores the latency
axis to find the Pareto points in the design space through repeatedly applying a
TCS methodology. Section 3 discusses how to extend this problem to incorporate
the clock length determination problem, and Section 4 discusses the incorporation
of module selection. In Section 5, a related approach based on solving the RCS
problem repeatedly while taking advantage of the structure of the module selection
problem is discussed, and a comparison is made with the TCS based method.
Section 6 examines the advantages and disadvantages of combining the two
approaches in a manner similar to Timmer's bounding methodology [Timmer et al.
1993]. Sections 7 and 8 describes techniques for pivoting between the TCS and
RCS-based methodologies to take advantage of the Pareto point clusters described
above. In Section 9, results for a fairly complex module library are presented,
and why this pivoting method works well for such a library is discussed. Lastly,
Section 10 gives a summary of this work and suggests some future directions that
it may take.
2. LATENCY-AXIS EXPLORATION
To find all of the Pareto points, either the TCS problem could be solved repeatedly
for various time constraints, or the RCS problem could be solved repeatedly
for various resource constraints. Our methodology repeatedly solves the TCS
problem 2 , which leads to two subproblems: (1) determining which time constraints
to explore, and (2) determining how to efficiently explore the design space at each
time constraint.
Ideally, we want to avoid exhaustively searching all time constraints in the feasible range [T_min, T_max]. If the module set and clock length are specified a priori, then
the TCS problem need only be solved for those time constraints that are a multiple
of the clock length, since any other time constraint could be replaced by the smaller
of the two time constraints that it would lie between without any increase in area.
As a simple example, consider the design space exploration problem for the DIFFEQ example [Paulin and Knight 1989], using library A from Table 1 (the "trivial" library 1 found in [Timmer et al. 1993]) and a clock length of 100. The minimum time constraint is 600 (the length of the critical path), and the maximum time constraint is 1300 (the latency required for a feasible schedule with 1 mult and 1 alu1), so the only time constraints that must be explored are those in the set {600, 700, 800, 900, 1000, 1100, 1200, 1300}.
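For a fixed clock length, this candidate set can be generated directly, as in the short Python sketch below, which reproduces the set above.

clock, Tmin, Tmax = 100, 600, 1300
candidates = [t for t in range(Tmin, Tmax + 1) if t % clock == 0]
print(candidates)   # [600, 700, 800, 900, 1000, 1100, 1200, 1300]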
Given that set of time constraints, our Voyager design space exploration system
[Chaudhuri et al. 1997] efficiently characterizes the design space as follows.
The main loop (see Figure 2) scans the time constraints in the direction of in-
2 Note that, although we are solving only the TCS problem, this methodology is not limited to
solving only that problem, and could be extended to include register allocation, interconnect
allocation, control unit design, etc.
Design Space Exploration:
    areacur <- MAXINT
    compute Tmin and Tmax
    compute all candidate time constraints T_i in [Tmin, Tmax]
    for each T_i from Tmin to Tmax
        if (no feasible ASAP schedule exists for T_i)
            not a feasible schedule
        else
            compute the lower bound lb on f(T_i)
            if (lb >= areacur)
                not a Pareto point /* nP-lb */
            else
                compute the upper bound ub on f(T_i)
                if (ub == lb)
                    Pareto point; areacur <- lb /* P-lb */
                else
                    compute the LP-relaxed lower bound rlb on f(T_i)
                    if (rlb >= areacur)
                        not a Pareto point /* nP-rlb */
                    else if (rlb == ub)
                        Pareto point; areacur <- rlb /* P-rlb */
                    else
                        calculate the ILP solution
                        if (ILP solution < areacur)
                            Pareto point; areacur <- ILP solution /* P-ILP */
                        else
                            not a Pareto point /* nP-ILP */
Fig. 2. Voyager's main design space exploration loop
creasing latency. At each time constraint, an ASAP schedule is first calculated to
determine if a feasible schedule exists for that time constraint and clock length. If
so, then it uses a heuristic to compute a lower bound on the functional unit
area; if this area is the same as or larger 3 than the previous area, then that solution
is not a Pareto point. This is the case for time constraints 900, 1000, 1100, and
1200 in
Figure
3.
However, if the lower bound is smaller than the previous area, then it is a potential
3 In the problem as specified so far, the area will never be larger. However, it may be larger
when the clock length determination and module selection problems are incorporated into the
methodology as described in Sections 3 and 4.
Efficient Optimal Design Space Characterization Methodologies ffl 5
Table 1. Library A - Timmer's ``trivial'' library 1

MODULE   AREA   DELAY (ns)   OPERATIONS
mult     1440   200          {*}
alu1     160    100          {+, -, <}

[Figure 3 plots area against latency (600 to 1400) for DIFFEQ using library A, showing the optimal (Pareto-based) curve, the optimal solutions, and the lower bounds.]
Fig. 3. Results from DIFFEQ using library A
Pareto point. The methodology then computes an upper bound on the area, and
compares it to the lower bound. If the two are equal, then the point is an optimal
solution, and a Pareto point (e.g., time constraint 800 in Figure 3); if not, then
the results are still inconclusive (e.g., time constraint 700). It then uses a tighter
(but more computationally-intensive) FU lower-bounding method based on LP-
relaxation, and tries this procedure again (in this example, determining that time
constraint 700 corresponds to a Pareto point). If this method also fails, then it
solves a carefully-developed ILP formulation [Chaudhuri et al. 1994] to determine
the optimal solution, using the bounds determined earlier to reduce the search space
for that solution.
Thus our base methodology quickly determines whether or not each time constraint
corresponds to a Pareto point by carefully pruning the search space. It first
computes a small set of time constraints to explore. Increasingly tighter heuristics
are then used to try to determine if each time constraint corresponds to a
Pareto point. Only if those heuristics fail is a more computationally-intensive ILP
formulation used.
Unfortunately, assuming the module set and clock length are specified a priori
is unrealistic with complex module libraries. Accordingly, Section 3 describes how
this base methodology can be extended to include clock length determination and
Section 4 describes the incorporation of module selection.
3. ADDING CLOCK DETERMINATION
As described earlier, our base methodology explores a set of time constraints, determining
whether or not the solution to each TCS problem is a Pareto point. The
problem was simplified by assuming the clock length was known a priori, whereas
recent work has shown that not only is determining the system clock length a difficult
problem [Chen and Jeng 1991; Chaiyakul et al. 1992; Narayan and Gajski
1992; Corazao et al. 1993; Jha et al. 1995; Chaudhuri et al. 1997]), but the choice
of the clock length has a significant impact on the resulting design. Therefore,
the problem of clock length determination must be folded into the design space
exploration problem.
3.1 Prior Work
As described in [Chaudhuri et al. 1997], the clock determination problem is usually
ignored in favor of ad hoc decisions or estimates. For example, several early synthesis
systems used the delay of the slowest functional unit as the estimated clock
length, a choice which favored the use of chaining and disallowed multi-cycling.
A heuristic method for finding the clock length was given in [Narayan and Gajski
1992], but the result may not be optimal.
To guarantee that the optimal clock length is chosen, 4 the scheduling problem
could be solved repeatedly for every possible clock length - a very computationally-intensive
task. Fortunately, such an exhaustive search is not necessary, as the set of
candidate clock lengths to be scheduled can be reduced. In [Corazao et al. 1993], one
method for reducing that set is given. A tighter method was introduced in [Chen
and Jeng 1991], and later proven correct in [Jha et al. 1995] and [Chaudhuri et al.
1997] - this method computes a small set of candidate clock lengths (one of which
must be the optimal clock length) by taking the ceiling of the integral divisors of
each of the functional unit delays.
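The integral-divisor computation just described can be sketched as follows (our own illustration, not code from the cited work): for each functional-unit delay, every ceiling of an integral divisor that is at or above the technology limit on the shortest clock is a candidate clock length.

    import math

    def candidate_clock_lengths(delays, min_clock):
        """Ceiling of the integral divisors of each functional-unit delay, >= min_clock."""
        candidates = set()
        for d in delays:
            j = 1
            while True:
                c = math.ceil(d / j)
                if c < min_clock:
                    break
                candidates.add(c)
                j += 1
        return sorted(candidates, reverse=True)

    # For library B's delays (163 ns and 48 ns) and a 17 ns technology limit:
    #   candidate_clock_lengths([163, 48], 17)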
3.2 Pruning the Candidate Clock Lengths
Even these integral-divisor methods can lead to a set of candidate clock lengths so
large that it becomes too time-consuming to solve the TCS problem for each clock
length at each time constraint. Fortunately, the set of candidate clock lengths can
be reduced even further, as described below.
Definition 1. For a given clock length c, the slack s_k(c) of a module of type k with
execution delay d_k is given by s_k(c) = ⌈d_k / c⌉ · c − d_k.
Theorem 1. Given a clock length c, if there exists a clock length c′ such that
s_k(c′) ≤ s_k(c) for all module types k in the current module selection, then c can be
replaced by c′ without lengthening the schedule.
Proof. Since s_k(c′) ≤ s_k(c) for all modules k, the same inequality will hold for
operations in a schedule using these modules. Thus all operations in the schedule
using c could be scheduled at least as soon, if not sooner, in a schedule using c′,
because all operations will be capable of executing faster (or equally as fast) in
the schedule using c′. Thus, changing the clock length to c′ can only improve the
schedule. □
4 Actually, this is only the data path component of the system clock length; the final clock length
includes controller and interconnect delays as well.
MODULE   AREA   DELAY (ns)   OPERATIONS
alu1     100    48           {+, -, <}
Table 2. Library B - Narayan's library
Clock Length   slack(*)   slack(+)   replaced by
28             5          8          24/55
Table 3. Slack values found in library B
To demonstrate the use of this theorem, consider library B, shown in Table 2 (the
VDP100 library from [Narayan and Gajski 1992], augmented with areas similar to
those of library A). Assuming a technology limit of 17ns on the shortest clock length,
integral divisor methods give the set CK
of candidate clock lengths, with the corresponding slack values shown in Table 3.
Consider the clock length of 33ns, found as ⌈163/5⌉ = 33. When a multiplier is
scheduled using this clock length, there will be a slack of 2ns. There are several
clock lengths whose slack for the multiplier is smaller, but the slack corresponding
to the alu1 is always larger. However, a clock length of 55ns has the same slack
for the multiplier, and less slack for the alu1. Therefore, Theorem 1 says that any
schedule that uses a clock length of 33ns can instead use a clock length of 55ns
without lengthening the schedule (and without increasing the number of functional units).
When Theorem 1 is applied to the full set of candidate clock lengths, the set is
reduced to the pruned set CK′ = {. . ., 24}. Note that when two sets of slack values are
equivalent, the shorter clock length is dropped since it would tend to result in a
larger controller.
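The slack computation and the pruning rule of Theorem 1 can be sketched as follows (our own illustration; the tie-breaking choice of keeping the longer clock length follows the discussion above).

    import math

    def slack(d, c):
        """Unused time in the last cycle of a module with delay d under clock length c."""
        return math.ceil(d / c) * c - d

    def prune_clock_lengths(candidates, delays):
        """Drop any clock c for which some other candidate has slack no larger for every
        module delay (Theorem 1); on exact ties, the shorter clock length is dropped."""
        kept = []
        for c in candidates:
            dominated = False
            for c2 in candidates:
                if c2 == c:
                    continue
                if all(slack(d, c2) <= slack(d, c) for d in delays):
                    strictly_better = any(slack(d, c2) < slack(d, c) for d in delays)
                    if strictly_better or c2 > c:   # tie: keep the longer clock
                        dominated = True
                        break
            if not dominated:
                kept.append(c)
        return kept

For library B, for example, slack(163, 33) and slack(163, 55) are both computed from the multiplier delay of 163 ns, reproducing the values discussed above.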
3.3 Exploring the Candidate Clock Lengths
Once the pruned set CK 0 of candidate clock lengths has been computed, the
integral multiples of each of those clock lengths give the time constraints to explore.
Then, for each such time constraint and candidate clock length, the methodology
outlined in Figure 2 can be applied.
clocks   time constr.   design points   nP-lb   nP-rlb   nP-ILP   P-lb   P-rlb   P-ILP
Table 4. Statistics from solving DIFFEQ using library B
Fig. 4. Results from DIFFEQ using library B (area vs. latency, showing the optimal Pareto-based curve, the optimal solutions, and the lower bounds).
Design Space Exploration w/ Clock Determination:
    area_cur ← MAXINT
    compute pruned set CK′ of candidate clock lengths
    compute Tmin and Tmax
    for each c_j in CK′
        compute all candidate time constraints T_i in [Tmin, Tmax]
    for each T_i from Tmin to Tmax
        using each c_j in CK′ inducing T_i
            determine if (T_i, f(T_i)) is a Pareto point (see Figure 2)
Fig. 5. Voyager's design space exploration loop with clock determination
The efficiency of the search at each time constraint can be improved by observing that each time constraint was derived
as an integral multiple of one or more clock lengths, so only those inducing clock
lengths need be explored at that time constraint. The resulting methodology is
outlined in Figure 5.
Using library B and the DIFFEQ example, this methodology generates the design
space shown in Figure 4. From the pruned set CK′ of candidate clock lengths, the
candidate time constraints were generated, and 50 time constraint / clock
length pairs were explored (note that there was only a single time constraint with
more than one candidate clock length). Two corresponded to infeasible schedules,
while the other 48 had to be examined to determine if they were Pareto points.
As Table 4 shows (the headings of the last six columns correspond to labels in
Figure 2), the vast majority of the solutions were determined to be either Pareto
or non-Pareto points using the bounding heuristics - only two were solved using
the tighter LP-relaxation lower bounding method, and no solutions required the
ILP formulation.
MODULE   AREA   DELAY (ns)   OPERATIONS
mult     1440   200          {*}
alu1     160    100          {+, -, <}
add1     150    100          {+}
alu2     90     200          {+, -, <}
sub2     85     200          {-}
add2     85     200          {+}
Table 5. Library C - Timmer's library 2
Note also that in several cases (time constraints in the range 420-600), the lower
bound differed from the optimal solution, so methods based solely on lower-bounding
would incorrectly characterize the design space.
Finally, Figure 4 also demonstrates the importance of systematically examining
all relevant clock lengths in the design space. At a time constraint of 652, the
inducing clock length of 163ns leads to a solution with an area of 500, whereas the
previous time constraint had a lower area of 400. Although the point (652, 500) is
optimal with respect to its time constraint and a fixed clock length of 163 ns, it is
not a Pareto point, and is thus rejected by the line labeled /* nP-lb */ in Figure 2.
4. ADDING MODULE SELECTION
While adding clock length determination to the base methodology is an important
step toward supporting more complex libraries, the methodology must also be extended
to cover libraries that offer a number of possible module sets. Again, we
would prefer to avoid an exhaustive search of all possible module sets, yet we must
ensure that we do not miss any combination of a time constraint, clock length, and
module set that corresponds to a Pareto point.
4.1 Prior Work
Over the years, a variety of methods have been employed to determine the appropriate
module set. One method, described in [Jain et al. 1988], generates a number
of module sets, and then selects the best one. Another method, presented in [Tim-
mer et al. 1993], computes an initial module set through a MILP formulation, and
determines its validity by scheduling; if no viable schedule is found, then the set
(and its allocation) are updated, and the scheduling process is repeated. 5 As with
some of the previous work on clock length determination, using such techniques
to determine a single module set before (and independently of) scheduling cannot
guarantee a globally optimal solution. Instead of trying to find a single module
set, the method found in [Chen and Jeng 1991] exhaustively explores all possible
module sets. Since this method also exhaustively explores all integral divisor based
clock lengths, its computational complexity is too large for optimal scheduling, so
only estimates are computed.
4.2 Exploring Different Module Sets
Fortunately, such an exhaustive search is not necessary.
5 This method also incorporates the type mapping problem into the MILP formulation - something
our methodology does not yet handle. See Section 9.
Fig. 6. Results from DIFFEQ using library C (area vs. latency, showing the optimal Pareto-based curve, the optimal solutions, and the lower bounds).
MODULE   AREA   DELAY (ns)   OPERATIONS
mul
add      50     50           {+}
sub
Table 6. Library D - an artificial complex library
Many of the possible module sets can be eliminated, since they are incapable of
implementing all the operation types found in the data flow graph. For example,
in the case of the DIFFEQ, the module set must be capable of performing the
operations {+, -, *, <}; any module sets that do not can be eliminated.
Moreover, the number of module sets that must be explored at each time constraint
can be reduced (as was the number of candidate clock lengths) by observing
that each time constraint was derived as an integral multiple of a clock length derived
from one or more specific modules. Therefore, only those module sets that
contain at least one of those modules must be explored at that time constraint.
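This filtering step can be sketched as follows (our own illustration; the library encoding is assumed). A module set is kept only if it covers every operation type appearing in the DFG; the additional per-time-constraint filter, which requires the set to contain at least one module that induced the constraint's clock length, can be layered on top in the same way.

    from itertools import chain, combinations

    def all_module_sets(library):
        """library: dict mapping module name -> set of operation types it implements."""
        names = sorted(library)
        return chain.from_iterable(combinations(names, r) for r in range(1, len(names) + 1))

    def feasible_module_sets(library, dfg_ops):
        """Keep only module sets that implement every operation type used in the DFG."""
        feasible = []
        for subset in all_module_sets(library):
            covered = set()
            for m in subset:
                covered |= library[m]
            if dfg_ops <= covered:
                feasible.append(set(subset))
        return feasible

    # e.g. for library A:
    #   feasible_module_sets({'mult': {'*'}, 'alu1': {'+', '-', '<'}}, {'+', '-', '*', '<'})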
Using library C, shown in Table 5 (library 2 from [Timmer et al. 1993]), and the
DIFFEQ example, the methodology described above generates the design space
shown in Figure 6. There are 32 possible module sets, but only 1 pruned candidate
clock length (100ns) and 9 time constraints, resulting in 288 TCS problems to solve.
A number of these resulted in infeasible schedules (i.e., no solution was possible), and as before, the
vast majority of the solutions were determined to be either Pareto or non-Pareto
points using the bounding heuristics.
library   clocks   time constr.   design points   nP-lb   nP-rlb   nP-ILP   P-lb   P-rlb   P-ILP
Table 7. Statistics from solving DIFFEQ using libraries C and D
Fig. 7. Results from DIFFEQ using library D (area vs. latency, showing the optimal Pareto-based curve, the optimal solutions, and the lower bounds).
As another example, consider library D, shown in Table 6 (an artificial library
slightly less complex than library C, but with more realistic module delays). Using
that library, and the DIFFEQ example, the methodology described above generates
the design space shown in Figure 7. Here there were 16 possible module sets, 9
integral-divisor candidate clock lengths, and 131 time constraints - almost 19,000
combinations. Even after pruning the candidate clock lengths, there were 6 pruned
candidate clock lengths, and 93 time constraints - almost 9,000 combinations.
However, the methodology had to solve only 1522 TCS problems (an average of
1.35 clock lengths and 11.27 module selections at each time constraint). 183 of
those were infeasible, and again, the vast majority of the solutions were determined
to be either Pareto or non-Pareto points using the bounding heuristics. Moreover,
this entire procedure took only 1.5 hours of wall-clock time. Without such a careful
pruning of the search space, this problem could not have been solved optimally in
a reasonable amount of time.
Furthermore, with these 4 module delays, there are many resulting designs that
lie above the optimal tradeoff curve. Although these designs are optimal solutions
for a particular clock length and module set, they are not Pareto points, so it
is very important that the methodology correctly explores the design space. For
example, [Timmer et al. 1993] presents a method that begins at time constraint Tmax
and alternately performs time and area lower-bounding to find a stair-step tradeoff
curve. Even if that methodology is enhanced to alternate between optimally solving
the resource-constrained and time-constrained scheduling problems, it would only
find the Pareto-based tradeoff curve in the absence of the combined module selection
and clock length determination problem. If this combined problem was included,
the enhanced methodology would fail to find the Pareto-based curve if one of the
points found by time-constrained scheduling is a suboptimal point that lies above
the optimal Pareto-based curve. Such a point (which would have a non-minimal
area) would then be used by resource-constrained scheduling to find the minimal
latency with this (non-minimal) area, thus compounding the problem and giving
an erroneous design curve that actually lies above the optimal area curve based on
the Pareto points.
Module   Area   Delay   Operations
mul
add      50     50      {+}
sub
Table 8. An artificial complex library
5. AREA-AXIS EXPLORATION
Viewing the previous approach as a latency-axis exploration methodology, an alternative
approach is based on the area axis: the Pareto points can be found by
determining a set of area constraints to explore, and then optimally solving the
RCS problem at each area constraint. Again, it is necessary to reduce the number
of constraints to explore, as searching the entire integral range along the area axis
would be prohibitively expensive. Given such a reduced set of area constraints, the
algorithm outlined in Figure 2 can be modified to explore the area axis instead -
starting with the point that has the smallest possible area (and the largest latency)
and continuing until it reaches the point with the largest possible area (and minimal
latency), solving the RCS problem to minimize the schedule latency at each
area constraint. The various solutions can then be determined to be either Pareto
or non-Pareto points using heuristics and exact techniques together in a manner
similar to that described in Section 2.
In much the same way that any one time constraint can be induced by more than
one clock length, each area constraint can correspond to more than one module set
/ resource allocation. Enumerating all possible allocations whose resulting area is
within Amax for each module set gives an initial set of candidate area / resource
constraints that could be used in the area-axis methodology. The size of this initial
set can then be reduced by noting that it may contain overly loose resource constraints
- for example, a resource constraint of 3 adders would be too loose for a
behavioral description with only one addition operation. In general, we can reduce
the set of candidate resource constraints by upper bounding the number of independent
paths in the DFG that could require resources of type T and could possibly
be executed in parallel. The resulting upper bound would then be the maximal
number of type T resources needed in any allocation for any schedule using any
clock length. Although greatly dependent on the module library and DFG in use,
applying even such simple heuristics can reduce the number of area constraints to
explore by 80% - 85%.
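The enumeration of candidate area constraints can be sketched as follows (our own illustration). The per-type upper bounds max_count would come from the DFG-parallelism analysis described above, which is not reproduced here; the library encoding is assumed.

    from itertools import product

    def candidate_area_constraints(module_set, areas, max_count, a_max):
        """Enumerate allocations of the given module set whose total area is within a_max.

        module_set: list of module names
        areas:      dict name -> area of one instance
        max_count:  dict name -> upper bound on useful instances (from DFG parallelism)
        """
        names = list(module_set)
        ranges = [range(1, max_count[n] + 1) for n in names]
        allocations = []
        for counts in product(*ranges):
            area = sum(c * areas[n] for n, c in zip(names, counts))
            if area <= a_max:
                allocations.append((dict(zip(names, counts)), area))
        return sorted(allocations, key=lambda pair: pair[1])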
Given this basic area-axis methodology, the clock length determination and module
selection problems can be incorporated much as they were in the basic latency-
axis methodology. However, in the area-axis methodology, it is easier to solve
module selection than clock length determination at each area constraint, as the
candidate module selections have the more pronounced effect.
          Latency Axis    Area Axis     Timmer-Based
DIFFEQ    14:07  (94)     2:36  (72)    5:13
AR        48:08  (198)    8:50          12:40
Table 9. Results from axis-based and neighborhood-based Timmer-like exploration using the library from Table 8
5.1 Latency-Axis vs. Area-Axis Exploration
Due to effect that the inducing clock lengths have on the other problems, it is easier
to solve the clock length determination problem than it is to solve the module
selection problem at each time constraint in the latency-axis methodology. This is
reflected by the fact that there are often more candidate module selections than candidate
clock lengths at each time constraint. The opposite is true of the area-axis
methods - each area constraint is derived from a module selection and allocation,
without concern for the clock length determination problem. This frequently results
in a single module selection with many clock length candidates at each point
considered along the area axis.
Results for both the latency-axis and area-axis methodologies are given in the
middle two columns of Table 9, which show results for three different benchmarks
(DIFFEQ, AR-lattice, and Elliptic Wave Filter). In each cell of the table, the
execution time 6 is given (as minutes:seconds), along with the total number of
points explored (i.e., the number of TCS or RCS problems solved).
Note that neither approach is universally better than the other. Most of the time,
it was faster to use area-axis exploration, but for the EWF example, several of the
RCS problems were quite time-consuming to solve optimally. However, expressing
those problems as TCS problems tended to be significantly faster, resulting in faster
latency-axis exploration. Furthermore, the simplicity of the module library used
tends to lend itself to giving fewer area constraints, which is a significant factor in
the speed of either method. Thus it is unclear how to determine, a priori, which
axis to choose for exploration.
6. A TIMMER-LIKE EXPLORATION
While both of the axis-based methods described in Section 5 explore the design
space in a reasonably effective manner, neither is universally effective. Furthermore,
each method still has a fairly large number of TCS/RCS problems to solve. To
increase the efficiency further, one approach would be to combine the two methods,
using a search methodology similar to the one employed in [Timmer et al. 1993].
As described in that paper, Timmer's methodology solves only the lower bounding
problem, but it could easily be adapted to solve the scheduling problem in-
stead. When adapted in this manner, Timmer's methodology essentially alternates
between solving TCS and RCS problems, as shown in Figure 8. First, given a clock
length and a minimal area (resource) constraint, the RCS problem would be solved,
minimizing the latency for that resource constraint.
6 The execution times are based on a Sun SPARC-20 running the Solaris Operating System.
Fig. 8. The pitfall of Timmer's method when considering clock length and module set determination (the Timmer curve and Timmer Pareto candidates, produced by alternating TCS and RCS steps, compared against the true optimal curve and true Pareto points).
The resulting latency would
then be used as a time constraint and the corresponding TCS problem solved, minimizing
the number of resources. Then another RCS problem would be solved using
that minimized number of resources, etc. Since every RCS problem solved by this
methodology finds a Pareto point, the number of TCS/RCS problems to be solved
is reduced considerably over the axis-based methods.
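One plausible reading of this alternation is sketched below (our own sketch, for a single fixed clock length). solve_rcs and solve_tcs are hypothetical hooks to optimal resource-constrained and time-constrained schedulers; tightening the time constraint by one step before each TCS call is our assumption about how the walk advances along the staircase.

    def timmer_like_exploration(min_allocation, solve_rcs, solve_tcs, t_min):
        """Alternate RCS and TCS to walk the stair-step tradeoff curve."""
        pareto_candidates = []
        allocation = min_allocation
        while True:
            latency = solve_rcs(allocation)          # minimal latency for this allocation
            pareto_candidates.append((latency, allocation))
            if latency <= t_min:
                break                                # reached the fastest achievable design
            allocation = solve_tcs(latency - 1)      # minimal resources just below that latency
            if allocation is None:                   # no feasible schedule: stop
                break
        return pareto_candidates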
As presented, this Timmer-like methodology does not consider the clock length
determination problem, but as has been shown in [Chaudhuri et al. 1995] [Chen
and Jeng 1991], the clock length has a significant impact on resulting designs. In
fact, failure to incorporate clock length determination can result in overlooking
Pareto points in the complete optimal design space. Because Timmer's methodology
assumes a single clock length, the resulting Pareto points are optimal relative
only to that one clock length and may not fully characterize the design space since
time constraints induced by other clock lengths frequently result in smaller areas
(TCS solutions). This situation is depicted in Figure 8, where Pareto points C and
D were not found with Timmer's methodology, since they correspond to different
clock lengths than the one being used, and Pareto point A (also corresponding to
a different clock length) was missed in favor of the false Pareto point B.
However, the clock length determination problem can be incorporated into Tim-
mer's method by considering a neighborhood of time constraints around each Timmer
Pareto candidate as follows. For each candidate clock length, the induced
time constraint closest to, without exceeding, that Pareto point's time constraint
is found. Then, instead of solving a single TCS problem at a single time con-
straint, the minimum area resulting from solving the TCS problem at each of these
new time constraints is found, and used as the area constraint for the next RCS
problem. Unfortunately, the information from previous schedules can no longer be
used to prune the search space as described in Section 2, which puts this method
at a distinct disadvantage relative to the axis-based methods. This effect can be seen in
the last column of Table 9, where the execution times for our neighborhood-based
Timmer-like methodology can exceed those of the axis-based methodologies, even
though there is a decrease in the number of design points explored.
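The neighborhood formation just described can be sketched as follows (our own illustration): for each candidate clock length, the largest induced time constraint (an integral multiple of that clock) that does not exceed the Timmer Pareto candidate's time constraint is selected.

    def neighborhood_time_constraints(t_candidate, clock_lengths):
        """Map each clock length to its induced time constraint closest to, without
        exceeding, t_candidate; clocks longer than t_candidate induce nothing."""
        return {c: (t_candidate // c) * c for c in clock_lengths if c <= t_candidate}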
Design Space Exploration using Simple Pivoting:
    input percentage of time constraints to explore, perc
    generate candidate time constraints in range [Tmin, Tmax]
    locate T* such that |[Tmin, T*]| = perc · |[Tmin, Tmax]|
    explore [Tmin, T*] using the latency axis methodology
    using A* corresponding to the point (T*, A*):
        generate candidate area constraints in range [Amin, A*]
        explore [Amin, A*] using the area axis methodology
Fig. 9. Voyager's Simple Pivoting Methodology
% of latency axis explored    100     50      ·      ·      ·      ·      0
DIFFEQ (time)                 14:07   3:58    2:24   2:01   1:38   1:49   2:46
DIFFEQ (points explored)      94      52      42     45     44     44     72
AR (time)                     48:08   11:04   7:45   5:57   4:41   9:24   8:50
Table 10. Results from simple pivoting
7. PIVOTING BETWEEN LATENCY-AXIS AND AREA-AXIS EXPLORATION
Another approach to combining latency-axis and area-axis exploration is to consider
the structure of the tradeoff curve. As shown in Figure 1, a large number
of the Pareto points are clustered into two regions: one where the latency is small
and the area is large, and another where the area is small and the latency is large.
This phenomenon is also illustrated in Figure 10. Here, the latency-axis methodology
(exploring the latency axis in the direction of increasing latency) would find
many Pareto points fairly quickly, but would then waste a considerable amount
of time exploring time constraints that do not correspond to Pareto points until
the high-latency cluster of Pareto points is reached. Using the area-axis methodology
(exploring the area axis in the direction of increasing area) has a similar
shortcoming.
However, this shortcoming can be overcome by pivoting between the two axis-
based methods - using the latency-axis methodology to explore the high-area / low-latency
cluster, and using the area-axis methodology to explore the high-latency /
low-area cluster. This process is outlined in Figure 9. When exploring the latency
axis in the direction of decreasing latency, the most obvious method of pivoting
is to simply switch from latency-axis exploration to area-axis exploration after exploring
a certain percentage (perc) of the latency axis. Note that after making this
switch, the area-axis methodology must still explore the area axis in the direction of
increasing area (so that information from previous schedules can be used to prune
the search space as described in Section 2), but now it can stop when it reaches the
last Pareto point found by the latency-axis methodology.
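In Python-like form, the simple pivot of Figure 9 might look as follows (our own sketch; explore_latency_point and explore_area_point stand for the axis-based steps described in the previous sections and are assumed to return the area found, or None, for each constraint).

    def simple_pivot(time_constraints, area_constraints, perc,
                     explore_latency_point, explore_area_point):
        """Explore the first perc of the time constraints along the latency axis, then
        switch to the area axis and stop at the smallest area found so far."""
        cut = int(len(time_constraints) * perc)
        pivot_area = None
        for t in sorted(time_constraints)[:cut]:          # increasing latency
            area = explore_latency_point(t)
            if area is not None:
                pivot_area = area if pivot_area is None else min(pivot_area, area)
        for a in sorted(area_constraints):                # increasing area
            if pivot_area is not None and a >= pivot_area:
                break                                     # reached the latency-axis frontier
            explore_area_point(a)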
The results of performing this pivoting process for various percentages of the
latency axis are presented in Table 10.
Fig. 10. The EWF and AR optimal Pareto-based curves (area vs. latency) when using the library from Table 8.
Note that the 0% column corresponds to
an immediate pivot to area-axis exploration, and the 100% column corresponds
to using solely latency-axis exploration. Not surprisingly, for the tradeoff curves
depicted in Figure 10, the percentages that result in the fastest execution times
are fairly low (10%-20%), since most of the low-latency cluster of Pareto points
are within the first 20% of the latency axis. Unfortunately, however, there is no
consistent percentage that will always correspond to the best pivot point for every
tradeoff curve, regardless of whether the execution time 7 or the number of points
being explored is the quantity being minimized. Thus, a better method for deciding
where to pivot must be found.
8. DYNAMIC PIVOTING
Since the best pivot point cannot be determined a priori, it must be determined
dynamically during the exploration process. Since the tradeoff curve often exhibits
two clusters of Pareto points as described earlier, one approach would be to determine
when a cluster is being left, and pivot while exploring the next few points
that are not members of either cluster. When exploring the latency axis, this pivot
would occur when the curve begins to "flatten out" into a roughly horizontal line.
One simple method of implementing this dynamic pivot is outlined in Figure 11,
in which a window W of constant size (W size ) is kept. This window contains the
last n design points explored, many of which were pruned as non-Pareto points.
If the area of the first element in the window (A1) is not significantly larger than
the area corresponding to the current time constraint (An) at any point during
latency-axis exploration, the current point is selected as the pivot point. In other
words, if a Pareto point has not been found recently, the curve is flattening out and
the pivot from latency-axis exploration to area-axis exploration is made.
7 Once again, the EWF example contains several points that are computationally more expensive
when solved as RCS problems - thus the dramatic execution time increase between 20% and 10%
despite the decrease in points explored.
Design Space Exploration using Dynamic Window Pivoting:
    input tolerance percentage tol of time constraints in window
    generate initial window W = (T1, . . ., Tn)
    explore window W using latency axis methodology
    while A1 is significantly larger than An        (A1 and An taken from the points (T1, A1) and (Tn, An))
        remove T1 from W
        append next Ti to W
        calculate An using TCS method
    generate final window Wa as the area constraints [Amin, An]
    explore Wa using area axis methodology
Fig. 11. Voyager's Dynamic Pivoting Methodology
window size (% of time constraints in window)
DIFFEQ (time)     2:36   1:48    2:23   4:40   5:20   14:07
DIFFEQ (points)   72     44
AR (time)         8:50   10:08   6:20   7:34   8:21   48:08
Table 11. Results from dynamic window-based pivoting
The results of applying this dynamic window-based pivoting are given in Table 11.
To determine when a "significant" change in area was reached, the size of the current
window was compared to the change in area over that window. If the percentage
change in area was smaller than the size of the window as a percentage of the
total number of time constraints, we pivoted to using the area axis; otherwise we
continued using the latency axis. Unfortunately, there was no consistent window
size that yielded the best result for every case. In general, a window size of 10%-15%
of the time constraints seemed to give good results.
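The pivot test described above amounts to the following check (our own sketch): pivot when the relative drop in area across the window is smaller than the window's share of the total number of time constraints.

    def should_pivot(window_areas, window_size, total_time_constraints):
        """window_areas[0] is A1 and window_areas[-1] is An for the current window."""
        a_first, a_last = window_areas[0], window_areas[-1]
        pct_area_change = (a_first - a_last) / a_first
        pct_window = window_size / total_time_constraints
        return pct_area_change < pct_window        # area curve is flattening out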
Looking at Table 11, the first example shown (DIFFEQ) is small enough that
5% of the time constraints is statistically insignificant, leading to results that are
dominated by the area-axis exploration. However, the EWF results give a strong
argument for using dynamic pivoting - here a bad a priori choice of using only
latency-axis exploration or area-axis exploration (as shown in Table 10) could lead
to a significantly larger execution time than 15% dynamic pivoting.
9. FURTHER RESULTS
In all of our results so far, we have used the library presented in Table 8. That
library has a number of different delays, which complicates any design space exploration
methodology that considers clock length determination. However, it has
only two alternatives for each operation type, leading to a fairly small number of
module selection candidates.
Module   Area   Delay   Operations
mul1     500    200     {*}
mul3     800    100     {*}
sub1     100    160     {-, <}
sub2     200    110     {-, <}
add1     90     150     {+}
add3     380    50      {+}
Table 12. A module selection intensive library
          Latency    Area      Timmer    Pivot (15%)
DIFFEQ    44:22      4:37      32:58     6:15
AR        369:32     213:17    313:27    192:17
Table 13. Results when using a library with many module selections

Fig. 12. The EWF and AR optimal Pareto-based curves (area vs. latency) when using the library from Table 12.
Now consider Table 12, which is the opposite: it has fewer unique functional unit
delays, and several of those delays are multiples of each other. Both of these factors
result in fewer resulting candidate clock lengths (for example, several functional
units have 50 as a candidate clock length). However, this library has a much larger
number of module selection candidates.
Results using this library are presented in Table 13. Compared to results using
the previous library when the latency-axis methodology is used, there is significantly
more time being spent exploring the latency axis, since the module selection
problem is now much more difficult and latency-axis exploration gets most of its
time savings due to the structure of the clock length determination problem. How-
ever, for area-axis exploration the results are now generally faster, reflecting the
savings due to considering the structure of the module selection problem. Again,
as with the first library, EWF gives several RCS problems that are time consuming
to solve optimally (while the corresponding TCS problems are not as time consum-
ing), thus dramatically increasing overall run time for the area axis. Note that in
all cases the number of area constraints to solve is much higher - thus the savings
in execution time must result from the fact that there is more structure to each
constraint along the area axis for this library.
As with the prior library, Timmer-like exploration (even our neighborhood-based
Timmer-like exploration) once again fails to produce faster run times for this li-
brary, although in this case the primary contributing factor is not only clock length
determination but the number of possible module selection candidates at each of
the generated time constraints. For AR and EWF, the pivoting method once again
gave the best execution times 8 , but this time it also explored fewer points than the
Timmer-like method!
Finally, note that the resulting design spaces for these two benchmarks are also
much more complex for this library, as can be seen in Figure 12. The added
complexity of these plots is directly attributable to the complexity of the module
selection problem - many more area constraints exist, leading to more Pareto points
being derived from the corresponding resource constraints.
10. CONCLUSIONS AND FUTURE WORK
This paper has examined the process of design space exploration, reducing that
process to one of characterizing the optimal latency-area tradeoff curve by finding
all the Pareto points on that curve. For the combined problem of scheduling, clock
length determination, and module selection, we have presented several exploration
methodologies: dedicated latency-axis or area-axis exploration, a Timmer-like exploration
method, and two methods (one static, one dynamic) for pivoting between
the two axis-based methods. Each of these methodologies takes advantage of the
structure found along both the latency and area axes by carefully pruning a large
number of sub-optimal solutions at each level of the design cycle, making it possible
to use optimal scheduling techniques rather than bounds or estimates. Furthermore,
8 The DIFFEQ benchmark once again gives skewed results as its size does not allow a statistically
significant number of Pareto points to be incorporated within the 15% design window, thus not
allowing the pivoting method to take full advantage of the structure in the resulting design space.
we discussed how the tradeoff curve is dominated by two clusters of Pareto points,
and how that structure, along with the structure of the combined problem, can be
used to more efficiently find the Pareto points.
Tests using various benchmarks and different module libraries have shown the
importance of considering the clock length determination and module selection
problem in conjunction with the scheduling problem. When these subproblems are
not considered in conjunction like this (often such subproblems are resolved prior
to and independently of scheduling), we have shown that results do not accurately
reflect the optimal tradeoff curve. In many cases, methods that do not consider
this combined problem entirely miss globally optimal points.
Although the methodologies presented solve the design space exploration problem
optimally, they could also be used to generate a preliminary characterization by
replacing the optimal scheduler with a heuristic scheduler or lower bound estimate. 9
The reductions in the number of constraints to explore would be similar to that
found in the optimal case, but the amount of execution time would be lower at the
expense of optimality.
At present, although these methodologies allow us to handle more realistic module
libraries than most previous methodologies since they consider clock length
determination and module selection, they do not consider the type mapping prob-
lem. That is, they assume that all operations of a given type are mapped to a
single functional unit type (found by module selection). To more fully take
advantage of module libraries, and thus to more completely characterize the optimal
tradeoff curve, the methodologies (in particular the scheduling portions) must
be enhanced to handle the complete type-mapping problem. When finding optimal
solutions to this type mapping problem, it will be crucial to find tight heuristic
bounding techniques for the type mapping problem so that the axis-based methods
can maintain their efficiency through pruning methods. Furthermore, a more realistic
model of the resulting design must be developed so that the methodology also
incorporates registers, interconnect issues, controller effects, etc.
--R
Towards a Practical Methodology for Completely Characterizing the Optimal Design Space.
Sensitivity and Optimization.
Timing Models for High Level Synthesis.
An Exact Methodology for Scheduling in a 3D Design Space.
A Solution Methodology for 9 Note that the bounding methodology described here would more fully characterize the design space than the one described in
Analyzing and Exploiting the Structure of the Constraints in the ILP Approach to the Scheduling Problem.
Optimal Module Set and Clock Cycle Selection for DSP Synthesis.
Instruction Set Mapping for Performance Optimization
Synthesis and Optimization of Digital Circuits.
Specification and Design of Embedded Systems.
Module Selection for Pipeline Synthesis.
Reclocking for High Level Synthesis.
Reevaluating the Design Space for Register Transfer Hardware Synthesis.
System Clock Estimation based on Clock Slack Min- imization
Force Directed Scheduling for the Behavioral Synthesis of ASICs.
Fast System-Level Area-Delay Curve Prediction
--TR
System clock estimation based on clock slack minimization
Timing models for high-level synthesis
Specification and design of embedded systems
Analyzing and exploiting the structure of the constraints in the ILP approach to the scheduling problem
A comprehensive estimation technique for high-level synthesis
Reclocking for high-level synthesis
Computing lower bounds on functional units before scheduling
Instruction set mapping for performance optimization
Module selection for pipelined synthesis
Synthesis and Optimization of Digital Circuits
Toward a Practical Methodology for Completely Characterizing the Optimal Design Space
--CTR
Zoran Salcic , George Coghill , Bruce Maunder, A genetic algorithm high-level optimizer for complex datapath and data-flow digital systems, Applied Soft Computing, v.7 n.3, p.979-994, June, 2007
Hyunuk Jung , Kangnyoung Lee , Soonhoi Ha, Efficient hardware controller synthesis for synchronous dataflow graph in system level design, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.10 n.4, p.423-428, August 2002
Yannick Le Moullec , Jean-Philippe Diguet , Thierry Gourdeaux , Jean-Luc Philippe, Design-Trotter: System-level dynamic estimation task a first step towards platform architecture selection, Journal of Embedded Computing, v.1 n.4, p.565-586, December 2005
Matthias Gries, Methods for evaluating and covering the design space during early design development, Integration, the VLSI Journal, v.38 n.2, p.131-183, December 2004 | module selection;design space exploration;bounding;efficient searching;clock-length determination;scheduling;high-level synthesis |
348304 | Optimizing computations for effective block-processing. | Block-processing can decrease the time and power required to perform any given computation by simultaneously processing multiple samples of input data. The effectiveness of block-processing can be severely limited, however, if the delays in the dataflow graph of the computation are placed suboptimally. In this paper we investigate the application of retiming for improving the effectiveness of block-processing in computations. In particular, we consider the k-delay problem: Given a computation dataflow graph and a positive integer k, we wish to compute a retimed computation graph in which the original delays have been relocated so that k data samples can be processed simultaneously and fully regularly. We give an exact integer linear programming formulation for the k-delay problem. We also describe an algorithm that solves the k-delay problem fast in practice by relying on a set of necessary conditions to prune the search space. Experimental results with synthetic and random benchmarks demonstrate the performance improvements achievable by block-processing and the efficiency of our algorithm. | Introduction
In many application domains, computations are defined on semi-infinite or very long streams
of data. The rate of the incoming data is dictated by the nature of the application and often
cannot be satisfied by a straightforward implementation of the specification. Although the
speed of hardware components has been increasing steadily, the throughput requirements
of new applications have been increasing at an even faster pace. Recent studies show that
while computational requirements per sample of state-of-the-art communication have been
A preliminary version of this work was presented at the 33rd ACM/IEEE Design Automation Conference,
June 1996.
(a) (b)
Figure
1: Improving the effectiveness of block-processing by retiming. The block-processing
factor of the original computation dataflow graph in Part (a) is 1. The block-processing
factor of the retimed graph in Part (b) is 3.
doubling every year, the processing power of hardware is doubling only every three years
[5]. Furthermore, new applications requirements such as low power dissipation impose
additional design constraints which often further add to the gap between the speed of
hardware primitives and the rate of incoming data.
In order to meet the increasing computational demands of today's communication ap-
plications, it is required to compute simultaneously on multiple samples of the incoming
data stream. This approach, known as block-processing or vectorization, is widely used
to satisfy throughput requirements through the use of parallelism and pipelining. Block-
processing enhances both regularity and locality in computations, thus greatly facilitating
their efficient implementation on many hardware platforms [6, 9]. Enhanced regularity
reduces the effort in software switching and address calculation, and improved locality improves
the effectiveness of code-size reduction methods [13]. Moreover, block-processing
enables the efficient utilization of pipelines and efficient implementations of vector-based
algorithms such as FFT-based filtering and error-correction codes [2]. In general, block-
processing is beneficial in all cases where the net cost of processing n samples individually
is higher than the net cost of processing n samples simultaneously. Typical cost measures
include processing time, memory requirements, and energy dissipation per sample.
There are several ways to increase the block-processing factor of a computation, that
is, the number of data samples that can be processed simultaneously. For example, one can
unfold the basic iteration of a computation and schedule computational blocks from different
iterations to execute successively. However, this technique may not uniformly increase the
block-processing factor for all computational blocks.
Another transformation that can be used for increasing the block-processing factor is
retiming. Contrary to other architectural transformation techniques that have targeted
high-level synthesis [8, 17], retiming has been used traditionally for clock period minimization
[7, 10, 12] and for logic synthesis [14, 15].
Figure
1 illustrates the use of retiming for improving block-processing. The computation
dataflow graph (CDFG) in this figure has three computation blocks A, B, and C, and three
delays. An input stream is coming into block A, and an output stream is generated by A.
Assuming that the computation is implemented by a uniprocessor system, the expression
above each block gives the initiation time x and the computation time y per
block input. The initiation time includes context-switching overhead for fetching data and
instructions from the background memory and the cost for reconfiguring pipelines. A single
iteration of the computation in Figure 1(a) completes in (7+5)+(7+6)+(6+3)=34 cycles by
executing the blocks in the order A . For three iterations, the computational blocks
can be executed in the order A . In this case, a new input
is consumed every 34 cycles, and the entire computation needs 3 \Theta cycles. On
the other hand, the functionally equivalent CDFG in Figure 1(b) is obtained by retiming
the original CDFG and can complete all three iterations in a single "block iteration" that
requires only (7 cycles. By grouping
all three delays on one edge, the computations of the three iterations can be executed in
the order A 1 thus amortizing the initiation time of each block
over three inputs.
Recently, retiming has been studied in the context of optimum vectorization for a class
of DSP programs [16, 18]. Specifically, a technique for linear vectorization of DSP programs
using retiming has been presented in [18]. This technique involves the redistribution
of delays in the CDFG representation of a DSP program in a way that maximizes the concentration
of delays on the edges. However, fully regular vectorization cannot be achieved
using the linear vectorization approach in that paper. Moreover, the retiming problem
for computing linear vectorizations is formulated as a non-linear program which can be
computationally very expensive to solve.
In this paper, we consider the problem of retiming computation dataflow graphs to
achieve any given block-processing factor k. We call this the k-delay problem. We first
present an integer linear programming (ILP) formulation of the k-delay problem. We then
formulate a set of necessary conditions which we use to develop an efficient branch-and-
bound algorithm for the k-delay problem. Given a CDFG and a positive integer k, our
algorithm computes a retimed CDFG that achieves a block-processing factor of k or determines
that such a retiming does not exist. An important feature of our approach is that
all blocks in the retimed CDFG achieve the same block-processing factor k and the same
execution order across iterations. As a result, our retimed CDFGs can operate faster and
are less expensive to implement than generic block-processed CDFGs. We provide extensive
experimental results which demonstrate the effectiveness of our optimization and the
efficiency of our algorithms.
The remainder of this paper is organized as follows. In Section 2 we describe the
representation of computations as dataflow graphs and give background material on block-
processing and retiming. We also give a precise mathematical formulation of the k-delay
problem. In Section 3, we present an integer linear programming formulation of the k-delay
problem. In Section 4, we describe a set of effective necessary conditions. Using these
necessary conditions, we develop a branch-and-bound algorithm in Section 5 for solving
the k-delay problem which is efficient in practice. We present our experimental results in
Section 6 and conclude with directions for future work in Section 7.
Preliminaries
In this section, we first describe the dataflow graph representation of computations. We
subsequently provide some background on block-processing and give the conditions that
must be satisfied for effective block-processing. We also provide background material on
retiming and give a mathematical formulation of the k-delay problem.
2.1 Graph representation
The CDFG of a computation structure is an edge-weighted directed graph
The nodes v 2 V model the computation blocks (subroutines, arithmetic or boolean oper-
ators), and the directed edges e 2 E model the interconnection (data and control depen-
dencies) between the computation blocks. Each edge e 2 E is associated with a weight
w(e) that denotes the number of delays or registers associated with that interconnection.
Figure
3(a) gives the graph representation of a sample CDFG.
A delay (state) in behavioral synthesis corresponds to an iteration boundary in software
compilation and a register in gate-level description. All results in this paper can be translated
from one domain to the other two in a straightforward manner. Translation of results
from behavioral to logic synthesis involves only semantic interpretation.
2.2 Block-Processing
Block-processing strives to maximize the throughput of a computation by simultaneously
processing multiple samples of the incoming data. The maximum number of samples that
can be processed simultaneously or immediately after each other by a block v is called the
block-processing factor k v of that block. A block-processing is linear if all blocks have the
A
(a) (b)
A
Figure
2: Types of linear block-processing. (a) Regular and linear block-processing with a
factor 2. (b) Irregular but linear block-processing with a factor 2.
same block-processing factor k. Given a linear block-processing with factor k, the k \Delta jV j
computational block evaluations that generate k iterations of the computation constitute
a block iteration. A linear block-processing with factor k is regular if the k data samples
processed simultaneously by every computational block are accessed during the same
block iteration. The retimed CDFG in Figure 2(a), for example, can be block-processed
linearly and regularly with a block-processing factor of 2. The computation blocks for this
CDFG execute in the order A 1 two input samples are consumed
in each block-iteration. Regular block-processing leads to more efficient implementations
of CDFGs, because it reduces the costs of address calculation and software switching. As
the CDFG in Figure 2(a) illustrates, the indices 1 and 2 computed for block A can be
used for block B as well in the first block-iteration. A linear block-processing need not be
regular, as is illustrated for the CDFG in Figure 2(b) which can be block-processed with a
block-processing factor of 2. The computation blocks for this CDFG execute in the order
\Delta. The block-processing is irregular, however, because the computation
blocks process different samples in a given block iteration. In the first block iteration,
for example, block B processes samples 1 and 2 while block A processes samples 2 and 3.
The following lemma gives the necessary and sufficient conditions for achieving linear
and regular block-processing.
be a CDFG. We can achieve a linear and regular block-
processing of G with factor k if and only if for every edge e 2 E, we have
Proof. ()) If Relation (1) is not satisfied for some CDFG that can be block-processed
linearly and regularly with a factor k, then there exists an edge u e
v such that 1 -
can process at most w(e) samples per iteration. Since w(e) ! k,
the remaining k \Gamma w(e) samples must be accessed from the previous block iteration, which
contradicts regularity.
2.3 Retiming
A retiming of a CDFG is an integer valued vertex-labeling r
This integer value denotes the assignment of a lag to each vertex which transforms G into
r i, where for each edge u e
defined by the equation
The retimed CDFG G r is well-formed if and only if for all edges
Several important properties of the retiming transformation stem directly from Relation
(2). One such property that we use repeatedly in the proofs of this paper is that for
any given vertex pair u; v in V , a retiming r changes the original delay count of every path
from u to v by the same amount. To verify this property, we express the post-retiming
delay count w r (p) along any such path p as the sum of the delay counts of its constituent
edges:
since the sum in Equation (4) telescopes. Thus, the change in the delay count of any path
depends only on the endpoints of the path.
A corollary that follows immediately from Equation (4) for is that retiming does
not change the delay count around the directed cycles of a CDFG. Based on this property,
it is straightforward to show that for any given edge e 2 E, the maximum number of delays
that any retiming can place on e cannot exceed
where W (v;
2.4 The k-delay problem
According to Lemma 1, a linear and regular block-processing with factor k can be achieved
only for CDFGs that have exactly 0 or at least k delays on each edge. If a given CDFG does
not satisfy the condition in Relation (1), we can redistribute its delays by retiming so that
all nodes achieve the desired block-processing factor k. We call the problem of computing
such a retiming the k-delay problem:
Problem KDP (The k-delay problem) Given a CDFG and a positive integer
k, compute a retiming function r Z such that for every edge u e
in E, we have
or determine that no such retiming exists.
Problem KDP cannot be expressed directly in a linear programming form because of the
disjunction (or requirement) in Relation (6). In this section we rely on the notion of the
"companion graph" that was described in [10] to express Problem KDP as an Integer Linear
Program (ILP).
The companion graph G of a CDFG G is constructed by segmenting
every edge u e
into two edges
v, where x uv is a dummy vertex.
Thus, we have
and for each edge u e
Figure
3 illustrates the construction of the companion graph.
The following lemma gives the necessary and sufficient conditions that hold for any
retiming that solves Problem KDP.
be a CDFG, and let G its companion graph.
Then there exists a retiming function r that solves Problem KDP on G if and only
if there exists a retiming function r Z such that for every edge u e i
we have
and for every edge u e
in E, we have
F
(a)
(b)
F
Figure
3: Constructing a companion graph. (a) Original CDFG Companion
graph generated by segmenting every edge in E into two edges and
introducing a dummy vertex such that the first edge has a delay count of at most 1. Edge
in G has been segmented to generate edges C w=1 ! XCD and XCD w=2 ! D in G 0 .
Proof. Inequality (7) ensures that the retimed circuit is well-formed. Inequalities (8) and
ensure that the delay counts in G 0
r 0 satisfy the definition of a companion graph. Inequality
ensures that for every edge u e
x uv has one delay after retiming,
then edge x uv
v has at least k \Gamma 1 delays. Thus, if edge u e
in E has any delays, then
it has at least k delays after retiming. By construction, a solution r for Problem KDP on
G can be derived from r 0 by simply setting
The following theorem expresses Problem KDP as a set of O(E) integer linear programming
constraints.
Theorem 3 Let computation flow graph and let G
be its companion graph. Then there exists a retiming function r solves
Problem KDP if and only if there exists a retiming function r Z such that for every
edge
and for every edge u e
in E, we have
Proof. Follows directly from the linearity of Relation (2) and the form of the inequalities
in Lemma 2.
Necessary conditions
The main challenge in solving Problem KDP is to determine which edges should have delays
and which should not. In the ILP formulation of Problem KDP, we determine these edges
explicitly, and the resulting constraints in the formulation do not appear to have any special
structure. We thus need to resort to general integer linear programming solvers to compute
a solution, which can be computationally expensive for large CDFGs. In this section, we
give a set of necessary conditions which determine implicitly which edges should or should
not have delays. In the next section, we develop a branch-and-bound technique based on
these necessary conditions, which is considerably more efficient in practice than the ILP
formulation.
In the following four subsections, we derive necessary conditions for the feasibility of
Problem KDP on any given CDFG. We first derive conditions to ensure that all cycles
have enough delays around them. We then identify paths which must necessarily contain
delays and paths which must necessarily be free of delays. Based on these paths, we derive
necessary conditions for the feasibility of Problem KDP. We finally describe the construction
of a constraints graph which captures explicitly the necessary conditions for the feasibility
of Problem KDP.
4.1 Delays around cycles
Retiming leaves the delay count around cycles unchanged and, therefore, for any given
CDFG G and a block-processing factor k, Problem KDP is feasible only if the delay count
around all cycles in G is greater than k. The following lemma gives a mathematical characterization
of this result for the feasibility of Problem KDP.
be a CDFG. Problem KDP is feasible on G only if for every
vertex
Proof. By contradiction. Let Problem KDP be feasible on G, and let there exist a vertex
pair u; v for which Inequality (15) does not hold. Since W (u; v) the
minimum delay count for any simple cycle through u and v, there exists a directed cycle c
in G that has a delay count less than k. Since retiming does not change the delay count
around cycles, we conclude that c has an edge with delay count between 1 and
every retiming, which contradicts our assumption that Problem KDP is feasible.
For every vertex pair u; v 2 V , W (u; v) can be computed efficiently by an all-pairs
shortest-paths computation in O(V steps. We thus assume for the remainder
of this paper that any given CDFG G already satisfies Inequality (15).
4.2 Paths with delays
Using the property that retiming changes the delay count of paths between a given vertex
pair by the same amount, we determine vertex pairs between which all paths must necessarily
contain delays in any solution to Problem KDP. The following lemma gives a necessary
condition for such vertex pairs.
be a CDFG, and let r Z be a solution to Problem KDP
on G. Then for every vertex pair u; v 2 V such that there exists a path u p
in G with
Proof. By contradiction. Suppose that r solves Problem KDP and that W r (u; v) -
for some vertex pair u; v 2 V which satisfies the condition in the lemma. We will show that
there exists an edge e 2 E for which Relation (6) does not hold.
then the path u q
with minimum delay W r (u; v) has a nonzero delay
count that does not exceed k \Gamma 1. Thus, some edge on this path violates Relation (6).
then for the path u q
in the statement of the lemma, we have
Furthermore, we have
(c)
(b)
(a)
Figure
4: Illustration of explicit and implicit delay-essential (DE) vertex pairs. The CDFG
in Part (a) has been transformed by Algorithm AddEdges to generate the CDFG in Part
(b) and then finally the CDFG in Part (c). The bold edges in Part (b) are between explicit
DE pairs. The weights on these edges indicate the excess delay associated with the corresponding
vertex pairs. The bold edges in Part (c) denote both explicit and implicit DE
pairs. For example, the pair B; D is implicitly DE and becomes apparent only after the DE
vertex pairs B; C and C; D are made explicit.
Thus, p has a nonzero delay count that does not exceed k \Gamma 1, and consequently, some edge
on p violates Relation (6).
The following lemma casts the necessary conditions of Lemma 5 as a retiming problem
on an appropriately constructed constraints graph.
be a given CDFG and let G be the constraints
graph that is generated from G as follows: For every vertex pair u; v 2 V such that there
exists a path u p
in G with delay count w(p) and W (u; v) ! w(p) ! k, add a new edge
Problem KDP on G, then
for every edge u e
Proof. From Relation (2) and the definition of well-formedness, we have that Inequality (17)
holds for every edge e ∈ E.
It remains to show that Inequality (17) holds for the edges in the set E′ − E. Consider
a vertex pair u, v ∈ V that is connected by an edge e ∈ E′ − E. For the purpose of
contradiction, suppose that Problem KDP is feasible and that w′(e) + r(v) − r(u) < 0. By
the construction of G′, we have w′(e) = W(u, v) − k, and therefore
W_r(u, v) = W(u, v) + r(v) − r(u) < k.
Since r solves Problem KDP, Lemma 5 implies that W_r(u, v) ≥ k, which contradicts the
last inequality.
We call a vertex pair u, v ∈ V delay-essential (DE) if the shortest path from u to v in
every retimed CDFG that satisfies Relation (6) must contain delays. For example, every
vertex pair u, v for which there exists a path p from u to v such that W(u, v) < w(p) < k is
delay-essential, since it satisfies the condition in Lemma 5. It can be shown that it suffices
to compare the delay counts of the two shortest paths between a vertex pair to check for the
existence of a path p from u to v such that W(u, v) < w(p) < k.
Algorithm AddEdges-1 in Figure 5 transforms the given graph G into the constraints
graph G′. This algorithm determines delay-essential vertex pairs by checking if the delay
counts of the shortest and the strictly-second shortest path between every vertex pair differ
by less than k. For every delay-essential vertex pair u, v, an edge u → v is
introduced to ensure that W_r(u, v) ≥ k.
It is important to note that in order to determine whether a given vertex pair is delay-
essential, one needs to compare the delay counts of the shortest path and the strictly-second
shortest simple path (that is, a path whose weight is strictly greater than the shortest-path
weight). Although the problem of computing the strictly-second shortest simple path between
a given vertex pair is NP-complete, the corresponding problem without the simple
path requirement can be solved in polynomial time [11]. For graphs that satisfy Inequality
(15) in Lemma 4, it is straightforward to show that if the strictly-second shortest path is
non-simple, then its delay count exceeds that of the shortest path by at least k. Conversely,
if the delay counts of the shortest and the strictly-second shortest paths differ by less than
k, then the strictly-second shortest path is guaranteed to be simple.
The following lemma shows that Algorithm AddEdges-1 runs in polynomial time.
AddEdges-1(G, k)
 1  for every vertex pair u, v ∈ V
 2      do m[u][v] ← FALSE
 ...
 4  Run an all-pairs strictly-second-shortest paths algorithm on G.
 5  for every vertex pair u, v ∈ V, with p1 and p2 being the two shortest paths between u and v
 6      do if w(p2) − w(p1) < k
 7          then m[u][v] ← TRUE
 8               Introduce u → v into G′
 ...
10  return G′
Figure 5: Algorithm AddEdges-1 transforms G into the constraints graph G′.
Lemma 7 In O(V²E) steps, Algorithm AddEdges-1 transforms a given CDFG
G = ⟨V, E, w⟩ into G′.
Proof. Steps 1-2 take O(V²) time. Since the all-pairs strictly-second-shortest paths can be computed
in O(V²E) time [11], Step 4 takes O(V²E) time. Steps 5-10 take O(V²)
time to complete. Thus, Algorithm AddEdges-1 terminates in O(V²E) steps.
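A compact sketch of the AddEdges-1 idea follows (Python rather than the paper's pseudocode; second_shortest_delay is an assumed helper standing in for the strictly-second-shortest-paths routine of [11], and the edge weight W(u, v) − k is our reading of how the constraint W_r(u, v) ≥ k is encoded under the usual retiming rule w_r(e) = w(e) + r(v) − r(u) ≥ 0):

    def add_edges_1(vertices, edges, W, second_shortest_delay, k):
        # edges: list of (u, v, weight); W: all-pairs minimum delay counts
        constraint_edges = list(edges)            # E' starts as a copy of E
        delay_essential = set()                   # the m[u][v] flags of the listing
        for u in vertices:
            for v in vertices:
                if u == v or W[u][v] == float('inf'):
                    continue
                w2 = second_shortest_delay(u, v)  # strictly larger than W[u][v], or None
                if w2 is not None and w2 - W[u][v] < k:
                    delay_essential.add((u, v))
                    # weight W(u,v) - k encodes W_r(u,v) >= k (assumed encoding)
                    constraint_edges.append((u, v, W[u][v] - k))
        return constraint_edges, delay_essential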
Lemma 5 captures only explicit delay requirements and not implicit or hidden requirements.
Let us assume, for example, that we wish to solve Problem KDP for the CDFG in
Figure 4(a) with k = 2. Since the shortest and the second-shortest paths between vertices B
and D have 1 and 4 delays, respectively, the condition in Lemma 5 does not apply, and the
pair B, D does not appear to be delay-essential. We can verify, however, that the shortest
path between B and D must necessarily contain delays in any solution of Problem KDP,
since it is impossible to retime the given CDFG and zero out the delay count of B → D.
Since the vertex pairs B, C and C, D must satisfy the condition in Lemma 5, they need at
least 2 delays on their shortest paths from B to C and from C to D. Thus, no delay along the path
from B to D can be moved outside that path. Since retiming changes the delay along paths
between the same vertex pair in an identical manner, the delay on the edge B → D cannot be
moved out of the path from B to D.
In order to expose implicit delay requirements, we construct a new graph G^T in which
all delay-essential vertex pairs are explicit. Algorithm AddEdges-2 in Figure 6
transforms the graph G′ (generated from a given CDFG G by Algorithm AddEdges-1)
into G^T and determines implicit delay-essential vertex pairs. Delay-essential vertex pairs
are determined by comparing, for every vertex pair, the delay counts of the shortest path in
 3  repeat
 4      for every delay-essential vertex pair u, v ∈ Q
 5          do m[u][v] ← TRUE
 6             Introduce edge u → v into G^T
    ...
 8      for every delay-free vertex pair u, v ∈ Q
 9          do m[v][u] ← TRUE
    ...
        Run an all-pairs shortest paths algorithm on G^T to compute W^T(u, v)
        for every vertex pair u, v ∈ V
            do if ...
20              then delete edge u → v from G^T
    ...
22      Q ← ...
23      for every pair u, v ∈ V
            do if ...
26          elseif ...
                then delete edge v → u from G^T
28                   m[v][u] ← FALSE
Figure 6: Algorithm AddEdges-2 transforms G′ into G^T in O(V³E + V⁴ lg V) steps. In
the new graph G^T, all delay-essential and all delay-free vertex pairs of G are explicit.
the transformed graph of the current iteration and the shortest path in the original graph.
If for a vertex pair u, v the delay counts of these two paths differ, then an edge u → v
with an appropriate weight is introduced to ensure that W_r(u, v) ≥ k. (It can be shown
that it is sufficient to place the additional edge only if the delay counts for the two shortest
paths differ by less than k. If their difference exceeds k, then the condition W_r(u, v) ≥ k is
implicitly taken care of.) Intuitively, W(u, v) − k is the excess delay of the pair u, v
and gives an upper bound on the number of delays that can be "contributed" by that pair
to the rest of the graph. As new edges are introduced, new vertex pairs can become delay-essential,
as shown in Figure 4. For example, the pair B, D becomes delay-essential only
after the delay requirements of the pairs B, C and C, D become explicit.
The following lemma proves that the constraints introduced for the delay-essential vertex
pairs in each iteration of Algorithm AddEdges-2 are necessary for Problem KDP.
Lemma 8 Let G = ⟨V, E, w⟩ be a CDFG, and let G^i = ⟨V, E^i, w^i⟩ be the transformed graph
generated by the repeat loop of Algorithm AddEdges-2 after i iterations. Let G^{i+1} = ⟨V, E^{i+1}, w^{i+1}⟩
be the graph generated after i + 1 iterations by augmenting E^i as follows:
For every vertex pair u, v ∈ V with W^i(u, v) < W(u, v), an edge u → v with weight
w^{i+1}(e) = W(u, v) − k is introduced in E^i. Let r : V → Z be a solution for Problem KDP
on G. If for every edge u → v ∈ E^i
w^i(e) + r(v) − r(u) ≥ 0, (19)
then for every edge u → v ∈ E^{i+1}
w^{i+1}(e) + r(v) − r(u) ≥ 0. (20)
Proof. Let u → v ∈ E^i. In this case, we have w^{i+1}(e) = w^i(e), and Inequality (20)
follows immediately from Inequality (19).
Let u → v ∈ E^{i+1} − E^i. By construction, we have W(u, v) > W^i(u, v). Therefore
W_r(u, v) > W^i_r(u, v). (21)
Adding up the left and right-hand side parts of Inequality (19) along the edges of the shortest
path from u to v in G^i, we obtain W^i_r(u, v) ≥ 0. Inequality (21) implies that W_r(u, v) > 0,
and since r is a solution to Problem KDP, we infer that W_r(u, v) ≥ k.
(Otherwise some edge along the shortest path in G_r would contain fewer than k delays.)
Therefore, for the edge u → v,
w^{i+1}(e) + r(v) − r(u) = W(u, v) − k + r(v) − r(u) = W_r(u, v) − k ≥ 0.
Thus, r satisfies Inequality (20).
4.3 Paths without delays
In contrast to delay-essential paths, we have paths which must contain no delays in any
solution. We call a vertex pair u, v ∈ V delay-free (DF) if the shortest path from u to v
in every retimed CDFG that satisfies Relation (6) must contain no delays. For example,
if G^T is the constraints graph constructed from G, then any vertex pair u, v with
W^T(u, v) + W^T(v, u) < k is delay-free, since retiming does not change the delay count
around cycles and cannot result in W_r(u, v) ≥ k. As a result, for every such vertex pair,
the condition W_r(u, v) = 0 must hold, otherwise Relation (6) will be violated for some edge
along the shortest path from u to v.
During the construction of graph G^T by Algorithm AddEdges-2, delay-free vertex pairs
are determined by checking for every vertex pair u, v whether W^T(u, v) + W^T(v, u) < k.
If so, an edge v → u with weight −W(u, v) is introduced to ensure that W_r(u, v) = 0.
This process is repeated until every delay-free vertex pair is made explicit by these additional
edges. For example, in Figure 7(a), the pair A, B is delay-essential, and we introduce a bold
edge A → B. This new edge introduces a cycle with the path from B to A
which has fewer than k delays, and thus the vertex pair B, A must be delay-free. To enforce
this constraint, the weight on the bold edge A → B is changed to −W(B, A), as shown in Figure 7(b).
The following lemma proves that each iteration of Algorithm AddEdges-2 introduces
constraints that are necessary for Problem KDP to be feasible.
Lemma 9 Let G = ⟨V, E, w⟩ be a given CDFG, and let G^i = ⟨V, E^i, w^i⟩ be the transformed
graph generated by the repeat loop of Algorithm AddEdges-2 after i iterations. Let G^{i+1} = ⟨V, E^{i+1}, w^{i+1}⟩
be the graph generated after i + 1 iterations by augmenting E^i as follows:
For every vertex pair u, v ∈ V with W^i(u, v) + W^i(v, u) < k, an edge v → u with weight w^{i+1}(e) = −W(u, v)
Figure 7: Delay-free path for a given CDFG with k = 2. The cycle formed by the bold edge A → B
and the path from B to A in (a) has fewer than k delays. Therefore B, A must be delay-free and the weight on
the bold edge is changed to −W(B, A) in (b) to achieve this.
is introduced in E^i. Let r : V → Z be a solution for Problem KDP on
G. If for every edge u → v ∈ E^i
w^i(e) + r(v) − r(u) ≥ 0, (22)
then for every edge u → v ∈ E^{i+1}
w^{i+1}(e) + r(v) − r(u) ≥ 0. (23)
Proof. If u e
Inequality (23) follows
immediately from Inequality (22).
Now, consider an edge v e
. For the vertex pair u; v 2 V , we have
by the construction of E i+1 . Adding up the parts of Inequality (22)
along the edges of the shortest path from v to u in E i , we obtain W i
r (v; u) - 0. Therefore
r (v; u)
Since r solves Problem KDP, the last inequality implies that W r (u;
then some edge along the shortest path from u to v in E contains fewer than k delays.)
Thus, for the edge v e
and r satisfies Inequality (20).
4.4 Constraints graph generation
The necessary conditions in Lemmas 5, 8, and 9 are encoded by the edges and edge-weights
of the constraints graph G^T = ⟨V, E^T, w^T⟩ generated by Algorithm AddEdges-2. For each
DE vertex pair u, v of G, this algorithm introduces an edge u → v with weight
w^T(e) = W(u, v) − k. Moreover, for each DF vertex pair u, v, it introduces an edge v → u with
weight w^T(e) = −W(u, v). The following lemma summarizes the necessary conditions for
the feasibility of Problem KDP on a given CDFG G in terms of the transformed graph G^T.
Lemma 10 Let G = ⟨V, E, w⟩ be a CDFG, and let G^T = ⟨V, E^T, w^T⟩ be the transformed
graph generated by Algorithm AddEdges-2. Let r : V → Z be a solution for Problem KDP
on G. Then for every edge u → v ∈ E^T,
w^T(e) + r(v) − r(u) ≥ 0. (24)
Proof. Follows directly from Lemmas 8 and 9.
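The inequalities of Lemma 10 form a system of difference constraints r(u) − r(v) ≤ w^T(e). The paper does not prescribe how such a system is solved; one standard choice, sketched below under that assumption, is Bellman-Ford relaxation, which either returns a feasible retiming or reports that the constraints admit none:

    def solve_difference_constraints(vertices, constraint_edges):
        # constraint_edges: triples (u, v, w) encoding r(u) - r(v) <= w,
        # i.e., w + r(v) - r(u) >= 0 for a constraint edge u -> v of weight w
        r = {u: 0 for u in vertices}              # implicit source with 0-weight edges
        for _ in range(len(vertices)):
            changed = False
            for (u, v, w) in constraint_edges:
                if r[v] + w < r[u]:
                    r[u] = r[v] + w
                    changed = True
            if not changed:
                return r                          # a feasible retiming
        return None                               # negative cycle: constraints infeasible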
The following lemma gives the running time of Algorithm AddEdges-2.
Lemma 11 In O(V 3 E+V 4 lg V ) steps, Algorithm AddEdges-2 transforms a given CDFG
into G T or determines that such a transformation is not possible.
Proof. The repeat loop in Step 3 can execute O(V²) times in the worst case, when in
each iteration only one additional edge between a DE or a DF vertex pair gets added
to the graph or modified. (It can be shown that the delay count of an additional edge
gets modified at most once.) All for loops in Algorithm AddEdges-2 execute in O(V²)
steps since we have at most O(V²) vertex pairs. Step 15 takes O(V E + V² lg V) steps
using Johnson's algorithm for computing the all-pairs shortest paths [4]. Thus the body
of the repeat loop completes in O(V E + V² lg V) steps, and Algorithm AddEdges-2
terminates in O(V³E + V⁴ lg V) steps.
5 A practical branch-and-bound algorithm
In this section, we describe an efficient branch-and-bound scheme for solving Problem KDP.
Our scheme relies on the necessary conditions derived in Section 4 to effectively prune the
search space while computing a solution.
SolveKDP(G, k)
1  for every vertex u ∈ V
2      do r[u] ← 0
...
5  if Branch-and-Bound finds a retiming r for the constraints graph G^T
6      then return r
7      else return INFEASIBLE

Figure 8: Algorithm SolveKDP for solving Problem KDP.
Figure
8 describes our Algorithm SolveKDP for Problem KDP. After generating the
constraints graph G T , our algorithm initializes r and searches for a solution using the
procedure Branch-and-Bound that is described in Figure 9. The recursive procedure
Branch-and-Bound computes a retiming r that satisfies the constraints in G T . If there
exists a violating edge e in the retimed graph G r (that is, an edge with a delay count
between 1 and k \Gamma 1), Algorithm Branch-and-Bound adds constraints in G T that force e
to have at least k delays. It subsequently computes a retiming that satisfies the augmented
constraints set. This step is repeated until a solution is found or until we obtain a set
of necessary conditions that cannot be satisfied by any retiming, in which case Algorithm
Branch-and-Bound backtracks. In each backtracking step, the state of the constraints
graph is restored, and a new constraint is added that forces the violating edge e to take a
delay count of zero.
For a given CDFG G, the optimal block-processing factor kmax is the largest number of
samples which can be processed successively by all the computation blocks of G. This number
equals the maximum number of delays that can be placed on any edge and is bounded
from above by F . Thus, kmax can be determined by a binary search over the integers in
the range [1; F ]. The feasibility of each value is checked using Algorithm SolveKDP.
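A sketch of this outer search (solve_kdp stands for Algorithm SolveKDP, treated here as a feasibility oracle that returns a retiming or None; the representation of G is left abstract):

    def max_block_processing_factor(G, F, solve_kdp):
        lo, hi, best = 1, F, None
        while lo <= hi:                  # binary search over the integers in [1, F]
            k = (lo + hi) // 2
            r = solve_kdp(G, k)
            if r is not None:            # block-processing factor k is achievable
                best = (k, r)
                lo = k + 1               # try a larger factor
            else:
                hi = k - 1
        return best                      # (kmax, retiming), or None if even k = 1 fails

The binary search is valid because feasibility is monotone in k: if every edge carries either 0 or at least k delays, it also carries either 0 or at least k − 1 delays.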
6 Experimental Results
We have developed three programs for computing the optimal block-processing factor kmax .
In all three programs kmax is determined by a binary search. In this section, we present
results from the application of our programs on real and synthetic DSP computations. The
purpose of our experiments was to determine by how much block-processing speeds up
computations, to compare the efficiency of our different implementations, and to evaluate
the effectiveness of our necessary conditions.
The first program (ILP) solves the integer linear programming formulation of Problem
Branch-and-Bound(G^T, r)
 1  Compute a retiming r that satisfies Inequality (24)
 2  if no such retiming exists
 3      then return FAIL
 4  if there is a violating edge u → v in G_r with a delay count between 1 and k − 1
    ...
 6      do Save G^T and r
    ...
 8         Introduce edge u → v forcing at least k delays on e
    ...
12         then return SUCCESS
           else Restore G^T and r
    ...
19         then return SUCCESS
           else Restore G^T and r
    ...
22      return FAIL
    return SUCCESS

Figure 9: Algorithm Branch-and-Bound, which is called by Algorithm SolveKDP for
solving Problem KDP.
KDP that we described in Section 3. It first generates the ILP constraints and then solves
the integer program separately using lp solve, a public-domain mixed-integer linear programming
solver [1]. Our second program (NC-ILP) first checks the necessary conditions
given in Section 4 to screen out infeasible problems. The problems that satisfy the necessary
conditions are then solved by an ILP formulation which is fed to lp solve. Our third
program (BB) is an implementation of Algorithm SolveKDP given in Section 5. This
branch-and-bound scheme relies on the necessary conditions from Section 4 to effectively
prune the search space.
In order to explore the computational speedup possible with block-processing, we applied
our k-delay optimization to the computation dataflow graphs of four real DSP programs.
Our test suite comprised an adaptive voice echo canceler, an adaptive video coder, and two
examples from [18]. The size of the CDFGs of these DSP programs ranged from 10 to 25
nodes.
The results of our speedup experiments are given in the table of Figure 10. These data
have been obtained for uniprocessor implementations. For each CDFG, the improvement is
Design | # cycles, original CDFG | kmax | # cycles, optimized CDFG | Improvement (%)
Echo Canceler | 1215 | 3 | 840 | 31

Figure 10: Experimental results for uniprocessor implementations.
given by the fraction
1 − (# cycles with optimized CDFG) / (# cycles with original CDFG).
After k-delay retiming, the reduction S achieved in execution time is given by the fraction
where O is the sum of the context-switching overheads of all nodes, and C is the sum of
all computation times. In our experiments, initiation and computation times were obtained
using measurements on typical DSP general purpose processors, such as TMS32020 and
Motorola 56000.
In order to evaluate the efficiency of our implementations, we experimented with large
synthetic graphs in addition to the real DSP programs. The synthetic graphs in our test
suite were generated using the sprand function of the random graph generator described
in [3]. All the graphs generated using sprand were connected and had integer edge weights
chosen uniformly in the range [0, 5]. Given the number of vertices and edges desired,
sprand generates graphs by randomly placing edges between vertices and by randomly
assigning weights from the specified range. The size of these computation dataflow graphs
was between 10 to 300 vertices and 20 to 750 edges.
The results from the application of our three programs on our synthetic test suite are
summarized in Figure 11. Our experiments were conducted on a SPARC10 with 64MB of
main memory. The CPU times for the three programs are for computing kmax . Our results
show that ILP is very inefficient and its running time becomes impractical for graphs with
more than vertices and 70 edges. ILP searches the entire solution space before detecting
infeasible solutions. During the binary search, solutions to feasible problems are computed
relatively fast, but not as fast as in the other two programs. Furthermore, the detection of
infeasible problems is extremely time-consuming.
NC-ILP is more efficient than ILP, primarily due to the quick screening of infeasible
problems based on the necessary conditions from Section 4. However, NC-ILP cannot
name | ILP | NC-ILP | BB

Figure 11: Comparison of running times (in CPU seconds) taken by ILP, NC-ILP, and
BB to compute kmax for random graphs. Entries marked with "-" indicate running times
exceeding 30,000 CPU seconds.
handle efficiently any CDFG that has more than 50 nodes.
BB is the most efficient of our three programs and is orders of magnitude faster than
ILP or NC-ILP. Moreover, it can handle graphs that are at least one order of magnitude
larger than the graphs handled by ILP or NC-ILP. We thus conclude that our necessary
conditions are very effective in pruning the search space.
7 Conclusion and future work
Block-processing speeds up the execution of computations by amortizing context switching
overheads over several data samples. In this paper, we investigated the problem of improving
the block-processing factor of DSP programs using the retiming transformation. We
formulated the problem of computing a retiming that achieves a given block-processing factor
k as an integer linear program. We then presented a set of necessary conditions for this
problem that can be computed in polynomial time. Based on these conditions, we designed
a branch-and-bound scheme for computing regular and linear block-processings. In our experiments
with real and synthetic computation graphs, our branch-and-bound scheme was
orders of magnitude more efficient than the general integer linear programming approaches.
Thus, our necessary conditions proved to be particularly powerful in pruning the search
tree of our branch-and-bound scheme.
An important question that remains open is whether our necessary conditions are also
sufficient. So far, we have not been able to prove sufficiency. On the other hand, we have
not discovered a situation in which our necessary conditions are feasible yet the k-delay
problem is infeasible. We nevertheless conjecture that our necessary conditions are not
sufficient.
An interesting direction for further investigation is the reduction of critical path length
in conjunction with the maximization of the block-processing factor. Our preliminary work
in the area shows that it is possible to express critical path requirements in the form of
constraint edges in the transformed graph G T .
Future work in this area could explore the applicability of our techniques for compiling
code on Very Long Instruction Word (VLIW) architectures. The main challenge with VLIW
machines is to issue as many instructions as possible in the same clock cycle. By viewing
the instructions in a given program as delay elements in a computation graph, one could
model the compilation problem for VLIW architectures as a block-processing problem on a
CDFG.
--R
lp solve: A mixed-integer linear programming solver
Fast Algorithms for Digital Signal Processing.
Shortest paths algorithms: theory and experimental evaluation.
Introduction to Algorithms.
Software's chronic crisis.
Optimizing two-phase
Relative scheduling under timing constraints: Algorithms for high-level synthesis of digital circuits
VLSI Array Processors.
DelaY: An efficient tool for retiming with realistic delay modeling.
Computing strictly-second shortest paths
Retiming synchronous circuitry.
Storage assignment to decrease code size.
Retiming and resynthesis: Optimizing sequential networks with combinational techniques.
Synchronous logic synthesis: Algorithms for cycle-time minimization
Optimum vectorization of scalable synchronous dataflow graphs.
Behavioral transformations for algorithmic level IC design.
Retiming of DSP programs for optimum vec- torization
| retiming;combinatorial optimization;computation dataflow graphs;embedded systems;integer linear programming;scheduling;high-level synthesis;vectorization
348754 | Fast and flexible word searching on compressed text. | We present a fast compression technique for natural language texts. The novelties are that (1) decompression of arbitrary portions of the text can be done very efficiently, (2) exact search for words and phrases can be done on the compressed text directly, using any known sequential pattern-matching algorithm, and (3) word-based approximate and extended search can also be done efficiently without any decoding. The compression scheme uses a semistatic word-based model and a Huffman code where the coding alphabet is byte-oriented rather than bit-oriented. We compress typical English texts to about 30% of their original size, against 40% and 35% for Compress and Gzip, respectively. Compression time is close to that of Compress and approximately half of the time of Gzip, and decompression time is lower than that of Gzip and one third of that of Compress. We present three algorithms to search the compressed text. They allow a large number of variations over the basic word and phrase search capability, such as sets of characters, arbitrary regular expressions, and approximate matching. Separators and stopwords can be discarded at search time without significantly increasing the cost. When searching for simple words, the experiments show that running our algorithms on a compressed text is twice as fast as running the best existing software on the uncompressed version of the same text. When searching complex or approximate patterns, our algorithms are up to 8 times faster than the search on uncompressed text. We also discuss the impact of our technique in inverted files pointing to logical blocks and argue for the possibility of keeping the text compressed all the time, decompressing only for displaying purposes. | INTRODUCTION
In this paper we present an efficient compression technique for natural language
texts that allows fast and flexible searching of words and phrases. To search for
simple words and phrases, the patterns are compressed and the search proceeds
without any decoding of the compressed text. Searching words and phrases that
match complex expressions and/or allowing errors can be done on the compressed
text at almost the same cost of simple searches. The reduced size of the compressed
text makes the overall searching time much smaller than on plain uncompressed
text. The compression and decompression speeds and the amount of compression
achieved are very good when compared to well known algorithms in the literature
[Ziv and Lempel 1977; Ziv and Lempel 1978].
The compression scheme presented in this paper is a variant of the word-based
Huffman code [Bentley et al. 1986; Moffat 1989; Witten et al. 1999]. The Huffman
codeword assigned to each text word is a sequence of whole bytes and the Huffman
tree has degree either 128 (which we call "tagged Huffman code") or 256 (which we
call "plain Huffman code"), instead of 2. In tagged Huffman coding each byte uses
7 bits for the Huffman code and 1 bit to signal the beginning of a codeword. As we
show later, using bytes instead of bits does not significantly degrade the amount of
compression. In practice, byte processing is much faster than bit processing because
bit shifts and masking operations are not necessary at compression, decompression
and search times. The decompression can start at any point in the compressed file.
In particular, the compression scheme allows fast decompression of fragments that
contain the search results, which is an important feature in information retrieval
systems.
Notice that our compression scheme is designed for large natural language texts
containing at least 1 megabyte to achieve an attractive amount of compression.
Also, the search algorithms are word oriented as the pattern is a sequence of elements
to be matched to a sequence of text words. Each pattern element can be a
simple word or a complex expression, and the search can be exact or allowing errors
in the match. In this context, we present three search algorithms.
The first algorithm, based on tagged Huffman coding, compresses the pattern
and then searches for the compressed pattern directly in the compressed text. The
search can start from any point in the compressed text because all the bytes that
start a codeword are marked with their highest bit set in 1. Any conventional
pattern matching algorithm can be used for exact searching and a multi-pattern
matching algorithm is used for searching allowing errors, as explained later on.
The second algorithm searches on a plain Huffman code and is based on a word-oriented
Shift-Or algorithm [Baeza-Yates and Gonnet 1992]. In this case the com-
pression obtained is better than with tagged Huffman code because the search
algorithm does not need any special marks on the compressed text.
The third algorithm is a combination of the previous ones, where the pattern
is compressed and directly searched in the text as in the first algorithm based on
tagged Huffman coding. However, it works on plain Huffman code, where there
is no signal of codeword beginnings, and therefore the second algorithm is used to
check a surrounding area in order to verify the validity of the matches found.
The three algorithms allow a large number of variations over the basic word and
phrase searching capability, which we group under the generic name of extended
patterns. As a result, classes of characters including character ranges and com-
plements, wild cards, and arbitrary regular expressions can be efficiently searched
exactly or allowing errors in the occurrences. Separators and very common words
(stopwords) can be discarded without significantly increasing the search cost.
The algorithms also allow "approximate phrase matching". They are able to
search in the compressed text for approximate occurrences of a phrase pattern allowing
insertions, deletions or replacements of words. Approximate phrase matching
can capture different writing styles and therefore improve the quality of the
answers to the query. Our algorithms are able to perform this type of search at the
same cost of the other cases, which is extremely difficult on uncompressed search.
Our technique is not only useful to speed up sequential search. It can also be used
to improve indexed schemes that combine inverted files and sequential search, like
Glimpse [Manber and Wu 1993]. In fact, the techniques that we present here can
nicely be integrated to the inverted file technology to obtain lower space-overhead
indexes. Moreover, we argue in favor of keeping the text compressed all the time,
so the text compression cannot be considered an extra effort anymore.
The algorithms presented in this paper are being used in a software package called
Cgrep. Cgrep is an exact and approximate compressed matching tool for large text
collections. The software is available from ftp://dcc.ufmg.br/latin/cgrep, as
a prototype. Preliminary partial versions of this article appeared in [Moura et al.
1998a; Moura et al. 1998b].
This paper is organized as follows. In Section 2 we discuss the basic concepts
and present the related work found in the literature. In Section 3 we present our
compression and decompression method, followed by analytical and experimental
results. In Section 4 we show how to perform exact and extended searching on
tagged Huffman compressed texts. In Section 5 we show how to perform exact and
extended searching on plain Huffman compressed texts. In Section 6 we present
experimental results about the search performance. Finally, in Section 7 we present
conclusions and suggestions for future work.
2. BASICS AND RELATED WORK
Text compression is about exploiting redundancies in the text to represent it in less
space [Bell et al. 1990]. In this paper we denote the uncompressed file as T and its
length in bytes as u. The compressed file is denoted as Z and its length in bytes
as n. Compression ratio is used in this paper to denote the size of the compressed
file as a percentage of the uncompressed file (i.e. 100 \Theta n=u).
From the many existing compression techniques known in the literature we emphasize
only the two that are relevant for this paper. A first technique of interest
is the Ziv-Lempel family of compression algorithms, where repeated substrings of
arbitrary length are identified in the text and the repetitions are replaced by pointers
to their previous occurrences. In these methods it is possible that n = o(u), i.e., the
compressed text can be asymptotically smaller than the original text in the best cases.
A second technique is what we call "zero-order substitution" methods. The
text is split into symbols and each symbol is represented by a unique codeword.
Compression is achieved by assigning shorter codewords to more frequent symbols.
The best known technique of this kind is the minimum redundancy code, also called
Huffman code [Huffman 1952]. In Huffman coding, the codeword for each symbol is
a sequence of bits so that no codeword is a prefix of another codeword and the total
length of the compressed file is minimized. In zero-order substitution methods we
have n = Θ(u), even though the constant can be smaller than 1. Moreover, there are
Θ(u) symbols in a text of u characters (bytes) and Θ(n) codewords in a compressed
text of n bytes. In this work, for example, we use O(u) to denote the number of
words in T .
The compressed matching problem was first defined in the work of Amir and
Benson [Amir and Benson 1992] as the task of performing string matching in a
compressed text without decompressing it. Given a text T , a corresponding compressed
string Z, and an (uncompressed) pattern P of length m, the compressed
matching problem consists in finding all occurrences of P in T , using only P and
Z. A naive algorithm, which first decompresses the string Z and then performs
standard string matching, takes time O(u+m). An optimal algorithm takes worst-case
m). In [Amir et al. 1996], a new criterion, called extra space,
for evaluating compressed matching algorithms, was introduced. According to the
extra space criterion, algorithms should use at most O(n) extra space, optimally
O(m) in addition to the n-length compressed file.
The first compressed pattern matching algorithms dealt with Ziv-Lempel compressed
text. In [Farach and Thorup 1995] was presented a compressed matching
algorithm for the LZ1 classic compression scheme [Ziv and Lempel 1976] that runs
in O(n log 2 (u=n)+m) time. In [Amir et al. 1996], a compressed matching algorithm
for the LZ78 compression scheme was presented, which finds the first occurrence in
O(n space, or in O(n log m+m) time and in O(n +m) space. An
extension of [Amir et al. 1996] to multipattern searching was presented in [Kida
et al. 1998], together with the first experimental results in this area. New practical
results appeared in [Navarro and Raffinot 1999], which presented a general scheme
to search on Ziv-Lempel compressed texts (simple and extended patterns) and implemented
it for the particular cases of LZ77, LZ78 and a new variant proposed
which was competitive and convenient for search purposes. A similar result, restricted
to the LZW format, was independently found and presented in [Kida et al.
1999]. Finally, [Kida et al. 1999] generalized the existing algorithms and nicely
unified the concepts in a general framework.
All the empirical results obtained roughly coincide in a general figure: searching
on a Ziv-Lempel compressed text can take half the time of decompressing that text
and then searching it. However, the compressed search is twice as slow as just
searching the uncompressed version of the text. That is, the search algorithms are
useful if the text has to be kept compressed anyway, but they do not give an extra
reason to compress. The compression ratios are about 30% to 40% in practice when
a text is compressed using Ziv-Lempel.
A second paradigm is zero-order substitution methods. As explained, n = Θ(u) in
this model, and therefore the theoretical definition of compressed pattern matching
makes little sense because it is based in distinguishing O(u) from O(n) time. The
goals here, as well as the existing approaches, are more practical: search directly
the compressed text faster than the uncompressed text, taking advantage of its
smaller size.
A first text compression scheme that allowed direct searching on compressed text
was proposed by Manber [Manber 1997]. This approach packs pairs of frequent
characters in a single byte, leading to a compression ratio of approximately 70%
for typical text files.
A particularly successful trend inside zero-order substitution methods has been
Huffman coding where the text words are considered the symbols that compose
the text. The semi-static version of the model is used, that is, the frequencies of
the text symbols is learned in a first pass over the text and the text is coded in
a second pass. The table of codewords assigned to each symbol is stored together
with the compressed file. This model is better suited to typical information retrieval
scenarios on large text databases, mainly because the data structures can
be shared (the vocabulary of the text is almost the same as the symbol table of
the compressor), local decompression is efficient, and better compression and faster
search algorithms are obtained (it is possible to search faster on the compressed
than on the uncompressed text). The need for two passes over the text is normally
already present when indexing text in information retrieval applications, and the
overhead of storing the text vocabulary is negligible for large texts. On the other
hand, the approach is limited to word-based searching on large natural language
texts, unlike the Ziv-Lempel approach.
To this paradigm belongs [Turpin and Moffat 1997], a work developed independently
of our work. The paper presents an algorithm to search on texts compressed
by a word-based Huffman method, allowing only exact searching for one-word pat-
terns. The idea is to search for the compressed pattern codeword in the compressed
text.
Our work is based on a similar idea, but uses bytes instead of bits for the coding
alphabet. The use of bytes presents a small loss in the compression ratio and the
gains in decompression and search efficiency are large. We also extend the search
capabilities to phrases, classes of characters, wild cards, regular expressions, exactly
or allowing errors (also called "approximate string matching").
The approximate string matching problem is to find all substrings in a text
database that are at a given "distance" k or less from a pattern P . The distance
between two strings is the minimum number of insertions, deletions or substitutions
of single characters in the strings that are needed to make them equal. The case in
which corresponds to the classical exact matching problem.
Approximate string matching is a particularly interesting case of extended pattern
searching. The technique is useful to recover from typing, spelling and optical
character recognition errors. The problem of searching a pattern in a compressed
text allowing errors is an open problem in [Amir et al. 1996]. We partially solve
this problem, since we allow approximate word searching. That is, we can find text
words that match a pattern word with at most k errors. Note the limitations of this
statement: if a single error inserts a space in the middle of "flower", the result
is a sequence of two words, "flo" and "wer", none of which can be retrieved by
the pattern "flowers" allowing one error. A similar problem appears if a space
deletion converts "many flowers" into a single word.
The best known software to search uncompressed text with or without errors is
Agrep [Wu and Manber 1992]. We show that our compressed pattern matching
algorithms compare favorably against Agrep, being up to 8 times faster depending
on the type of search pattern. Of course Agrep is not limited to word searching and
does not need to compress the file prior to searching. However, this last argument
can in fact be used in the other direction: we argue that thanks to our search
algorithms and to new techniques to update the compressed text, the text files can
be kept compressed all the time and be decompressed only for displaying purposes.
This leads to an economy of space and improved overall efficiency.
For all the experimental results of this paper we used natural language texts
from the trec collection [Harman 1995]. We have chosen the following texts: ap -
Newswire (1989), doe - Short abstracts from DOE publications, fr - Federal Register
(1989), wsj - Wall Street Journal (1987, 1988, 1989) and ziff - articles from
Computer Selected disks (Ziff-Davis Publishing). Table 1 presents some statistics
about the five text files. We considered a word as a contiguous maximal string of
characters in the set fA: : :Z, a: : :z, 0: : :9g. All tests were run on a SUN SparcStation
4 with 96 megabytes of RAM running Solaris 2.5.1.
Files Text Vocabulary Vocab./Text
Size (bytes) #Words Size (bytes) #Words Size #Words
ap 237,766,005 38,977,670 1,564,050 209,272 0.65% 0.53%
doe 181,871,525 28,505,125 1,949,140 235,133 1.07% 0.82%
wsj 262,757,554 42,710,250 1,549,131 208,005 0.59% 0.48%
ziff 242,660,178 39,675,248 1,826,349 255,107 0.75% 0.64%
Table
1. Some statistics of the text files used from the trec collection.
3. THE COMPRESSION SCHEME
General compression methods are typically adaptive as they allow the compression
to be carried out in one pass and there is no need to keep separately the parameters
to be used at decompression time. However, for natural language texts used in a
full-text retrieval context, adaptive modeling is not the most effective compression
technique.
Following [Moffat 1989; Witten et al. 1999], we chose to use word-based semi-static
modeling and Huffman coding [Huffman 1952]. In a semi-static model the
encoder makes a first pass over the text to obtain the frequency of each different text
word and performs the actual compression in a second pass. There is one strong
reason for using this combination of modeling and coding. The data structures
associated with them include the list of words that compose the vocabulary of the
text, which we use to derive our compressed matching algorithm. Other important
Fig. 1. A canonical tree and a compression example using binary Huffman coding for spaceless
words.
reasons in text retrieval applications are that decompression is faster on semi-static
models, and that the compressed text can be accessed randomly without having
to decompress the whole text as in adaptive methods. Furthermore, previous experiments
have shown that word-based methods give good compression ratios for
natural language texts [Bentley et al. 1986; Moffat 1989; Horspool and Cormack
1992].
Since the text is not only composed of words but also of separators, a model must
also be chosen for them. In [Moffat 1989; Bell et al. 1993] two different alphabets
are used: one for words and one for separators. Since a strict alternating property
holds, there is no confusion about which alphabet to use once it is known that the
text starts with word or separator.
We use a variant of this method to deal with words and separators that we call
spaceless words. If a word is followed by a space, we just encode the word. If
not, we encode the word and then the separator. At decoding time, we decode a
word and assume that a space follows, except if the next symbol corresponds to a
separator. In this case the alternating property does not hold and a single coding
alphabet is used. This idea was firstly presented in [Moura et al. 1997], where it
is shown that the spaceless word model achieves slightly better compression ratios.
Figure
1 presents an example of compression using Huffman coding for spaceless
words method. The set of symbols in this case is f"a", "each", "is", "for",
"rose", ",t"g, whose frequencies are 2, 1, 1, 1, 3, 1, respectively.
The number of Huffman trees for a given probability distribution is quite large.
The preferred choice for most applications is the canonical tree, defined by Schwartz
and Kallick [Schwartz and Kallick 1964]. The Huffman tree of Figure 1 is a canonical
tree. It allows more efficiency at decoding time with less memory requirement.
Many properties of the canonical codes are mentioned in [Hirschberg and Lelewer
1990; Zobel and Moffat 1995; Witten et al. 1999].
3.1 Byte-Oriented Huffman Code
The original method proposed by Huffman [Huffman 1952] is mostly used as a
binary code. That is, each symbol of the input stream is coded as a sequence of
bits. In this work the Huffman codeword assigned to each text word is a sequence
of whole bytes and the Huffman tree has degree either 128 (in this case the eighth
bit is used as a special mark to aid the search) or 256, instead of 2. In all cases
from now on, except otherwise stated, we consider that
-the words and separators of the text are the symbols,
-the separators are codified using the spaceless word model,
-canonical trees are used,
-and the symbol table, which is the vocabulary of the different text words and
separators, is kept compressed using the classical binary Huffman coding on characters
We now define the different types of Huffman codes used in this work, all of which
adhere to the above points.
Binary Huffman Code A sequence of bits is assigned to each word or separator.
Byte Huffman Code A sequence of bytes is assigned to each word or separator.
This encompasses the two coding schemes that follow.
Plain Huffman Code A byte Huffman coding where all the bits of the bytes are
used. That is, the Huffman tree has degree 256.
Tagged Huffman Code A byte Huffman coding where only the 7 lower order
bits of each byte are used. That is, the Huffman tree has degree 128. The
highest bit of each byte is used as follows: the first byte of each codeword has
the highest bit in 1, while the other bytes have their highest bit in 0. This is
useful for direct searching on the compressed text, as explained later.
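For concreteness, the byte layout of a tagged Huffman codeword can be sketched as follows (assuming the canonical code construction already yields, for each word, its sequence of base-128 digits; these helpers are illustrative and not part of the paper):

    def tagged_codeword(digits):
        # digits: the base-128 digits of the codeword, each in 0..127
        return bytes([0x80 | digits[0]] + [d & 0x7F for d in digits[1:]])

    def is_codeword_start(byte):
        # the tag bit lets a scanner, or a decoder starting at an arbitrary
        # position, re-synchronize on the first byte of any codeword
        return (byte & 0x80) != 0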
All the techniques for efficient encoding and decoding mentioned in [Zobel and
Moffat 1995] can easily be extended to our case. As we show later in the experimental
results section no significant degradation of the compression ratio is experienced
by using bytes instead of bits. On the other hand, decompression of byte Huffman
code is faster than decompression of binary Huffman code. In practice, byte processing
is much faster than bit processing because bit shifts and masking operations
are not necessary at decoding time or at searching time.
3.2 Compression Ratio
In this section we consider the compression ratios achieved with this scheme. A
first concern is that Huffman coding needs to store, together with the compressed
file, a table with all the text symbols. As we use word compression, this table is
precisely the vocabulary of the text, that is, the set of all different text words. This
table can in principle be very large and ruin the overall compression ratio.
However, this is not the case on large texts. Heaps' Law [Heaps 1978], an empirical
law widely accepted in information retrieval, establishes that a natural language
text of O(u) words has a vocabulary of size v = O(u^β), for some 0 < β < 1. Typically,
β is between 0.4 and 0.6 [Ara'ujo et al. 1997; Moura et al. 1997], and therefore v is
close to O(√u).
Hence, for large texts the overhead of storing the vocabulary is minimal. On
the other hand, storing the vocabulary represents an important overhead when the
text is small. This is why we chose to compress the vocabulary (that is, the symbol
table) using classical binary Huffman on characters. As shown in Figure 2, this
fact makes our compressor better than Gzip for files of at least 1 megabyte instead
Fig. 2. Compression ratios for the wsj file compressed by Gzip, Compress, and plain Huffman
with and without compressing the vocabulary.
of . The need to decompress the vocabulary at search time poses
a minimal processing overhead which can even be completely compensated by the
reduced I/O.
A second concern is whether the compression ratio can or cannot worsen as the
text grows. Since in our model the number of symbols v grows (albeit sublinearly) as
the text grows, it could be possible that the average length to code a symbol grows
too. The key to prove that this does not happen is to show that the distribution of
words in the text is biased enough for the entropy 2 to be O(1), and then to show
that Huffman codes put only a constant overhead over this entropy. This final step
will be done for d-ary Huffman codes, which includes our 7-bit (tagged) and 8-bit
cases.
We use the Zipf's Law [Zipf 1949] as our model of the frequency of the words
appearing in natural language texts. This law, widely accepted in information
retrieval, states that if we order the v words of a natural language text in decreasing
order of probability, then the probability of the first word is i^θ times the probability
of the i-th word, for every i. This means that the probability of the i-th word is
p_i = 1/(i^θ H), where H = Σ_{j=1..v} 1/j^θ. The constant θ depends on the text.
Zipf's Law comes in two flavors. A simplified form assumes that θ = 1. In
this case, H = O(ln v). Although this simplified form is popular because it is
simpler to handle mathematically, it does not follow well the real distribution of
natural language texts. There is strong evidence that most real texts have in fact
a more biased vocabulary. We performed in [Ara'ujo et al. 1997] a thorough set
of experiments on the trec collection, finding out that the θ values are roughly
between 1.5 and 2.0 depending on the text, which gives experimental evidence
in favor of the "generalized Zipf's Law" (i.e., θ > 1). Under this assumption, H = O(1).
1 The reason why both Ziv-Lempel compressors do not improve for larger texts is in part because
they search for repetitions only in a relatively short window of the text already seen. Hence, they
are prevented from exploiting most of the already processed part of the text.
2 We estimate the zero-order word-based binary entropy of a text as −Σ_{i=1..v} p_i log₂ p_i, where
p_i is the relative frequency of the i-th vocabulary word. For simplicity we call this measure just
"entropy" in this paper.
We have tested the distribution of the separators as well, finding that they also
follow reasonably well a Zipf's distribution. Moreover, their distribution is even
more biased than that of words, being θ closer to 1.9. We therefore consider in the analysis
only words, since an analogous proof will hold for separators.
On the other hand, more refined versions of Zipf's Law exist, such as the Mandelbrot
distribution [Gonnet and Baeza-Yates 1991]. This law tries to improve the
fit of Zipf's Law for the most frequent values. However, it is mathematically harder
to handle and it does not alter the asymptotic results that follow.
We analyze the entropy E(d) of such a distribution for a vocabulary of v words
when d digits are used in the coding alphabet, as follows:

E(d) = Σ_{i=1..v} p_i log_d(1/p_i) = (1/H) Σ_{i=1..v} (θ ln i + ln H) / (i^θ ln d).

Bounding the summation with an integral, we have that Σ_{i=1..v} (ln i)/i^θ = O(1) for θ > 1,
which allows us to conclude that E(d) = O(1), since log_d H is also O(1).
If we used the simple Zipf's Law instead, the result would be that E(d) =
O(log v), i.e., the average codeword length would grow as the text grows. The
fact that this does not happen for 1 gigabyte of text is an independent experimental
confirmation of the validity of the generalized Zipf's Law against its simple
version.
We consider the overhead of Huffman coding over the entropy. Huffman coding
is not optimal because of its inability to represent fractional parts of bits. That
is, if a symbol has probability p i , it should use exactly log 2 (1=p i ) bits to represent
the symbol, which is not possible if p i is not a power of 1=2. This effect gets worse
if instead of bits we use numbers in base d. We give now an upper bound on the
compression inefficiency involved.
In the worst case, Huffman will encode each symbol with probability p_i using
⌈log_d(1/p_i)⌉ digits. This is a worst case because some symbols are encoded using
⌊log_d(1/p_i)⌋ digits. Therefore, in the worst case the average length of a codeword
in the compressed text is

Σ_{i=1..v} p_i ⌈log_d(1/p_i)⌉ ≤ Σ_{i=1..v} p_i (1 + log_d(1/p_i)) = 1 + E(d),
which shows that, regardless of the probability distribution, we cannot spend more
than one extra digit per codeword due to rounding overheads. For instance, if we
use bytes we spend at most one more byte per word.
This proves that the compression ratio will not degrade as the text grows, even
when the number of different words and separators increases.
Table
2 shows the entropy and compression ratios achieved for binary Huffman,
plain Huffman, tagged Huffman, Gnu Gzip and Unix Compress for the files of
the trec collection. As can be seen, the compression ratio degrades only slightly
by using bytes instead of bits and, in that case, we are still below Gzip. The
exception is the fr collection, which includes a large part of non-natural language
such as chemical formulas. The compression ratio of the tagged Huffman code is
approximately 3 points (i.e. 3% of u) over that of plain Huffman, which comes from
the extra space allocated for the tag bit in each byte.
Method Files
ap wsj doe ziff fr
Entropy 26.20 26.00 24.60 27.50 25.30
Binary Huffman 27.41 27.13 26.25 28.93 26.88
Plain Huffman 31.16 30.60 30.19 32.90 30.14
Tagged Huffman 34.12 33.70 32.74 36.08 33.53
Gzip 38.56 37.53 34.94 34.12 27.75
Compress 43.80 42.94 41.08 41.56 38.54
Table
2. Compression ratios achieved by different compression schemes, where "entropy" refers
to optimal coding. The space used to store the vocabulary is included in the Huffman compression
ratios.
3.3 Compression and Decompression Performance
Finally, we consider in this section the time taken to compress and decompress the
text.
To compress the text, a first pass is performed in order to collect the vocabulary
and its frequencies. By storing it in a trie data structure, O(u) total worst case
time can be achieved. Since a trie requires non practical amounts of memory, we
use a hash table to perform this step in our implementation. The average time to
collect the vocabulary using a hash table is O(u). The vocabulary is then sorted
by the word frequencies at O(v log v) cost, which in our case is O(u^β log u).
After the sorting, we generate a canonical Huffman code of the vocabulary words.
The advantage of using canonical trees is that they are space economic. A canonical
tree can be represented by using only two small tables with size O(log v). Further,
previous work has shown that decoding using canonical codes reduces decompression
times [Hirschberg and Lelewer 1990; Zobel and Moffat 1995; Turpin and Moffat
1997]. The canonical code construction can be done at O(v) cost, without using
any extra space by using the algorithm described in [Moffat and Katajainen 1995].
Finally, the file is compressed by generating the codeword of each text word, which
is again O(u).
Decompression starts by reading the vocabulary into memory at O(v) cost, as well
as the canonical Huffman tree at O(log v) cost. Then each word in the compressed
text is decoded and its output written on disk, for a total time of O(u).
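The decoding loop itself is byte oriented; a minimal sketch follows (using an explicit code trie for clarity, whereas canonical-code implementations use small tables instead; names are ours):

    class CodeNode:
        def __init__(self):
            self.child = {}        # byte value -> CodeNode
            self.word = None       # the decoded word, set only at leaves

    def decode_words(compressed, root):
        words, node = [], root
        for byte in compressed:    # one trie step per input byte, no bit shifts or masking
            node = node.child[byte]
            if node.word is not None:
                words.append(node.word)
                node = root
        return words

The spaceless-words rule of the previous section is then applied to the decoded symbol stream to reinsert the implicit blanks.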
Table
3 shows the compression and decompression times achieved for binary
Huffman, plain Huffman, tagged Huffman, Compress and Gzip for files of the trec
collection. In compression, we are 2-3 times faster than Gzip and only 17% slower
than Compress (which achieves much worse compression ratios). In decompression,
there is a significant improvement when using bytes instead of bits. This is because
no bit shifts nor masking are necessary. Using bytes, we are more than 20% faster
than Gzip and three times faster than Compress.
Method
Compression Decompression
ap wsj doe ziff fr ap wsj doe ziff fr
Binary Huff. 490 526 360 518 440 170 185 121 174 151
Plain Huff. 487 520 356 515 435 106 117 81 112 96
Tagged Huff. 491 534 364 527 446 112 121 85 116 99
Compress 422 456 308 417 375 367 407 273 373 331
Gzip 1333 1526 970 1339 1048 147 161 105 139 111
Table
3. Compression and decompression times (in elapsed seconds for the whole collections)
achieved by different compression schemes.
The main disadvantage of word-based Huffman methods is the space requirement
to both compress and decompress the text. At compression time they need
the vocabulary and a look up table with the codewords that is used to speed up
the compression. The Huffman tree is constructed without any extra space by using
an in-place algorithm [Moffat and Katajainen 1995; Milidiu et al. 1998]. At decompression
time we need to store the vocabulary in main memory. Therefore
the space complexities of our methods are O(u^β). The methods used by Gzip and
Compress have constant space complexity and the amount of memory used can
be configured. So, our methods are more memory-demanding than Compress and
Gzip, which constitutes a drawback for some applications. For example, our methods
need 4.7 megabytes of memory to compress and 3.7 megabytes of memory to
decompress the wsj file, while Gzip and Compress need only about 1 megabyte
to either compress or decompress this same file. However, for the text searching
systems we are interested in, the advantages of our methods (i.e. allowing efficient
exact and approximate searching on the compressed text and fast decompression
of fragments) are more important than the space requirements.
4. SEARCHING ON TAGGED HUFFMAN COMPRESSED TEXT
Our first searching scheme works on tagged Huffman compressed texts. We recall
that the tagged Huffman compression uses one bit of each byte in the compressed
text to mark the beginning of each codeword.
General Huffman codes are prefix free codes, which means that no codeword is
a prefix of another codeword. This feature is sufficient to decode the compressed
text, but it is not sufficient to allow direct searching for compressed words, due to
the possibility of false matches. To see this problem, consider the word "ghost"
in the example presented in Figure 3. Although the word is not present on the
compressed text, its codeword is.
The false matches are avoided if in the compressed text no codeword prefix is
a suffix of another codeword. We add this feature to the tagged Huffman coding
scheme by setting to 1 the highest bit of the first byte of each codeword (this bit is
Fig. 3. An example where the codeword of a word is present in the compressed text but the word
is not present in the original text. Codewords are shown in decimal notation.
the "tag"). Since a compressed pattern can now only match its first byte against the
first byte of a codeword in the text, we know that any possible match is correctly
aligned. This permits the use of any conventional text searching algorithm directly
on the compressed text, provided we search for whole words.
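As a sketch of this idea (illustrative Python; vocab_code maps a vocabulary word to its tagged codeword, and bytes.find stands in for the Boyer-Moore-type scan used in the paper):

    def search_word(word, vocab_code, compressed):
        cw = vocab_code.get(word)            # compressed pattern: a short byte string
        if cw is None:
            return []                        # word absent from the vocabulary, hence from the text
        hits, pos = [], compressed.find(cw)
        while pos != -1:
            hits.append(pos)                 # tag bits guarantee codeword alignment
            pos = compressed.find(cw, pos + 1)
        return hits

With plain Huffman code the same scan would also need a verification step around each candidate, since there is no tag bit to rule out false matches.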
In general we are able to search phrase patterns. A phrase pattern is a sequence
of elements, where each element is either a simple word or an extended pattern.
Extended patterns, which are to be matched against a single text word, include the
ability to have any set of characters at each position, unbounded number of wild
cards, arbitrary regular expressions, approximate searching, and combinations. The
Appendix gives a detailed description of the patterns supported by our system.
The search for a pattern on a compressed text is made in two phases. In the
first phase we compress the pattern using the same structures used to compress the
text. In the second phase we search for the compressed pattern. In an exact pattern
search, the first phase generates a unique pattern that can be searched with any
conventional searching algorithm. In an approximate or extended pattern search,
the first phase generates all the possibilities of compressed codewords that match
with the original pattern in the vocabulary of the compressed text. In this last case
we use a multi-pattern algorithm to search the text. We now explain this method
in more detail and show how to extend it for phrases.
4.1 Preprocessing Phase
Compressing the pattern when we are performing an exact search is similar to
the coding phase of the Huffman compression. We search for each element of the
pattern in the Huffman vocabulary and generate the compressed codeword for it.
If there is an element in the pattern that is not in the vocabulary then there are
no occurrences of the pattern in the text.
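A rough sketch of this coding phase, assuming the vocabulary is available as a hash
table from words to codewords (names are ours):
def compress_pattern(pattern_words, vocab_codeword):
    # vocab_codeword: dict mapping each vocabulary word to its codeword (bytes).
    # Returns the concatenation of the codewords of the pattern words, or None
    # when some word is absent (then the pattern cannot occur in the text).
    pieces = []
    for word in pattern_words:
        code = vocab_codeword.get(word)
        if code is None:
            return None
        pieces.append(code)
    return b"".join(pieces)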
If we are doing approximate or extended search then we need to generate compressed
codewords for all symbols in the Huffman vocabulary that match with the
element in the pattern. For each element in the pattern we make a list of the
compressed codewords of the vocabulary symbols that match with it. This is done
by sequentially traversing the vocabulary and collecting all the words that match
the pattern. This technique has already been used in block addressing indices on
uncompressed texts [Manber and Wu 1993; Araújo et al. 1997; Baeza-Yates and
Navarro 1997]. Since the vocabulary is very small compared to the text size, the
sequential search time on the vocabulary is negligible, and there is no additional
cost to allow complex queries. This is very difficult to achieve with online
plain text searching; here we can do it because we take advantage of the knowledge
of the vocabulary stored as part of the Huffman tree.
Depending on the pattern complexity we use two different algorithms to search
the vocabulary. For phrase patterns allowing k errors (k ≥ 0) that contain sets
of characters at any position we use the algorithm presented in [Baeza-Yates and
Navarro 1999]. If v is the size of the vocabulary and w is the length of a word W,
the algorithm runs in O(v + w) time to search for W. For more complicated patterns
allowing errors (k ≥ 0) that contain unions, wild cards or regular expressions we
use the algorithm presented in [Wu and Manber 1992], which runs in O(kv + w)
time to search for W. A simple word is searched in O(w) time using, e.g., a hash table.
4.2 Searching Phase
For exact search, after obtaining the compressed codeword (a sequence of bytes)
we can choose any known algorithm to process the search. In the experimental
results presented in this paper we used the Sunday [Sunday 1990] algorithm, from
the Boyer-Moore family, which has good practical performance. In the case of approximate
or extended searching we convert the problem to the exact multipattern
searching problem. We just obtain a set of codewords that match the pattern and
use a multipattern search algorithm proposed by Baeza-Yates and Navarro [Baeza-
Yates and Navarro 1999]. This algorithm is an extension of the Sunday algorithm,
and works well when the number of patterns to search is not very large. For a
large number of patterns, the best option would be Aho-Corasick [Aho and
Corasick 1975], which allows searching in O(n) time independently of the number
of patterns.
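For reference, a minimal Python sketch of a Sunday-type scan over the compressed
bytes; this is a textbook rendering, not the exact implementation used in the
experiments reported below.
def sunday_search(text, pat):
    # text, pat: bytes. Returns the position of the first occurrence of pat,
    # or -1. The shift is taken from the byte just past the current window.
    m, n = len(pat), len(text)
    shift = {b: m - i for i, b in enumerate(pat)}   # last occurrence wins
    i = 0
    while i + m <= n:
        if text[i:i + m] == pat:
            return i
        if i + m == n:
            return -1
        i += shift.get(text[i + m], m + 1)
    return -1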
If we assume that the compressed codeword of a pattern of length m has c bytes, then
Boyer-Moore type algorithms inspect about n/c bytes of the compressed text in
the best case. This best case is very close to the average case because the alphabet
is large (of size 128 or 256) and uniformly distributed, as compared to the small
pattern length c (typically 3 or 4). On the other hand, the best case in uncompressed
text searching is to inspect u/m characters. Since the compression ratio n/u should
roughly hold for the pattern on average, we have that n/u ≈ c/m and therefore
the number of inspected bytes in compressed and uncompressed text is roughly the
same.
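To make the argument concrete with illustrative numbers (only the 30% ratio and the
typical codeword length come from the text; the pattern length is an assumption): if
n/u ≈ 0.3 and the pattern is a word of m = 10 characters whose codeword has c = 3
bytes, the compressed scan inspects about n/3 bytes, while the uncompressed scan
inspects about u/10 = (n/0.3)/10 ≈ n/3 characters, i.e., essentially the same amount.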
There are, however, three reasons that make compressed search faster. First, the
number of bytes read from disk is n, which is smaller than u. Second, in compressed
search the best case is very close to the average case, while this is not true when
searching uncompressed text. Third, the argument that says that c/m is close
to n/u assumes that the search pattern is taken randomly from the text, while in
practice a model of selecting it randomly from the vocabulary matches reality much
better. This model yields a larger c value on average, which improves the search
time on compressed text.
Searching a phrase pattern is more complicated. A simple case arises when the
phrase is a sequence of simple words that is to be found as is (even with the same
separators). In this case we can concatenate the codewords of all the words and
separators of the phrase and search for the resulting (single) pattern.
If, on the other hand, we want to disregard the exact separators between phrase
elements or they are not simple words, we apply a different technique. In the
general case, the original pattern is represented by the sequence of lists L 1 , . . . , L j ,
where L i has the compressed codewords that match the i-th element of the original
pattern. To start the search in the compressed text we choose one of these lists and
use the algorithm for one-word patterns to find the occurrences in the text. When
an occurrence of one element of the first list searched is found, we use the other
lists to verify if there is an occurrence of the entire pattern at this text position.
The choice of the first list searched is fundamental for the performance of the
algorithm. We heuristically choose the element i of the phrase that maximizes the
minimal length of the codewords in L i . This choice comes directly from the cost
to search a list of patterns. Longer codewords have a lower probability of occurrence
in the text, which translates into fewer verifications for occurrences of elements of
the other lists. Moreover, most text searching algorithms work faster on longer
patterns. This type of heuristic is also of common use in inverted files when solving
conjunctive queries [Baeza-Yates and Ribeiro-Neto 1999; Witten et al. 1999].
A particularly bad case for this filter arises when searching a long phrase formed
by very common words, such as "to be or not to be". The problem gets worse
if errors are allowed in the matches or we search for even less stringent patterns. A
general and uniform cost solution to all these types of searches is depicted in the
next section.
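A small sketch of this selection heuristic, assuming each phrase element has already
been mapped to its list of candidate codewords (names are ours):
def pick_list_to_search(lists):
    # lists[i] holds the candidate codewords (bytes) for the i-th phrase element.
    # Return the index of the element whose shortest codeword is longest; that
    # list is searched first and the other lists are used only for verification.
    return max(range(len(lists)),
               key=lambda i: min(len(c) for c in lists[i]))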
5. SEARCHING ON PLAIN HUFFMAN COMPRESSED TEXT
A disadvantage of our first searching scheme described before is the loss in compression
due to the extra bit used to allow direct searching. A second disadvantage is
that the filter may not be effective for some types of queries. We show now how to
search in the plain Huffman compressed text, a code that has no special marks and
gives a better compression ratio than the tagged Huffman scheme. We also show
that much more flexible searching can be carried out in an elegant and uniform
way.
We present two distinct searching algorithms. The first one, called plain filterless,
is an automaton-based algorithm that elegantly handles all possible complex cases
that may arise, albeit slower than the previous scheme. The second, called plain
filter, is a combination of both algorithms, trying to do direct pattern matching
on plain Huffman compressed text and using the automaton-based algorithm as a
verification engine for false matches.
5.1 The Automaton-Based Algorithm
As in the previous scheme, we make heavy use of the vocabulary of the text, which
is available as part of the Huffman coding data. The Huffman tree can be regarded
as a trie where the leaves are the words of the vocabulary and the path from the root
to a leaf spells out its compressed codeword, as shown in the left part of Figure 4
for the word "rose".
We first explain how to solve exact words and phrases and then extend the
idea for extended and approximate searching. The pattern preprocessing consists
of searching for it in the vocabulary as before and marking the corresponding entry.
In general, however, the patterns are phrases. To preprocess phrase patterns we
simply perform this procedure for each word of the pattern. For each word of the
vocabulary we set up a bit mask that indicates which elements of the pattern the
word matches. Figure 4 shows the marks for the phrase pattern "rose is", where
01 indicates that the word "is" matches the second element in the pattern and 10
indicates that the word "rose" matches the first element in the pattern (all the
other words have 00 since they match nowhere). If any word of the pattern is not
found in the vocabulary we immediately know that the pattern is not in the text.
Fig. 4. The searching scheme for the pattern "rose is". In this example the word "rose" has a
three-byte codeword 47 131 8. In the nondeterministic finite automaton, '?' stands for 0 and 1.
Next, we scan the compressed text, byte by byte, and at the same time traverse
the Huffman tree downwards, as if we were decompressing the text 3 . A new symbol
occurs whenever we reach a leaf of the Huffman tree. At each word symbol obtained
we send the corresponding bit mask to a nondeterministic automaton, as illustrated
in Figure 4. This automaton allows moving from state i to state i + 1 whenever the
i-th word of the pattern is recognized. Notice that this automaton depends only
on the number of words in the phrase query. After reaching a leaf we return to the
root of the tree and proceed in the compressed text.
The automaton is simulated with the Shift-Or algorithm [Baeza-Yates and Gonnet
1992]. We perform one transition in the automaton for each text word. The Shift-Or
algorithm efficiently simulates the nondeterministic automaton using only two
operations per transition. In a 32-bit architecture it can search a phrase of up to
32 elements using a single computer word as the bit mask. For longer phrases we
use as many computer words as needed.
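A minimal sketch of this scanning phase, assuming a helper that traverses the Huffman
tree and yields one vocabulary word identifier per codeword (here called decoded_words,
our name). We use the Shift-And formulation, which is equivalent to Shift-Or with
complemented bit masks:
def phrase_search(decoded_words, mask, j):
    # decoded_words: iterable of vocabulary word identifiers, one per codeword
    # (produced by the Huffman-tree traversal described above).
    # mask[w]: bit i set iff vocabulary word w matches the (i+1)-th phrase element.
    # j: number of elements in the phrase (here at most the machine word size).
    R = 0
    for pos, w in enumerate(decoded_words):
        R = ((R << 1) | 1) & mask.get(w, 0)
        if R & (1 << (j - 1)):
            print("phrase occurrence ending at word", pos)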
For complex patterns the preprocessing phase corresponds to a sequential search
in the vocabulary to mark all the words that match the pattern. To search the
symbols in the vocabulary we use the same algorithms described in Section 4.1.
The corresponding mask bits of each matched word in the vocabulary are set to
indicate its position in the pattern. Figure 5 illustrates this phase for the pattern
"ro# rose is" allowing 1 error per word (where "ro#" means
any word starting with "ro"). For instance, the word "rose" in the vocabulary
matches the pattern at positions 1 and 2. The compressed text scanning phase does
not change.
The cost of the preprocessing phase is as in Section 4.1. The only difference is
that we mark bit masks instead of collecting matching words. The search phase
takes O(n) time.
3 However, this is much faster than decompression because we do not generate the uncompressed
text.
Finally, we show how to deal with separators and stopwords.
Fig. 5. General searching scheme for the phrase "ro# rose is" allowing 1 error. In the nondeterministic
finite automaton, '?' stands for 0 and 1.
Most online searching algorithms cannot efficiently deal with the problem of matching a phrase disregarding
the separators among words (e.g. two spaces between words instead of
one). The same happens with the stopwords, which usually can be disregarded
when searching indexed text but are difficult to disregard in online searching. In
our compression scheme we know which elements of the vocabulary correspond in
fact to separators, and the user can define (at compression or even at search time)
which correspond to stopwords. We can thus mark the leaves of the
Huffman tree corresponding to separators and stopwords, so that the searching algorithm
ignores them by not producing a symbol when arriving at such leaves.
Therefore, we disregard separators and stopwords from the sequence and from the
search pattern at negligible cost. Of course they cannot be just removed from the
sequence at compression time if we want to be able to recover the original text.
5.2 A Filtering Algorithm
We show in this section how the search on the plain Huffman compressed text is
improved with respect to the automaton-based algorithm described in the previous section.
The central idea is to search the compressed pattern directly in the text, as was
done with the tagged Huffman code scheme presented in Section 4.
Every time a match is found in the compressed text we must verify whether this
match indeed corresponds to a word. This is mandatory due to the possibility
of false matches, as illustrated in Figure 3 of Section 4. The verification process
consists of applying the automaton-based algorithm to the region where the possible
match was found. To avoid processing the text from the very beginning to make
this verification we divide the text in small blocks of the same size at compression
time. The codewords are aligned to the beginning of blocks, so that no codeword
crosses a block boundary. Therefore, we only need to run the basic algorithm from
the beginning of the block that contains the match.
The block size must be small enough so that the slower basic algorithm is used
only on small areas, and large enough so that the extra space lost at block boundaries
is not significant. We ran a number of experiments on the wsj file, arriving
at 256-byte blocks as a good time-space tradeoff.
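Under these assumptions, the filter can be sketched as follows; verify_from stands
for the automaton-based verification started at a block boundary, and the 256-byte
block size is the one chosen above:
BLOCK = 256   # block size fixed at compression time

def filtered_search(compressed, codeword, verify_from):
    # Direct byte-wise search for the codeword, followed by verification of
    # each candidate from the beginning of its block (codewords never cross
    # block boundaries, so decoding can safely start there).
    hits = []
    pos = compressed.find(codeword)
    while pos != -1:
        if verify_from(compressed, (pos // BLOCK) * BLOCK, pos):
            hits.append(pos)
        pos = compressed.find(codeword, pos + 1)
    return hits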
The extension of the algorithm for complex queries and phrases follows the same
idea: search as in Section 4 and then use the automaton-based algorithm to check
the matches.
Fig. 6. A nondeterministic automaton for approximate phrase searching (4 words, 2 errors) in the
compressed text. Dashed transitions flow without consuming any text input. The other vertical
and diagonal (unlabeled) transitions accept any bit mask. The '?' stands for 0 and 1.
In this case, however, we use multipattern searching, and the performance
may be degraded not only for the same reasons as in Section 4, but also
because of the possibility of verifying too many text blocks. If the number of matching
words in the vocabulary is too large, the filter loses effectiveness
and the filterless scheme might be preferable.
5.3 Even More Flexible Pattern Matching
The Shift-Or algorithm can do much more than just searching for a simple sequence
of elements. For instance, it has been enhanced to search for regular expressions,
to allow errors in the matches and other flexible patterns [Wu and Manber 1992;
Baeza-Yates and Navarro 1999]. This powerful type of search is the basis of the
software Agrep [Wu and Manber 1992].
A handful of new choices appears when we use these abilities in our word-based
compressed text scenario. Consider the automaton of Figure 6. It can search in the
compressed text for a phrase of four words allowing up to two insertions, deletions
or replacements of words. Apart from the well known horizontal transitions that
match words, there are vertical transitions that insert new words in the pattern,
diagonal transitions that replace words, and dashed diagonal transitions that delete
words from the pattern.
This automaton can be efficiently simulated using extensions of the Shift-Or algorithm
to search in the compressed text for approximate occurrences of the phrase.
For instance, the search of "identifying potentially relevant matches" could
find the occurrence of "identifying a number of relevant matches" in the
text with one replacement error, assuming that the stop words "a" and "of" are
disregarded as explained before. Moreover, if we allow three errors at the character
level as well we could find the occurrence of "who identified a number of
relevant matches" in the text, since for the algorithm there is an occurrence of
"identifying" in "identified". Other efficiently implementable setups can be
insensitive to the order of the words in the phrase. The same phrase query could be
found in "matches considered potentially relevant were identified" with
one deletion error for "considered". Finally, proximity searching is of interest in
IR and can be efficiently solved. The goal is to give a phrase and find its words relatively
close to each other in the text. This would permit finding the occurrence
of "identifying and tagging potentially relevant matches" in the text.
Approximate searching has traditionally operated at the character level, where it
aims at recovering the correct syntax from typing or spelling mistakes, errors coming
from optical character recognition software, misspelling of foreign names, and so
on. Approximate searching at the word level, on the other hand, aims at recovering
the correct semantics from concepts that are written with a different wording. This
is quite usual in most languages and is a common factor that prevents finding the
relevant documents.
This kind of search is very difficult for a sequential algorithm. Some indexed
schemes permit proximity searching by operating on the list of exact word positions,
but this is all. In the scheme described above, this is simple to program, elegant and
extremely efficient (more so than at the character level). This is an exclusive feature of this
compression method that opens new possibilities aimed at recovering the intended
semantics, rather than the syntax, of the query. Such capability may improve the
retrieval effectiveness of IR systems.
6. SEARCHING PERFORMANCE
The performance evaluation of the three algorithms presented in previous sections
was obtained by considering 40 randomly chosen patterns containing 1 word, 40
containing 2 words, and 40 containing 3 words. The same patterns were used by
the three search algorithms. All experiments were run on the wsj text file and the
results were obtained with a 99% confidence interval. The size of the uncompressed
wsj is 262.8 megabytes, while its compressed versions are 80.4 megabytes with the
plain Huffman method and 88.6 megabytes with tagged Huffman.
Table 4 presents exact and approximate searching times
using Agrep [Wu and Manber 1992], tagged (direct search on tagged Huffman),
plain filterless (the basic algorithm on plain Huffman), and plain filter (the filter
on plain Huffman, with Sunday filtering for blocks of 256 bytes). It can be seen
from this table that our three algorithms are almost insensitive to the number of
errors allowed in the pattern while Agrep is not. The plain filterless algorithm
is really insensitive because it maps all the queries to the same automaton that
does not depend on k. The filters start at about 2/3 of the time of the filterless version,
and become closer to it as k grows. The experiments also show that both tagged
and plain filter are faster than Agrep, almost twice as fast for exact searching and
nearly 8 times faster for approximate searching. For all times presented, there is
a constant I/O time factor of approximately 8 seconds for our algorithms to read
the wsj compressed file and approximately 20 seconds for Agrep to read the wsj
uncompressed file. These times are already included on all tables.
The following test was for more complex patterns. This time we experimented
with specific patterns instead of selecting a number of them at random. The reason
is that there is no established model for what is a "random" complex pattern.
Instead, we focused on showing the effect of different pattern features, as follows:
Algorithm          k = 0          k = 1          k = 2          k = 3
Agrep              23.8 ± 0.38    117.9 ± 0.14   146.1 ± 0.13   174.6 ± 0.16
tagged             14.1 ± 0.18    15.0 ± 0.33    17.0 ± 0.71    22.7 ± 2.23
plain filterless   22.1 ± 0.09    23.1 ± 0.14    24.7 ± 0.21    25.0 ± 0.49
plain filter       15.1 ± 0.30    16.2 ± 0.52    19.4 ± 1.21    23.4 ± 1.79
Table 4. Searching times (in elapsed seconds) for the wsj text file using different search techniques
and different numbers of errors k. Simple random patterns were searched.
(1) prob# (where # means any character considered zero or more times, one possible
answer being "problematic"): an example of a pattern that matches a lot of
words in the vocabulary;
(2) local television stations, a phrase pattern composed of common words;
(3) hydraulic forging, a phrase pattern composed of uncommon words;
(4) Bra[sz]il# and Ecua#, a phrase pattern composed of a complex expression.
Table 5 presents the searching
times for the patterns presented above.
Algorithm          Pattern 1                  Pattern 2
                   k = 0   k = 1   k = 2      k = 0   k = 1   k = 2
Agrep              74.3    117.7   146.0      23.0    117.6   145.1
tagged             18.4    20.6    21.1       16.5    19.0    26.0
plain filterless   22.8    23.5    23.6       21.1    23.3    25.5
plain filter       21.4    21.4    22.1       15.2    17.1    22.3

Algorithm          Pattern 3                  Pattern 4
                   k = 0   k = 1   k = 2      k = 0   k = 1   k = 2
Agrep              21.9    117.1   145.1      74.3    117.6   145.8
tagged             14.5    15.0    16.0       18.2    18.3    18.7
plain filterless   21.7    21.5    21.6       24.2    24.2    24.6
plain filter       15.0    15.7    16.5       17.6    17.6    18.0
Table 5. Searching times (in elapsed seconds) for the wsj text file using different search techniques
and different numbers of errors k.
Note that, in any case, the results on complex patterns do not differ much from
those for simple patterns. Agrep, on the other hand, takes much more time on
complex patterns such as pattern (1) and pattern (4).
7. CONCLUSIONS AND FUTURE WORK
In this paper we investigated a fast compression and decompression scheme for natural
language texts and also presented algorithms which allow efficient search for
exact and extended word and phrase patterns. We showed that we achieve a
compression ratio of about 30%, against 40% and 35% for Compress and Gzip, respectively.
For typical texts, compression times are close to the times of Compress and approximately
half the times of Gzip, and decompression times are lower than those
of Gzip and one third of those of Compress.
Search times are better on the compressed text than on the original text (about
twice as fast). Moreover, a lot of flexibility is provided in the search patterns.
Complex patterns are searched much faster than on uncompressed text (8 times
faster is typical) by making heavy use of the vocabulary information kept by the
compressor.
The algorithms presented in this paper have been implemented in a software
system called Cgrep, which is publicly available. An example of the power of Cgrep is
the search of a pattern containing 3 words and allowing 1 error, in a compressed file
of approximately 80.4 megabytes (corresponding to the wsj file of 262.8 megabytes).
Cgrep runs at 5.4 megabytes per second, which is equivalent to searching the original
text at 17.5 megabytes per second. As Agrep searches the original text at 2.25
megabytes per second, Cgrep is 7.8 times faster than Agrep.
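As a consistency check on these figures: 262.8/80.4 ≈ 3.27, so scanning the compressed
file at 5.4 megabytes per second corresponds to 5.4 × 3.27 ≈ 17.6 megabytes per second
of original text (the 17.5 figure above), and 17.5/2.25 ≈ 7.8, which is the reported
speedup over Agrep.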
These results are so good that they encourage keeping the text compressed all
the time. That is, all the textual documents of a user or a database can be kept
permanently compressed as a single text collection. Searching of interesting documents
can be done without decompressing the collection, and fast decompression
of relevant files for presentation purposes can be done efficiently. To complete this
picture and convert it into a viable alternative, a mechanism to update a compressed
text collection must be provided, so documents can be added, removed and altered
efficiently. Some techniques have been studied in [Moura 1999], where it is shown
that efficient updating of compressed text is possible and viable.
Finally, we remark that sequential searching is not a viable solution when the text
collections are very large, in which case indexed schemes have to be considered. Our
technique is not only useful to speed up sequential search. In fact, it can be used
with any indexed scheme. Retrieved text is usually scanned to find the byte position
of indexed terms and our algorithms will be of value for this task [Witten et al.
1999]. In particular, it can also be used to improve indexed schemes that combine
inverted files and sequential search, like Glimpse [Manber and Wu 1993]. Glimpse
divides the text space into logical blocks and builds an inverted file where each list
of word occurrences points to the corresponding blocks. Searching is done by first
searching in the vocabulary of the inverted file and then sequentially searching in
all the selected blocks. By using blocks, indices of only 2%-4% of space overhead
can significantly speed up the search. We have combined our compression scheme
with block addressing inverted files, obtaining much better results than those that
work on uncompressed text [Navarro et al. 2000].
ACKNOWLEDGMENTS
We wish to acknowledge the many fruitful discussions with Marcio D. Araújo,
who helped particularly with the algorithms for approximate searching in the text
vocabulary. We also thank the many comments of the referees that helped us to
improve this work.
A. COMPLEX PATTERNS
We present the types of phrase patterns supported by our system. For each word
of a pattern the system allows not only single letters, but any set of
letters or digits (called just "characters" here) at each position, exactly or allowing
errors, as follows:
-range of characters (e.g. t[a-z]xt, where [a-z] means any letter between a and
z);
-arbitrary sets of characters (e.g. t[aei]xt meaning the words taxt, text and
tixt);
-complements (e.g. t[ab]xt, where ab means any single character except a or
b; t[a-d]xt, where a-d means any single character except a, b, c or d);
-arbitrary characters (e.g. t\Deltaxt means any character as the second character of
the word);
-case insensitive patterns (e.g. Text and text are considered as the same words).
In addition to single strings of arbitrary size and classes of characters described
above, the system supports patterns combining exact matching of some of their
parts and approximate matching of other parts, unbounded number of wild cards,
arbitrary regular expressions, and combinations, exactly or allowing errors, as follows
-unions (e.g. t(e-ai)xt means the words text and taixt; t(e-ai)*xt means
the words beginning with t followed by e or ai zero or more times followed by
xt). In this case the word is seen as a regular expression;
-arbitrary number of repetitions (e.g. t(ab)*xt means that ab will be considered
zero or more times). In this case the word is seen as a regular expression;
-arbitrary number of characters in the middle of the pattern (e.g. t#xt, where #
means any character considered zero or more times). In this case the word is not
considered as a regular expression for efficiency. Note that # is equivalent to \Delta*
(e.g. t#xt and t\Delta*xt obtain the same matchings but the latter is considered as
a regular expression);
-combining exact matching of some of their parts and approximate matching of
other parts (e.g. !te?xt, meaning an exact occurrence of te followed by an
occurrence of xt with 1 error);
-matching with nonuniform costs (e.g. the cost of insertions can be defined to be
twice the cost of deletions).
We emphasize that the system performs whole-word matching only. That is, the
pattern is a sequence of words or complex expressions that are to be matched against
whole text words. It is not possible to write a single regular expression that returns
a phrase. Also, the extension described in Section 5.3 is not yet implemented.
--R
Efficient string matching: an aid to bibliographic search.
Communications of the ACM
Second IEEE Data Compression Conference (March
Let sleeping files lie: pattern matching in z-compressed files
Large text searching allowing errors.
A new approach to text searching.
Block addressing indices for approximate text retrieval.
Faster approximate string matching.
Modern Information Retrieval.
Data compression in full-text retrieval systems
A locally adaptive data compression scheme.
String matching in Lempel-Ziv compressed strings
Handbook of Algorithms and Data Structures.
Overview of the third text retrieval conference.
Information Retrieval - Computational and Theoretical Aspects
Efficient decoding of prefix codes.
Constructing word-based text compression algorithms
A method for the construction of minimum-redundancy codes
A unifying framework for compressed pattern matching.
Multiple pattern matching in lzw compressed text.
A text compression scheme that allows fast searching directly in the compressed file.
Glimpse: a tool to search through entire file systems.
Technical Report 93-34 (October)
Aplicações de Compressão de Dados a Sistemas de Recuperação de Informação.
Indexing compressed text.
Direct pattern matching on compressed text.
Fast searching on compressed text allowing errors.
Adding compression to block addressing inverted indices.
A general practical approach to pattern matching over Ziv-Lempel compressed text
Generating a canonical prefix encoding.
A very fast substring search algorithm.
Fast file search using text compression.
Managing Gigabytes (second
Fast text searching allowing errors.
Human Behaviour and the Principle of Least Effort.
On the complexity of finite sequences.
A universal algorithm for sequential data compression.
Compression of individual sequences via variable-rate coding
IEEE Transactions on Information Theory
Adding compression to a full-text retrieval system
--TR
A locally adaptive data compression scheme
Word-based text compression
Efficient decoding of prefix codes
Text compression
A very fast substring search algorithm
Handbook of algorithms and data structures
A new approach to text searching
Fast text searching
Data compression in full-text retrieval systems
Adding compression to a full-text retrieval system
String matching in Lempel-Ziv compressed strings
Let sleeping files lie
A text compression scheme that allows fast searching directly in the compressed file
Block addressing indices for approximate text retrieval
Fast searching on compressed text allowing errors
Efficient string matching
Generating a canonical prefix encoding
Information Retrieval
Modern Information Retrieval
Adding Compression to Block Addressing Inverted Indexes
In-Place Calculation of Minimum-Redundancy Codes
Shift-And Approach to Pattern Matching in LZW Compressed Text
A General Practical Approach to Pattern Matching over Ziv-Lempel Compressed Text
A Unifying Framework for Compressed Pattern Matching
Multiple Pattern Matching in LZW Compressed Text
--CTR
Falk Scholer , Hugh E. Williams , John Yiannis , Justin Zobel, Compression of inverted indexes For fast query evaluation, Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, August 11-15, 2002, Tampere, Finland
Robert P. Cook, Heuristic compression of an English word list: Research Articles, Software Practice & Experience, v.35 n.6, p.577-581, May 2005
Dana Shapira , Ajay Daptardar, Adapting the Knuth-Morris-Pratt algorithm for pattern matching in Huffman encoded texts, Information Processing and Management: an International Journal, v.42 n.2, p.429-439, March 2006
Nivio Ziviani , Edleno Silva de Moura , Gonzalo Navarro , Ricardo Baeza-Yates, Compression: A Key for Next-Generation Text Retrieval Systems, Computer, v.33 n.11, p.37-44, November 2000
Shmuel T. Klein , Dana Shapira, Pattern matching in Huffman encoded texts, Information Processing and Management: an International Journal, v.41 n.4, p.829-841, July 2005
R. Yugo Kartono Isal , Alistair Moffat , Alwin C. H. Ngai, Enhanced word-based block-sorting text compression, Australian Computer Science Communications, v.24 n.1, p.129-137, January-February 2002
Vo Ngoc Anh , Alistair Moffat, Inverted Index Compression Using Word-Aligned Binary Codes, Information Retrieval, v.8 n.1, p.151-166, January 2005
Edleno S. de Moura , Célia F. dos Santos , Daniel R. Fernandes , Altigran S. Silva , Pavel Calado , Mario A. Nascimento, Improving Web search efficiency via a locality based static pruning method, Proceedings of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Alistair Moffat , R. Yugo Kartono Isal, Word-based text compression using the Burrows-Wheeler transform, Information Processing and Management: an International Journal, v.41 n.5, p.1175-1192, September 2005
Nieves R. Brisaboa , Antonio Fariña , Gonzalo Navarro , José R. Paramá, Efficiently decodable and searchable natural language adaptive compression, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Kimmo Fredriksson , Szymon Grabowski, A general compression algorithm that supports fast searching, Information Processing Letters, v.100 n.6, p.226-232, 31 December 2006
Kimmo Fredriksson, On-line Approximate String Matching in Natural Language, Fundamenta Informaticae, v.72 n.4, p.453-466, December 2006
Kimmo Fredriksson , Jorma Tarhio, Efficient String Matching in Huffman Compressed Texts, Fundamenta Informaticae, v.63 n.1, p.1-16, January 2004
Gonzalo Navarro , Jorma Tarhio, LZgrep: a Boyer-Moore string matching tool for Ziv-Lempel compressed text: Research Articles, Software Practice & Experience, v.35 n.12, p.1107-1130, October 2005
Gonzalo Navarro , Nieves Brisaboa, New bounds on D-ary optimal codes, Information Processing Letters, v.96 n.5, p.178-184, December 2005
Gonzalo Navarro, Regular expression searching on compressed text, Journal of Discrete Algorithms, v.1 n.5-6, p.423-443, October
Juha Kärkkäinen , Gonzalo Navarro , Esko Ukkonen, Approximate string matching on Ziv-Lempel compressed text, Journal of Discrete Algorithms, v.1 n.3-4, p.313-338, June
Joaquín Adiego , Gonzalo Navarro , Pablo de la Fuente, Using structural contexts to compress semistructured text collections, Information Processing and Management: an International Journal, v.43 n.3, p.769-790, May, 2007
P. Ferragina , F. Luccio , G. Manzini , S. Muthukrishnan, Compressing and searching XML data via two zips, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Adam Cannane , Hugh E. Williams, A general-purpose compression scheme for large collections, ACM Transactions on Information Systems (TOIS), v.20 n.3, p.329-355, July 2002
Andrei Arion , Angela Bonifati , Ioana Manolescu , Andrea Pugliese, XQueC: A query-conscious compressed XML database, ACM Transactions on Internet Technology (TOIT), v.7 n.2, p.10-es, May 2007
Marcos Andr Gonalves , Edward A. Fox , Layne T. Watson , Neill A. Kipp, Streams, structures, spaces, scenarios, societies (5s): A formal model for digital libraries, ACM Transactions on Information Systems (TOIS), v.22 n.2, p.270-312, April 2004 | natural language text compression;word-based Huffman coding;compressed pattern matching;word searching |
348857 | Ontological Approach for Information Discovery in Internet Databases. | The Internet has solved the age-old problem of network connectivity and thus enabling the potential access to, and data sharing among large numbers of databases. However, enabling users to discover useful information requires an adequate metadata infrastructure that must scale with the diversity and dynamism of both users' interests and Internet accessible databases. In this paper, we present a model that partitions the information space into a distributed, highly specialized domain ontologies. We also introduce inter-ontology relationships to cater for user-based interests across ontologies defined over Internet databases. We also describe an architecture that implements these two fundamental constructs over Internet databases. The aim of the proposed model and architecture is to eventually facilitate data discovery and sharing for Internet databases. | Introduction
The emergence of the Internet [?] and the World Wide Web (WWW) [?] have been
among the most important developments in the computer industry. Particularly,
the Web has brought a wave of new users and service providers to the Internet. It
is now the most popular distributed information repository. This globalization has
also spurred the development of tools and aids to navigate and share information
in corporate intranets that previously were only accessible on-line at prohibitive
costs.
Organizations all over the world rely on a wide variety of databases to conduct
their everyday business. Databases are usually designed from scratch if none is
found to meet their requirements. This has led to a proliferation of databases
obeying different sets of requirements modeling the same situations. In many in-
stances, and because of a lack of any organized conglomeration of databases, users
create their own pieces of information that may exist in current databases.
Though it may be known where a certain piece of information is stored, locating it
may be prohibitive in terms of cost and effort. There was also a renewed interest in
* This work was done when the first and third authors were at Queensland University of Technology
sharing information across heterogeneous platforms because of the readily available
and relatively cheap network connectivity. Although one may potentially access all
participating databases, in reality this is an almost intractable task due to various
fundamental problems [?, ?]. The challenge is to give a user the sense that he or
she is accessing a single database that contains almost everything he or she needs.
To allow effective and efficient data sharing on the Web, there is a need for an
infrastructure that can support flexible tools for information space organization,
communication facilities, information discovery, content description, and assembly
of data from heterogeneous sources (conversion of data, reconciliation of incompatible
syntax and semantics, integration of distributed information, etc). Old
techniques for manipulating these sources are not appropriate and efficient. Users
must be provided with tools for the logical scalable exploration of such systems
in a three step process involving: (i) Location of appropriate information sources;
(ii) Searching of these sources for relevant information items; (iii) Understanding of
the structure, terminology and patterns of use of these information items for data
integration, and ultimately, querying.
An approach for achieving interoperation in the context of large and dynamic environments
was to use a common global ontology shared by users and information
sources [?]. This ontology can capture the structure and semantics of the information
space. This can be achieved, for example, using the Entity-Relationship or
Frame-based models [?]. In general, in existing work, the global ontology acts as a
global conceptual schema that is used to formulate queries as if the user has to deal
with one single database schema. It is however difficult to create and maintain such
common global ontology because of the autonomy and heterogeneity aspects of the
underlying repositories. In the case of a large numbers of autonomous databases, a
meaningful organization and segmentation of databases, based on simple ontologies
that describe coherent slices of the information space, need to be introduced. These
would filter interactions, accelerate information search, and allow for the sharing of
data in a tractable manner.
In order to address problems of information discovery and data sharing for Internet
databases, the WebFINDIT prototype has been developed [?] [?]. We base
our design on previous work on the FINDIT project [?, ?], an information brokering
system that addresses issues of interoperability in very large multidatabases.
The fundamental premise is that in a dynamic environment such as the Web, users
would have to be incrementally made aware of what is available in terms of both
information and information repositories.
In our approach, ontologies of information repositories are established through a
simple domain ontology. This meta-information represents the domain of interest of
the underlying information repositories. For example, collection of databases that
store information about the same topic are grouped together. Individual databases
join and leave the formed ontologies at their own discretion. Ontology formation
and maintenance, as well as exploration of the relationship structure, occurs via a
special-purpose language called WebTassili.
The WebFINDIT prototype provides a scalable and portable architecture using
the latest in distributed object and Web technologies, including CORBA as a
distributed computing platform, Java, and connectivity gateways to access native
information sources. CORBA provides support for communication between software
components of a distributed environment, dynamic location and integration
of information sources while maintaining their autonomy. Java allows our system
to be deployed dynamically over the Web and provides users with sophisticated,
system-independent, and interactive interface.
The remainder of this paper is organized as follows. Related work is discussed
in Section 2. In Section 3, we present the WebFINDIT's approach for information
space organization and modeling. In Sections 4 and 5 respectively, we overview the
metadata and language support for WebFINDIT. Details of the implementation
are given in Section 6 along with a scenario describing the use of WebFINDIT in a
healthcare application. We provide some concluding remarks in Section 7.
2. Background
We describe here some projects that make use of ontologies or domain models for
data sharing in the context of heterogeneous information sources.
Multidatabases (e.g., UniSQL [?] and Pegasus [?]) have traditionally investigated
static approaches to sharing data among small numbers of component databases.
This has involved finding solutions to data heterogeneity and facets of autonomy.
These solutions usually rely on centralized database administrators to document
database semantics or to develop translators that hide differences in query languages
and database structures. However, in the context of intranets and Internet
environments, users should have a way to locate information in large spaces of
information. Also, users have a need to be educated about the information of in-
terest. Any static solution to such a problem is bound to fail as the information
space in environments like the Internet has a staggeringly rapid evolution [?]. In
multidatabase systems, the emphasis has been more on conflict resolution among
different schemas and data models in small networks of heterogeneous databases.
Multiple ontologies and scalability issues are usually not explicitly addressed.
In most information retrieval systems, the emphasis is usually on how to build
indexing schemes to efficiently access information given some hints about the resource
[?]. A similar approach to information retrieval systems is taken in Internet
information gathering systems (e.g., GlOSS [?]), WWW search tools [?] (e.g., Lycos
and MetaCrawler), and database-like languages for the WWW (e.g., W3QL [?]
and WebSQL [?]). In general, in this area the emphasis is on the improvement of
indexing techniques. Issues like the information space organization, terminological
problems, and semantic support for users requests are not addressed. In addition,
text search techniques are inadequate in the context of structured data (e.g., no
support for complex queries that involve operations like join).
The Carnot project [?] addresses the integration of distributed and heterogeneous
information sources (e.g., database systems, expert systems, business work-
flows, etc). It uses a knowledge base, called Cyc, to store information about the
global schema. The InfoSleuth project [?] is the successor of Carnot and presents an
approach for information retrieval and processing in a dynamic Web-based environ-
4ment. It integrates agent technology, domain ontologies, and information brokering
to handle the interoperation of data and services over information networks. Different
types of agents are proposed to represent users, information sources and the
system itself. These agents communicate with each other by using the Knowledge
Query and Manipulation Language (KQML). Users specify queries over specified
ontologies via an applet-based user interface. Although this system provides an
architecture that deals with scalable information networks, it does not provide facilities
for user education and information space organization. InfoSleuth supports
the use of several domain ontologies, however, inter-ontology relationships are not
considered. Also, this system overburdens the network resources by transmitting all
requests to a central server that provides an overall knowledge of system ontologies.
Information Manifold (IM) [?] is a system that provides uniform access to collections
of heterogeneous information sources on the Web. This system was the
first that used a mechanism to describe declaratively the contents and query capabilities
of information sources. Sources descriptions are used to efficiently prune
the set of information sources for a given query and to generate executable query
plans. The main components of IM are the domain model, the plan generator, and
the execution engine. The domain model is the knowledge base that describes the
browsable information space including the vocabulary of a domain, the contents of
information sources and the capability of querying. The plan generator is used to
compute an executable query plan based on the descriptions of information sources.
The domain model constitutes the global ontology shared by users and information
sources. Such ontologies are difficult to create and maintain due to the variety and
characteristics of the underlying Web repositories.
The DIOM system architecture [?] achieves interoperability in heterogeneous
information systems by matching the consumer's query profile and the information
producer's source profiles. Both are described in terms of the DIOM interface
definition language (DIOM IDL). The consumer's query profile captures the querying
interests of the consumer and the preferred query result representation. The
producer's source profiles describe the content and query capabilities of individual
information sources. A wrapper controls and facilitates external access to the
wrapped source by using local metadata. The focus on DIOM is more on pruning
irrelevant sources when resolving a query. A similar approach to Information Manifold
is used to describe information sources. The use of multiple ontologies is not
considered.
OBSERVER [?] [?] is a multi-purpose architecture for information brokering.
One of the major issues addressed is the vocabulary differences across the components
systems. OBSERVER features the use of pre-existing domain specific ontologies
(ontology server) to describe the terms used for the construction of domain
specific metadata. Relationships across terms in different ontologies are supported.
In addition, OBSERVER performs brokering at the metadata and vocabulary levels.
OBSERVER does not provide a straightforward approach for information brokering
in defining mappings from the ontologies to the underlying information sources. It
should be noted that OBSERVER does not provide facilities to help or train users
during query processing.
The SIMS project aims to achieve the integration of multiple information sources
(databases or knowledge bases) [?]. The integration is based on the Loom knowledge
representation language. The architecture is similar to the tightly coupled federated
databases. SIMS transforms information sources of any data model into SIMS
domain model which is a declarative knowledge base. Thus, SIMS domain model
is equivalent to the common data model in the federated databases. The main
contribution of SIMS is on query processing over one single ontology [?]. Issues
related to the use of multiple ontologies and their relationships are not considered.
The COntext INterchange (COIN) project [?] aims at providing intelligent semantic
integration among heterogeneous sources such as relational databases, Web
documents and receivers. The proposed approach is based on the unambiguous description
of the assumptions made at each component (how information should be
interpreted). The assumptions pertaining to a source or receiver form its context.
COIN architecture is context mediator based. The context mediator is responsible
for the detection of semantic conflicts between the contexts of the information
sources and receivers and the conversion needed to resolve them. A user query
is reformulated into subqueries that can be forwarded to appropriate information
sources for execution. The results obtained from the component systems are combined
and converted to the context of the receiver that initiated the query. COIN
uses shared ontologies (conceptualization of the underlying domains) as the basis
for context comparisons and interoperation. The issues of information discovery
and information space organization are not considered.
There are major differences between WebFINDIT and systems described above.
WebFINDIT aims to provide a single interface to access all Web-accessible databases.
This interface is based on an architecture that organizes the information space into
simple distributed ontologies and provides relationships among them. Users are incrementally
educated about the available information space in order to allow them
to query the underlying databases. One of the greatest strengths of our approach
is extensibility. Compared to the approaches that use ontologies for information
brokering, the addition of new information is simpler in WebFINDIT. While the
mappings between sources and domain models require great efforts in these ap-
proaches, our underlying distributed ontology design principles make it easier to
construct and manage ontologies. In WebFINDIT, an information source can be
associated with many classes in different clusters via instantiation relationships as
we use object oriented stratification of clusters. More importantly, we provide a
seamless Internet-based implementation of the WebFINDIT infrastructure.
3. Information Space Organization and Modeling
The WebFINDIT approach is mainly motivated by the fact that in a highly dynamic
and constantly growing network of databases accessible through the Web,
there is a need for a meaningful organization and segmentation of the information
space. We adopt an ontology-based organization of the diverse databases to filter
interactions, accelerate information searches, and allow for the sharing of data in a
tractable manner. Key criteria that have guided our approach are: scalability, design
simplicity, and easy to use structuring mechanisms based on object-orientation.
The information space is organized through distributed domain ontologies. Information
sources join and leave a given ontology based on their domain of interest
which represent some portion of the information space. For example, information
sources that share the topic Medical Research are linked to the same ontology.
This topic-based ontology provides the terminology for formulating queries involving
a specific area of interest. Such organization aims to reduce the overhead of
locating and querying information in large networks of databases. As an information
source may contain information related to more than one domain of interest,
it may be linked to more than one ontology at the same time.
The different ontology formed on the above principle are not isolated entities but
they can be related to each other by inter-ontology relationships. These relationships
are created based on the users' needs. They allow a user query to be resolved
by information sources in remote ontologies when it cannot be resolved locally.
We do not intend to achieve an automatic "reconciliation" between heterogeneous
ontologies. In our system, users are incrementally educated about the available
information space by browsing the local ontology and by following the inter-ontology
relationships. In this way, they have sufficient information to query actual data.
3.1. Domain Models
Each ontology is specialized to a single common area of interest. It provides domain
specific information and terms for interacting within the ontology and its underlying
databases. That is providing an abstraction of a specific domain. This abstraction
is intended to be used by users and other ontologies as a description of the specific
domain. Ontologies dynamically clump databases together based on common areas
of interest into a single atomic unit. This generates a conceptual space which has
a specific content and a scope. The formation, dissolution and modification of an
ontology is a semi-automatic process. Privileged users (e.g., the database admin-
are provided with tools to maintain the different ontologies mainly on a
negotiation basis.
Instead of considering a simple membership of information sources to an ontology,
intra-ontology relationships between these sources are considered. This allows a
more flexible and precise querying within an ontology. These relationships form a
hierarchy of classes (an information type based classification hierarchy) inside an
ontology. In that respect, users can refine their queries by browsing the different
classes of an ontology.
3.2. Inter-ontology Relationships
When a user submits a query to the local ontology, it might be not resolvable locally.
In this case, the system try to find remote ontologies that can eventually resolve
the query. In order to allow such query "migration", inter-ontology relationships
are dynamically established between two ontologies based on users' needs. Inter-ontology
relationships can be viewed as a simplified way to share information with
low overhead. The amount of sharing in an inter-ontology relationship will typically
involve a minimum amount of information exchange.
Although the above relationships involve basically only ontologies, they are extended
to databases as well. This allows more flexibility in the organization and
querying of the information space. Inter-ontology relationships are of three types
(see
Figure
??). The first type involves a relationship between two ontologies to exchange
information. The second type involves a relationship between two databases.
The third type involves a relationship between an ontology and a database. An
inter-ontology relationship between two ontologies involves providing a general description
of the information that is to be shared. Likewise, an inter-ontology relationship
between two databases also involves providing a general description of
information that databases would like to share. The third alternative is a relationship
between an ontology and a database. In this case, the database (or the
ontology) provides a general description of the information it is willing to share
with the ontology (or database). The difference between these three alternatives
lies in the way queries are resolved. In the first and third alternative (when the
information provider is an ontology), the providing ontology takes over to further
resolve the query. In the second case, however, the user is responsible for contacting
the providing database in order to gain knowledge about the information.
Our dynamic distributed ontologies make information accessing more tractable
by limiting the number of databases which must interact. Databases join and
leave ontologies and inter-ontology relationships based upon local requirements and
constraints. At any given point in time a single database may partake in several
ontologies and inter-ontology relationships.
We believe that a complete reconciliation between all the information sources
accessible through the Web is not a tractable problem. In our approach, there is
no automatic translation between different ontologies. The users are incrementally
educated about the available information space. They discover and become familiar
with the information sources that are effectively relevant. They can submit precise
queries which guarantee that only relevant answers are returned. On the other
hand, information sources simply join our distributed ontologies by providing some
local information and choosing one or more ontologies that meet their interests.
In addition, this join does not involve major modifications in the overall system -
we only need to make some changes at the metadata level related to the involved
ontologies. This allows our system to scale easily and to be queried in a simple and
flexible way.
3.3. Information Sources Modeling
When a database decides to join WebFINDIT, it has to define which areas are of
interest for it. Links are then established to ontologies implementing these concepts
if any, otherwise a negotiation may be engaged with other databases to form new
ontologies. The database administrator must provide an object-oriented view of the
underlying database.
Figure 1. Distributed Ontologies in the Medical World. (The figure shows the Medical, Research, Medical Insurance, Superannuation, and Medical Workers Union ontologies; member databases such as RBH, Prince Charles, Medicare, Medibank, MBF, ATO, CentreLink, QUT, RMIT, Qld Cancer Fund, Ambulance, RBH Workers Union and State Govt. Funding; and inter-ontology relationships such as ATO_to_Medicare, SGF_to_Medical, Ambulance_to_Medical and Medical_to_MedicalInsurance.)
This view contains the terms of interest available from that
database. These terms provide the interface that can be used to communicate with
the database. More specifically, this view consists of one or several types (called
access interface of a database) containing the exported operations (attributes and
functions) and a textual description of these operations. The membership of a
database to an ontology is materialized by the fact that the database is an instance
of one or many classes in the same or different ontologies. We should also note that
other useful information is provided by the database administrator (see Section 4).
To illustrate how a database is modeled in WebTassili and how it is related to the
domain model, consider the Queensland Cancer Fund database, which is a member
of the ontology Research (see Figure 1). Let us assume that this database is an
instance of a class in the ontology Research. It represents, for example, an mSQL
database that contains the following relations:
CancerClassify(Cancer Id, Scientific Name, Common Name,
Infection Area, Cause Known, Hereditary)
ResearchGroup(Group Id, Cancer Id, Start Date, Supervisor Id)
Staff(Staff Id, Title, Name, Location, Phone, Research Field)
GroupOwnership(Ownership Id, Group Id, Staff Id,
Date Commenced, Date Completed)
Funding(Funding Id, Group Id, Provider Name, Amount,
Conditions)
If the database administrator decides to make public some information related to
some of the above relations, they have to be advertised by specifying the information
type to be published as follows:
Type Funding {
attribute string CancerClassify.CommonName;
function real Amount(string CancerClassify.CommonName);
}
Type Results {
attribute string Staff.Name;
attribute int GroupOwnership.DateCommenced;
function string Description(string Staff.Name,
int GroupOwnership.DateCommenced);
}
Note that the textual explanations of the attributes and operations are left out of
the description for clarity. Each attribute denotes a relation field and each function
denotes an access routine to the database. The implementation of these features
is transparent to the user. For instance, the function Description() denotes the
access routine that returns the description of all results obtained by a staff member
after a given date. This routine is written in mSQL's C interface. In the case of
an object-oriented database, an attribute denotes a class attribute and a function
denotes either a class method or an access routine. Using WebFINDIT, users
can locate the database, then investigate its exported interface and fetch useful
attributes and functions to access the database.
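As an illustration only, a comparable access routine, the Amount() function of the Funding type above, might be written in Java with JDBC as sketched below; the actual routine uses mSQL's C interface, and the underscore column spellings (Common_Name, Cancer_Id, Group_Id) as well as the class name are our assumptions.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class FundingAccess {
    // Sketch of the exported function real Amount(string CancerClassify.CommonName),
    // joining the CancerClassify, ResearchGroup and Funding relations listed above.
    public static double amount(Connection con, String commonName) throws SQLException {
        String sql =
            "SELECT c.Amount " +
            "FROM CancerClassify a, ResearchGroup b, Funding c " +
            "WHERE a.Common_Name = ? " +
            "AND a.Cancer_Id = b.Cancer_Id " +
            "AND b.Group_Id = c.Group_Id";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, commonName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getDouble(1) : 0.0;
            }
        }
    }
}

A call such as amount(con, "Lung Cancer") would then play the role of the exported Amount("Lung Cancer") function.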
3.4. Documentations
In addition to the information types that represent the domains of interest of the
database, WebFINDIT documents the database in a way that users can understand
its contents and behavior. This provides a richer and understandable description
of the database. The documentation mainly consists of a set of demonstrations
about what an information type is and what it offers. This demonstration may
be textual or graphical depending on the information being exported. Even if
two databases contain the same information, the information may exhibit different
behaviors depending on the selected database. This is to be expected as a product
may display different properties depending on who the exporter is.
One of the major problems that research solutions to heterogeneity have not adequately
addressed is the problem of understanding the structure and behavior of
different types of information as represented in various databases. Research in this
area has largely been concerned with the static aspects of schema integration. The
idea has been to present users with one uniform view or schema, built on top of
several other schemas [?, ?]. Database administrators are responsible for understanding
the different schemas and then translating them into a schema understood
by local users. It is important to note that this process is a reasonable solution
only if there is a small number of schemas that are fairly similar. In fact, there is a
problem of understanding information even if the data model is the same across all
participating databases. The problem is not acute when the number of databases
is small, as it would be reasonable to assume that enough interaction between designers
would solve the problem of understanding information. This could not work
in an environment like Internet databases.
4. Metadata Support - Co-databases
Co-databases are introduced as a means for implementing our distributed ontology
concept and as an aid to inter-site data sharing. These are metadata repositories
that surround each local DBMS and know a system's capabilities and
functionality. Formation of information space relationships (i.e., ontologies and
inter-ontology relationships) and maintenance as well as exploration of these relationships
occur via a special-purpose language called WebTassili. An overview of
the WebTassili language is presented in Section 5.
Locating a set of databases that fits user queries requires detailed information
about the content of each database in the system. To avoid the problem of centralized
administration of information, meta-information repositories are distributed
over information networks. In our approach, each participating database has a co-
database attached to it. A co-database (meta-information repository) is an object-oriented
database that stores information about its associated database, ontologies
and inter-ontology relationships of this database. A set of databases exporting a
certain type of information is represented by a class in the co-database schema.
This also means that an ontology is represented by a class or a hierarchy of classes
(i.e., information type based on a classification hierarchy).
A typical co-database schema contains subschemas that represent ontologies and
inter-ontology relationships that deal with specific types of information (see Figure
2). The first subschema consists of a tree of classes where each class represents
a set of databases that can answer queries about a specialized type of information.
This subschema represents ontologies. The class Ontologies Root forms the root
of the ontologies tree. Every subclass of the class Ontologies Root represents the root
of an ontology tree. Every node in that tree represents a specific information type.
An ontology is hierarchically organized in the form of a tree, so that an information
type has a number of subordinate information types and at most one superior
information type. This organization allows an ontology to be structured according
to a specialization relationship. For instance, the class Research could have two
subclasses: Cancer Research and Child Research. The classes joined in the ontology
tree support each other in answering queries directed to them. If a user query
conforms better with the information type of a given subclass, then the query will
be forwarded to this subclass. If no classes are found in the ontology tree while
handling a user query, then either the user simplifies the query or the query is
forwarded to other ontologies (or databases) via inter-ontology relationships. The
splitting of an ontology into smaller units increases the efficiency of searching for
information types.
Figure 2. The outline of a typical co-database schema. (The schema is rooted in a Co-database Root Class with two subtrees: an Ontologies Root Class containing the Ontology 1 to Ontology n Root Classes, and an Inter-Ontology Root Class whose Ontologies Relationships and Database Relationships Root Classes cover the Ontology-Ontology, Ontology-Database and Database-Database Classes.)
The co-database also contains another type of subschema. This subschema consists,
on the one hand, of a subschema of inter-ontology relationships that involve
the ontology the database is a member of and, on the other hand, of a subschema
of inter-ontology relationships that involve the database itself. Each of these subschemas
consists in turn of two subclasses that respectively describe inter-ontology
relationships with databases and inter-ontology relationships with other ontologies.
In particular, every class in an ontology tree contains a description about the
participating databases and a description about the type of information they con-
tain. Description of the databases will include information about the data model,
operating system, query language, etc. Description of the information type will
include its general structure and behavior. We should also mention that the documentation
(demo) associated with each information instance is stored in actual
databases. This is done for two reasons: (1) Database autonomy is maintained and,
(2) documentations can be modified with little or no overhead on the associated
co-databases.
The class Ontology Root contains the generic attributes that are inherited by all
classes in the ontology tree. A subset of the attributes of the class Ontology Root
is:
Class Ontology Root {
attribute string Information-type;
attribute set(string) Synonyms;
attribute string DBMS;
attribute string Operating-system;
attribute string Query-language;
attribute set(string) Sub-information-types;
attribute set(Inter-ontology Root) Inter-ontology Relationships;
attribute set(Ontology Root) Members;
}
The attribute Information-type represents the name of the information-
type (e.g., "Research" for all instances of the class Research). The attribute
Synonyms describes the set of alternative descriptions of each information-
type. Users can use these descriptions to obtain databases
that provide information about the associated information type. The attribute
Sub-information-types describes the specialization relationship. The other attributes
are self-explanatory.
Every sub-class of the class Ontology Root has some specific attributes that describe
the domain model of the related set of underlying databases. These attributes
do not necessarily correspond directly to the objects described in any particular
database. For example, a subset of the attributes of the class Research (Project
and Grant are two types defined elsewhere) is:
Class Research Isa Ontology Root {
attribute string Subject;
attribute set(Project) Projects;
attribute set(Grant) Fundings;
}
In Figure 2, the class Inter-Ontology Root contains the generic attributes
that are relevant to all types of inter-ontology relationships. These relationships
can be used to answer queries when the local ontology cannot answer them. An
inter-ontology relationship can be seen as an intersection (or overlap) relationship
between the related entities. Synonyms and generalization/specialization can be
seen as intra-ontology relationships compared to the inter-ontology relationships.
A subset of attributes of the class Inter-Ontology Root is:
Class Inter-Ontology Root {
attribute set(string) Description;
attribute string Point-of-entry;
attribute string Source;
attribute string Target;
}
The attribute Description contains the information type that can be provided
using the inter-ontology relationship. Assume that the user queries the ontology
Medical about Medical Insurance. The use of the synonyms and generaliza-
tion/specialization relationships fails to answer the user query. However, the ontology
Medical has an inter-ontology relationship with the ontology Insurance where
the value of the attribute Description is {"Health Insurance", "Medical
Insurance"}. It is clear that this inter-ontology relationship provides the answer
to the user query. The attribute Point-of-entry represents the name of the co-
database that must be contacted to answer the query. The attributes Source and
Target are self-explanatory.
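To make the resolution step concrete, the following Java sketch (the class and field names are ours and simply mirror the attributes above) shows how a requested information type that cannot be matched against an ontology's Information-type, Synonyms or Sub-information-types could be forwarded using the Description and Point-of-entry attributes of its inter-ontology relationships.

import java.util.List;

class InterOntologyLink {                       // mirrors class Inter-Ontology Root
    List<String> description;                   // e.g. {"Health Insurance", "Medical Insurance"}
    String pointOfEntry;                        // co-database to contact
}

class OntologyNode {                            // mirrors class Ontology Root
    String informationType;
    List<String> synonyms;
    List<OntologyNode> subTypes;                // Sub-information-types
    List<InterOntologyLink> interOntology;      // Inter-ontology Relationships

    // Returns "local" if the ontology tree itself covers the requested type,
    // the Point-of-entry of a matching inter-ontology relationship otherwise,
    // or null if the request cannot be resolved here.
    String resolve(String term) {
        if (informationType.equalsIgnoreCase(term) || synonyms.contains(term)) {
            return "local";
        }
        for (OntologyNode sub : subTypes) {      // descend the specialization tree
            String hit = sub.resolve(term);
            if (hit != null) return hit;
        }
        for (InterOntologyLink link : interOntology) {
            if (link.description.contains(term)) {
                return link.pointOfEntry;        // query "migration" to a remote ontology
            }
        }
        return null;
    }
}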
So far, we have presented the metadata model used to describe the ontologies and
their relationships. In what follows we will present how a particular database is
described and linked to its ontologies. For example, the co-database attached to
the Royal Brisbane Hospital contains information about all related ontologies
and inter-ontology relationships. As the Royal Brisbane Hospital is a member
of two ontologies Research and Medical, it stores information about these two
ontologies. This co-database contains also information about other ontologies and
databases that have a relationship with these two ontologies and the database itself.
The co-database stores information about the inter-ontology relationships State
Government Funding and Medical Insurance. It also stores access information
of the Royal Brisbane Hospital database, which includes the exported interface
and the Internet address. The interface of a database consists of a set of types
containing the exported operations and a textual description of these types. The
database will be advertised through the co-database by specifying the information
type, the documentation (a file containing multimedia data or a program that plays
a product demonstration), and the access information which includes its location,
the wrapper (a program allowing access to data in the database), and the set of
exported types.
Information Source Royal Brisbane Hospital {
Information Type "Research and Medical"
Documentation "http://www.medicine.qut.edu.au/RBH"
Location "dba.icis.qut.edu.au"
Wrapper "dba.icis.qut.edu.au/WebTassiliOracle"
Interface ResearchProjects, PatientHistory
}
The URL "http://www.medicine.qut.edu.au/RBH" contains the documentation
about Royal Brisbane Hospital database. It contains any type of presentation
accessible through the Web (e.g., a Java applet that plays a video clip).
WebTassiliOracle is the wrapper needed to access data in the Oracle database using
a WebTassili query. The exported interface contains two types that will be
advertised as explained in the previous section for the Queensland Cancer Fund
database.
The interface of a database can be used to query data stored in this database.
This is possible only after locating this database as a relevant information source.
As pointed out before each sub-class of the class Ontology Root has a set of attributes
that describe the domain model of the underlying databases. In fact, these
attributes can also be used to query data stored in the underlying databases. How-
ever, these attributes do not correspond directly to attributes in database interfaces.
For this reason, we define the relationships between information source types and
ontology domain model attributes. We call these relationships mappings. More
specifically, there exists one mapping for each type in the interface of a database.
The mappings are used to translate an ontology query into a set of queries to the
relevant databases. Note that an attribute (or a function) in the type of a database
may be related to different attributes (or functions) in different ontologies. For
example, the attribute Patient.name of the type PatientHistory may be related
to two domain attributes, one (a1) in the ontology Medical and another (a2) in
the ontology Research. This can be described as follows:
Mapping PatientHistory {
attribute string Patient.Name Is Research.a1, Medical.a2;
}
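A minimal sketch, under the assumption of a simple table-based representation, of how such a mapping could be stored and used to rewrite an ontology-level attribute into the attributes of a database's exported type; the TypeMapping class and the a1/a2 identifiers are illustrative only.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One mapping per exported type: ontology domain attribute -> interface attribute(s).
class TypeMapping {
    final String interfaceType;                               // e.g. "PatientHistory"
    private final Map<String, List<String>> toTypeAttributes = new HashMap<>();

    TypeMapping(String interfaceType) { this.interfaceType = interfaceType; }

    void relate(String ontologyAttribute, List<String> typeAttributes) {
        toTypeAttributes.put(ontologyAttribute, typeAttributes);
    }

    // Rewrites an ontology-level attribute (e.g. "Medical.a2") into the
    // attributes of this database's exported type, if the mapping covers it.
    List<String> rewrite(String ontologyAttribute) {
        return toTypeAttributes.getOrDefault(ontologyAttribute, List.of());
    }

    public static void main(String[] args) {
        TypeMapping m = new TypeMapping("PatientHistory");
        m.relate("Research.a1", List.of("Patient.Name"));
        m.relate("Medical.a2", List.of("Patient.Name"));
        System.out.println(m.rewrite("Medical.a2"));          // prints [Patient.Name]
    }
}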
Information sharing is achieved through co-databases communicating with each
other. As mentioned above, a database may belong to more than one ontology. In
this case, its co-database will contain information about all ontologies it belongs
to. Two databases can belong to the same ontology and still have different co-
databases. This is true because these databases might belong to different ontologies
and be involved with different inter-ontology relationships. This is one reason it is
desirable that each database has one co-database attached to it instead of having
one single co-database for each ontology. Database autonomy and high information
availability are other reasons why it is not desirable to physically centralize the
co-database.
5. Language Support - WebTassili
SQL and extensions thereof work best when the database schemas are known to
the user. In that respect, it is not concerned with discovering metadata. Querying
with SQL is done in one single step to get the data. In contrast, access to Internet
databases will happen in two steps and iteratively. In addition, the nature of the
ontology architecture calls for a special handling of ontology and inter-ontology
relationships management. To our knowledge, no language has been developed
to support the access and management of ontologies in the context of Internet
databases.
In what follows, we introduce the main features of the WebTassili language. We
focus on the novel aspects designed specifically for user education and information
source location in distributed ontologies. The language was first introduced in [?].
Subsection 5.1 demonstrates how WebTassili is used to educate users about the
available information. Subsection 5.2 shows the power of WebTassili in providing
interaction and communication between ontologies.
WebTassili has been designed to address issues related to the use, design and
evolution of WebFINDIT. The world of users is partitioned into privileged users
and general users. Queries are resolved through an interactive process. Privileged
users, such as administrators, can issue both data definition and data manipulation
operations. The formation of ontologies and inter-ontology relationships, as
well as co-database schema evolution, is achieved through the WebTassili data definition
operations. An ontology is bootstrapped after formal negotiation between
privileged users takes place. The ontology/inter-ontology schema is stored and
maintained in individual co-databases. As co-databases contain replicated data,
schema updates are then propagated to the appropriate co-databases using a pre-defined
set of protocols. This data is then accessed by the component information
repository to provide users with information regarding the structure of the system
and the nature of information sources. General users may issue read-only queries.
The distributed updating and querying of co-databases is achieved via an interface
process which allows interactions with remote sites.
WebTassili consists of both data definition and manipulation constructs. As for
data definition features, it is used to define the different schemas and their intrinsic
relationships. It provides constructs to define classes of information types and their
corresponding relationships. In conventional object-oriented databases, the behavior
of a class is the same for all its instances. WebTassili also provides mechanisms
for defining constraints and firing triggers for evolution purposes. This feature is
used to evolve schemas as well as propagate changes to related schemas. Data definition
queries are only accessible to a selected number of users (administrators).
The formation of a schema is achieved through a negotiation process. In that re-
spect, WebTassili provides features for administrators to form and evolve schemas.
More specifically, WebTassili provides the following data definition operations:
- Define classes and objects (structure and behavior).
- Define operations for schema evolution.
- Define operations for negotiation for schema creation and instantiation.
WebTassili is also used to manipulate the schema states. Users use WebTassili
to query the structure and behavior of meta-information types. Users also use
this query language to query information about participating databases. The manipulation
in WebTassili is on both meta data and actual data. More specifically,
WebTassili provides the following manipulation operations:
- Search for an information type.
- Search for an information type while providing its structure.
- Search for an information type while providing its structure and/or information
about the host databases.
- Query remote databases.
5.1. User Education and Information Discovery
We consider an example from the distributed ontologies that represent the medical
information domain represented in Figure 1. Assume that physicians at the Royal
Brisbane Hospital are interested in gathering some information about the cancer
disease in Queensland (research, treatment, costs, insurance, etc.). For that pur-
pose, they will use WebFINDIT in order to gather needed information. They will
go through an interactive process using the WebTassili language.
Suppose now that one of the physicians at the Royal Brisbane Hospital queries
WebFINDIT for medical research related to the cancer disease. For this purpose, he
or she can start the investigation by submitting the following WebTassili
query:
Find Ontologies With Information Medical Research;
In order to resolve this query, WebFINDIT starts from the ontologies the Royal
Brisbane Hospital is a member of and checks if they hold the information. The
system finds that one of the local ontologies, the Research ontology, deals with
this type of information. Refinement (if needed) is performed until the specific
information type is found. As the user is interested in more specific information,
i.e., medical research on cancer, he or she submits a refinement query (find a more
specific information type) as follows:
Display SubClasses of Class Medical Research;
The ontology or class Medical Research shows that it contains the subclasses:
Cancer Research, Child Research and AIDS Research. The user can then decide
to query one of the displayed classes or continue the refinement process. As the
user is interested in the first subclass, she or he issues the following query to display
instances of this subclass:
Display Instance of Class Cancer Research;
The user is then faced with too many instances of the subclass Cancer Research
contained in many databases. Assume that she or he decides to query the Prince
Charles Hospital database, which is an instance of that subclass. Before that,
the user can become more knowledgeable about this database by using a WebTassili
construct that displays the documentation of this information. An example of
this query is:
Display Document of Instance Prince Charles Hospital
Of Class Cancer Research;
Assuming the user finds a database that contains the requested information, attributes
and functions are provided to directly access the database for an instance of
this information type. WebTassili provides users with primitives to manipulate data
drawn from diverse information sources. Users use local functions to directly access
the providing databases to get the actual data. In our example, if the user is interested
in querying the database Queensland Cancer Fund from the class Cancer
Research of the ontology Research, he or she uses the following WebTassili query
to display the interface exported by the database:
Display Access Information of Instance Queensland Cancer Fund
Of Class Cancer Research;
At this point, the user is completely aware of this database. She or he knows
the location of this database and how to access it to get actual data and some
other useful information. The database Queensland Cancer Fund is located at
"nicosia.icis.qut.edu.au" and exports several types. Below is as an example
of an exported type:
Type Funding {
attribute string CancerClassify.CommonName;
function real Amount(string CancerClassify.CommonName);
}
The function Amount() returns the total budget of a given research project. For
instance, if we are interested in the budget of the research project Lung Cancer,
we use the function Amount("Lung Cancer"). This function is translated to the
following SQL query (the native query language of the underlying database):
Select c.Amount
From CancerClassify a, ResearchGroup b, Funding c
Where a.Common_Name = 'Lung Cancer'
and a.Cancer_Id = b.Cancer_Id
and b.Group_Id = c.Group_Id
In the above scenario, the user was involved in a browsing session to discover
the databases of interest inside the ontology Research. If this ontology contains a
large number of databases, then browsing the description (documentation or access
information) of the underlying databases may be unrealistic. In such a case, the user
may be interested to query the data stored in these databases using the domain
attributes of the ontology. For example, to the effect of obtaining the name of
projects related to Cancer, the user can type the following WebTassili query:
Select r.Projects.name
From Ontologies Research r
Where r.Subject = "Cancer"
This query is expressed using the domain model of the ontology Research. The
system uses the mappings (as defined in Section 4) to translate this query into a
set of queries to the relevant databases that are members of the ontology.
Assume now that another physicist is interested in querying the system about
private medical insurance. The following query is submitted to the system.
Find Ontologies With Information Medical Insurance;
As usual, WebFINDIT first checks the ontologies the Royal Brisbane
Hospital is a member of. The two local ontologies Research and Medical fail to answer
the query. WebFINDIT finds that there is an inter-ontology relationship with
another ontology Insurance that appears to deal with the requested information
type. A point of entry is provided for this ontology. In this way, an inter-ontology
relationship contains the resources that are available to an ontology to answer requests
when they cannot be handled locally. To establish a connection with a
remote ontology, a user uses the following WebTassili query:
Connect To Ontology Insurance;
The user is now able to investigate this ontology looking for more relevant infor-
mation. After some refinements, a database is selected and queried as in the first
part of the example.
5.2. Ontologies Interaction and Negotiation
Ontologies and inter-ontology relationships provide the means for dynamically synchronizing
information source interactions in a decentralized manner. By joining an
ontology, databases implicitly agree to work together. Information sources retain
control, and join or leave ontologies/inter-ontology relationships based upon local
considerations. The forming, joining, and leaving of ontologies and inter-ontology
relationships is controlled by privileged users (database administrators).
In some instances, users may ask about information that is not in the local domain
of interest. If these requests are few, a mapping of a set of information
meta-types to an ontology via an inter-ontology relationship is enough to resolve
the query. If the number of requests remains high, the database administrator may,
for efficiency reasons, investigate the formation of an ontology with the "popular"
remote databases or join a pre-existing ontology. Alternatively, the database administrator
may initiate a negotiation with other database members to establish an
inter-ontology relationship with an existing ontology or database.
Assume that the Medibank database of the ontology Medical Insurance wants
to establish an inter-ontology relationship with the ontology Medical. To initiate
a negotiation with this ontology, the following WebTassili query is used:
Inquire at Ontology Medical;
To send the requested information (i.e., remote structural information), the representative
(administrator site) database of the servicing ontology Medical uses the following
query:
Send to Medibank
Object Medical.template;
The negotiation process ends (establishment of an inter-ontology relationship or
not) whenever the involved entities decide so. Other primitives exist to remove
methods and objects when an information source relinquishes access to local in-
formation. There are also more basic primitives which are used to establish an
ontology and propagate and validate changes. Each operation must be validated
by all participating administrators. The instantiation operation is an exception. In
this case the information resource described by an object is the one that decides
what the object state should be. If there is disagreement in the validation process,
the administrator who instigated the operation will choose the course of action to
be taken.
A joining information source must provide some information about the data it
would like to share, as well as information about itself. If the new information
repository is accepted as a member, the administrator of the ontology will then
decide how the ontology schema is to be changed. During this informal exchange,
many parameters need to be set. For instance, a threshold for the minimum and
maximum number of ontology members is negotiated and set. Likewise, a threshold
on the minimum and maximum number of inter-ontology relationships with
information sources and ontologies is also set.
Initially, an administrator is selected to create the root class of the ontology
schema. Once this is done, the root of the schema is sent to every participating
information repository for validation. Based on feedback from the group, the creator
will decide whether to change the object or not. This process will continue until
there is a consensus. Changes are only made at a single site until consensus is
achieved - at which time the change is made persistent and propagated to the
appropriate databases. If existing classes/methods are to be updated, responsibility
lies with the information repository that "owns" it.
An ontology is dismantled by deleting the corresponding subschema in every
participating co-database schema. In addition, all objects that belong to the classes
of that ontology are also deleted. The update of co-databases resulting from inter-ontology
relationship changes is practically the same as defined for ontologies. The
only difference is that changes in ontologies obey a stricter set of rules.
6. Implementation of WebFINDIT
This section presents the overall architecture which supports the WebFINDIT
framework. This architecture adopts a client-server approach to provide services
for interconnecting a large number of distributed, autonomous and heterogeneous
databases. It is based on CORBA and Java technologies. CORBA provides a robust
object infrastructure for implementing distributed applications including multidatabase
systems [?]. These applications are constructed seamlessly from their
components (e.g., legacy systems or newly developed systems) that are hosted at
different locations on the network and developed using different programming languages
and operating systems [?]. Interoperability across multi-vendor CORBA
ORBs is provided by using IIOP (Internet Inter-ORB Protocol). The use of IIOP
allows objects distributed over the Internet and connected to different ORBs to
communicate. Java allows user interfaces to be deployed dynamically over the Web.
Java applets can be downloaded onto the user machine and used to communicate
with WebFINDIT components (e.g., CORBA objects). In addition, JDBC can be
used to access SQL relational databases from Java applications. Java and CORBA
offer complementary functionalities to develop and deploy distributed applications.
It should be noted that there are other types of middleware technologies besides
Java/CORBA [?] [?]. Other technologies such as HTTP/CGI approach and
ActiveX/DCOM [?] [?] are also used for developing intranet- and Internet-based
applications. It is recognized that the HTTP/CGI approach may be adequate when
there is no need for sophisticated remote server capabilities and no data sharing
among databases is required. Otherwise, Java/CORBA approach offers several advantages
over HTTP/CGI [?]. We note also that the CORBA's IIOP and HTTP
can run on the same network, as both of them use the Internet as the backbone.
Also, the interoperability between CORBA and ActiveX/DCOM is already a reality
with the beta-version of Orbix COMet Desktop [?]. Thus, the access to Internet
databases interfaced using the CGI/HTML or ActiveX/DCOM will be possible at
a minimal cost.
6.1. System Architecture
The WebFINDIT components are grouped in four layers that interact among themselves
to query a large number of heterogeneous and distributed databases using
a Web-based interface (see Figure ??). The basic components of WebFINDIT are
the query layer, the communication layer, the metadata layer, and the data layer.
The query layer: provides users' access to WebFINDIT services. It has two com-
ponents: the browser and the query processor. The browser is the user's interface
to WebFINDIT. It uses the metadata stored in the co-databases to educate users
about the available information space, locate the information source servers, send
queries to remote databases and display their results. The browser is implemented
using Java applets. The query processor receives queries from the browser, coordinates
their execution and returns their results to the browser.
Figure 3. WebFINDIT Layers. (The four layers are the Query Layer with the Browser and Query Processor, the Communication Layer with CORBA ORBs connected via IIOP, the Metadata Layer with the Co-database Servers, and the Data Layer with the Database Servers and Information Source Interfaces.)
The query processor
interacts with the communication layer (next layer) which dispatches WebTassili
queries to the co-databases (metadata layer) and databases (data layer). The query
processor is written in Java.
The communication layer: manages the interaction between WebFINDIT
components. It mediates requests between the query processor and co-
database/database servers. The communication layer locates the set of servers
that can perform the tasks. This component is implemented using a network of
IIOP compliant CORBA ORBs, namely, VisiBroker for Java, OrbixWeb, and Or-
bix. By using CORBA, it is possible to encapsulate services (i.e., co-database and
database servers) as sets of distributed objects and associated operations. These
objects provide interfaces to access servers. The query processor communicates
with CORBA ORBs either directly when the ORB is a client/server Java ORB
(e.g., VisiBroker) or via another Java ORB (e.g., using OrbixWeb to communicate
with Orbix).
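To indicate how the query processor might invoke a co-database server through an ORB, the sketch below uses the standard Java CORBA API with the Dynamic Invocation Interface; the query operation name, its string-in/string-out signature and the stringified object reference are hypothetical stand-ins for the IDL interfaces actually generated in WebFINDIT.

import org.omg.CORBA.Any;
import org.omg.CORBA.ORB;
import org.omg.CORBA.Request;
import org.omg.CORBA.TCKind;

public class CoDatabaseClient {
    public static void main(String[] args) {
        // Initialize the ORB; VisiBroker, OrbixWeb and Orbix are all IIOP
        // compliant, so the object reference may come from any of them.
        ORB orb = ORB.init(args, null);

        // Stringified IOR of a co-database server object (placeholder).
        String ior = args.length > 0 ? args[0] : "IOR:...";
        org.omg.CORBA.Object coDatabase = orb.string_to_object(ior);

        // Invoke an assumed query() operation via the Dynamic Invocation
        // Interface; in WebFINDIT the call would go through generated stubs.
        Request request = coDatabase._request("query");
        Any arg = request.add_in_arg();
        arg.insert_string("Find Ontologies With Information Medical Research;");
        request.set_return_type(orb.get_primitive_tc(TCKind.tk_string));
        request.invoke();

        System.out.println(request.return_value().extract_string());
    }
}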
The metadata layer: consists of a set of co-database servers that store meta-data
about the associated databases (i.e., information type, location, ontologies,
inter-ontology relationships, and so on). Co-databases are designed to respond
to queries regarding available information space and locating sources of an information
type. All co-databases are implemented in ObjectStore (C++ interface).
WebTassili primitives are implemented using methods of ObjectStore schema of the
co-database.
The data layer: has two components: databases and Information Source Interfaces
(ISIs). The current version of WebFINDIT supports relational (mSQL, Oracle,
Sybase, DB2) and object oriented (ObjectStore) databases. An information source
interface provides access to a specific database server. The current implementation
of WebTassili provides: (1) translation of WebTassili queries to the native local
language (e.g., SQL), (2) translation of results from the format of the native system
to the format of WebTassili.
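As a rough sketch of this role for a relational source (the actual WebFINDIT wrappers are not shown in the paper), the following Java class dispatches an exported function to a registered native SQL statement over JDBC and returns the rows in a neutral format; the function registry and its single entry are our assumptions.

import java.sql.*;
import java.util.*;

// Simplified information source interface for a relational database:
// exported function name -> native SQL template with one parameter.
class RelationalISI {
    private final Connection con;
    private final Map<String, String> functionToSql = new HashMap<>();

    RelationalISI(Connection con) {
        this.con = con;
        functionToSql.put("Amount",
            "SELECT c.Amount FROM CancerClassify a, ResearchGroup b, Funding c " +
            "WHERE a.Common_Name = ? AND a.Cancer_Id = b.Cancer_Id AND b.Group_Id = c.Group_Id");
    }

    // Executes an exported function and returns the rows as lists of strings,
    // standing in for the translation back into WebTassili's result format.
    List<List<String>> call(String function, String argument) throws SQLException {
        String sql = functionToSql.get(function);
        if (sql == null) throw new IllegalArgumentException("Function not exported: " + function);
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, argument);
            try (ResultSet rs = ps.executeQuery()) {
                List<List<String>> rows = new ArrayList<>();
                int columns = rs.getMetaData().getColumnCount();
                while (rs.next()) {
                    List<String> row = new ArrayList<>();
                    for (int i = 1; i <= columns; i++) row.add(rs.getString(i));
                    rows.add(row);
                }
                return rows;
            }
        }
    }
}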
6.2. Hardware and Software Environment
The current implementation of our system is based on Solaris (v2.6), JDK (v1.1.5)
which includes JDBC (v2.0) (used to access the relational databases), and three CORBA
products that are IIOP compliant, namely Orbix (v2), OrbixWeb (v3), and VisiBroker
(v3.2) for Java (see Figure 4). These ORBs connect 26 databases (databases
and their co-databases). Each database is encapsulated in a CORBA server object
(a proxy). These databases are implemented using four different DBMSs (rela-
tional and object-oriented systems): Oracle, mSQL, DB2, and ObjectStore. The
user interface is implemented as Java applets that communicate with CORBA ob-
jects. ObjectStore databases are connected to Orbix. Relational databases (stored
in Oracle, mSQL, and DB2) are connected to a Java-interfaced CORBA. Oracle
databases are connected to VisiBroker, whereas mSQL and DB2 are connected to
OrbixWeb. CORBA server objects use:
- JDBC to communicate with relational databases. In this case, the CORBA
objects are implemented in Java (OrbixWeb or VisiBroker for Java server ob-
jects).
- C++ method invocation to communicate with C++ interfaced object-oriented
databases from C++ CORBA servers (both Orbix and ObjectStore support C++ interfaces).
6.3. Using a Healthcare Application
In order to illustrate the viability of this architecture and show how to
query a global information system using WebFINDIT, we have used a healthcare
application. Healthcare applications provide a very relevant context
where tools such as WebFINDIT can be used.
Figure 4. Detailed Implementation. (At the user interface level, the Java browser and query processor connect via IIOP to the Orbix, OrbixWeb and VisiBroker ORBs; at the source level, each participating database, e.g., Royal Brisbane, Prince Charles, Medicare, Medibank, MBF, ATO, CentreLink, QUT, RMIT, Qld Cancer Fund, Ambulance, RBH Workers Union and State Govt. Funding, is wrapped by a CORBA object with the granularity of a database and is stored in Oracle, mSQL, DB2, ObjectStore or Ontos, accessed via JDBC or C++ method invocation. Co-database DBMS connections to the Orbix ORB are not shown.)
The application supports
queries about healthcare related services and enables a large number of heterogeneous
and autonomous healthcare providers to communicate with each
other. In this application, 13 databases are used: State Government Funding,
RBH - Royal Brisbane Hospital, Centre Link, Medibank, MBF, RMIT Medical
Research, Queensland Cancer Fund, Australian Taxation Office, Medicare,
QUT Research, Ambulance, AMP, and Prince Charles Hospital (see Figure 4).
Typically, a user of this application starts by submitting a query about specific
area in the healthcare domain. Assume that the user submits the WebTassili query
"Display Ontologies With Information Medical Research". The system finds that
both the Medical and Research ontologies provide information about medical
research. Assume that the user wishes to display all members of the Research
ontology. This can be done by clicking on the Research ontology. The user can
view the result in the lower half of the left-hand side window of Figure 5.
Figure 5. Display Document on RBH Co-Database.
To know more on a particular database, the user can click on the corresponding
icon. For example, when the user clicks on the Royal Brisbane Hospital
database, the available formats of documentation are displayed (e.g., text, HTML)
in the right-hand side window of Figure 5. If the user decides to read the documentation
using HTML, he/she clicks on the HTML button. Figure 6 displays
the content of the HTML file containing the documentation of Royal Brisbane
Hospital database.
After locating and understanding the content of the Royal Brisbane Hospital
database, the user decides to query some actual data in this database. Querying
actual data can alternatively be done with embedded queries using native query
languages of the underlying databases. The user now wants to know about the
Medical Students who are doing internships in the hospital (Medical Students
is a type exported by the database Royal Brisbane Hospital). As the underlying
database supports SQL, the user can use the SQL statement "select * from
medical students" to get the required information.
Figure 6. RBH HTML document displayed.
Once the definition of the
query is accomplished, the query is submitted for execution by clicking on the
Fetch button. Figure 7 shows the result of the query. Note that querying actual
data can be done with WebTassili queries. In this case, the query is decomposed
(if necessary) and mapped to queries in the underlying databases.
Figure 7. Query Result on RBH Database.
7. Conclusion
We presented an extensible, dynamic and distributed ontological architecture to
support information discovery in Internet databases. The fundamental constructs
(ontologies, inter-ontology relationships, and co-databases) provide a flexible infrastructure
for both users and Internet databases to discover and share information in
a seamless fashion. A working prototype has been developed. This prototype was
implemented using popular standards in distributed object computing (CORBA)
and the Web (Java). Several commercial and research databases have been
used in our testbed. We are currently in the process of assessing the performance
of the prototype.
Acknowledgments
The third author would like to acknowledge the support of the Australian Research
Council (ARC) through a Large ARC Grant number 95-7-191650010.
--R
Retrieving and Integrating Data from Multiple Information Sources.
Query Processing in the SIMS Information Medi- ator
Using Bridging Boundaries: CORBA in perspective.
A comparative analysis of methodologies for database schema integration.
Helaland
Data sharing on the web.
An Overview of Mutlidatabase Systems: Past and Present
The world-wide-web
Large multidatabases: Beyond federation and global schema integra- tion
Large multidatabases: Issues and directions.
Using Java and CORBA for Implementing Internet Databases.
Using Java Applets and CORBA for Multi-User Distributed Ap- plications
Information retrieval on the world wide web.
Agents on the web.
Classifying schematic and data heterogeneity in multi-base systems
A query system for the world wide web.
Querying heterogeneous information sources using source descriptions.
Data model and query evaluation in global information systems.
Dynamic Query Processing in DIOM.
OBSERVER: An Approach for Query Processing in Global Information Systems based on Interoperation across Pre-existing On- tologies
Domain Specific Ontologies for Semantic Information Brokering on the Global Information Infrastructure.
Querying the world wide web.
Relationship merging in schema integration.
Client/Server Programming with JAVA and CORBA.
Dynamic Query Optimization in Multidatabases.
Notable computer networks.
Data structures for efficient broker implementation.
Using Carnot For Enterprise Information Integration.
--TR
--CTR
Chara Skouteli , George Samaras , Evaggelia Pitoura, Concept-based discovery of mobile services, Proceedings of the 6th international conference on Mobile data management, May 09-13, 2005, Ayia Napa, Cyprus
Ahmad Kayed , Robert M. Colomb, Using BWW model to evaluate building ontologies in CGs formalism, Information Systems, v.30 n.5, p.379-398, July 2005
Mourad Ouzzani , Athman Bouguettaya, Query Processing and Optimization on the Web, Distributed and Parallel Databases, v.15 n.3, p.187-218, May 2004 | internet databases;distributed ontologies;information discovery |
348872 | UML-Based integration testing. | Increasing numbers of software developers are using the Unified Modeling Language (UML) and associated visual modeling tools as a basis for the design and implementation of their distributed, component-based applications. At the same time, it is necessary to test these components, especially during unit and integration testing.At Siemens Corporate Research, we have addressed the issue of testing components by integrating test generation and test execution technology with commercial UML modeling tools such as Rational Rose; the goal being a design-based testing environment. In order to generate test cases automatically, developers first define the dynamic behavior of their components via UML Statecharts, specify the interactions amongst them and finally annotate them with test requirements. Test cases are then derived from these annotated Statecharts using our test generation engine and executed with the help of our test execution tool. The latter tool was developed specifically for interfacing to components based on COM/DCOM and CORBA middleware.In this paper, we present our approach to modeling components and their interactions, describe how test cases are derived from these component models and then executed to verify their conformant behavior. We outline the implementation strategy of our TnT environment and use it to evaluate our approach by means of a simple example. | Figure
1: Alternating Bit Protocol Example
The example in Figure 1 represents an alternating bit
communication protocol2 in which there are four separate
components Timer, Transmitter, ComCh (Communication
Channel) and Receiver and several internal as well as external
interfaces and stimuli.
The protocol is a unidirectional, reliable communication protocol.
A user invokes a Transmitter component to send data messages
over a communication channel and to a Receiver component,
which then passes it on to another user. The communication
channel can lose data messages as well as acknowledgements. The
reliable data connection is implemented by observing possible
timeout conditions, repeatedly sending messages, if necessary,
and ensuring the correct order of the messages.
2.1 UML Statecharts
The Unified Modeling Language (UML) is a general-purpose
visual modeling language that is used to specify, visualize,
construct and document the artifacts of a software system.
In this paper, we focus on the dynamic views of UML, in
particular, Statechart Diagrams. A Statechart can be used to
describe the dynamic behavior of a component or should we say
object over time by modeling its lifecycle. The key elements
described in a Statechart are states, transitions, events, and
actions.
States and transitions define all possible states and changes of
state an object can achieve during its lifetime. State changes occur
as reactions to events received from the object's interfaces.
Actions correspond to internal or external method calls.
1 The nomenclature in this paper refers to UML, Revision 1.3.
2 The name Alternating Bit Protocol stems from the message sequence
numbering technique used to recognize missing or redundant messages
and to keep up the correct order.
Figure
2 illustrates the Statechart for the Transmitter object
shown in Figure 1. It comprises six states with a start and an end
state. The transitions are labeled with call event descriptions
corresponding to external stimuli being received from the tuser
interface and internal stimuli being sent to the Timer component
via the timing interface and received from the ComCh component
via the txport interface. These internal/external interfaces and
components are shown in Figure 1. Moreover, the nomenclature
used for labeling the transitions is described in the next section
and relates to the way in which component interactions are
modeled.
Figure 2: Statechart Diagram for the Transmitter Object. (Transitions carry labels such as _tuser?msg, ^_txport!data0:_timing!start, _timing?timeout, ^_txport!data1:_timing!start and _txport?ack ^_timing!cancel:_tuser!ack.)
2.2 Communicating Statecharts
In the following section, we describe how a developer would need
to model the communication between multiple Statecharts, when
using a commercial UML-based modeling tool. At present, UML
does not provide an adequate mechanism for describing the
communication between two components, so we adopted concepts
from CSP (Communicating Sequential Processes) [6] to enhance
its existing notation.
2.2.1 Communication Semantics
In our approach, we wanted to select communication semantics
that most closely relate to the way in which COM/DCOM and
CORBA components interact in current systems. While such
components allow both synchronous and asynchronous
communications, we focus on a synchronous mechanism for the
purposes of this paper.
In addition, there are two types of synchronous communication
mechanisms. The first, the shared (or global) event model, may
broadcast a single event to multiple components, all of which are
waiting to receive and act upon it in unison. The second model, a
point-to-point, blocking communication mechanism, can send a
single event to just one other component and it is only these two
components that are then synchronized. The originator of the
event halts its execution (blocks) until the receiver obtains the
event. It is this point-to-point model that we adopted, because it
most closely resembles the communication semantics of
COM/DCOM and CORBA.
2.2.2 Transition Labeling
In order to show explicit component connections and to associate
operations on the interfaces with events within the respective
Statecharts, we defined a transition labeling convention based on
the notation used in CSP for communication operations3. A
unique name must be assigned by the developer to the connection
between two communicating Statecharts4. This name is used as a
prefix for trigger (incoming) and send (outgoing) events. A
transition label in a Statechart would be defined as follows:
_timing?timeout ^_txport!data0
This transition label can be interpreted as receiving a trigger event
timeout from connection timing followed by a send event
data0 being sent to connection txport. Trigger (also known as
receive) events are identified by a separating question mark,
whereas send events are identified by a leading caret (an existing
UML notation) and a separating exclamation mark. In Figure 3
below, two dark arrows indicate how the timing interface
between the two components is used by the send and receive
events. Connections are considered bi-directional, although it is
possible to use different connection names for each direction, if
the direction needs to be emphasized.
Transitions can contain multiple send and receive events. Multiple
receive events within a transition label can be specified by
separating them with a plus sign. Multiple send events with
different event names can be specified by separating them by a
colon.
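A small sketch (in Java; the class and method names are ours) of how a transition label written in this convention could be split into its receive and send events, using the separators just described (?, ^, !, + and :).

import java.util.ArrayList;
import java.util.List;

public class TransitionLabel {

    // A single receive or send event on a named connection.
    record Event(String connection, String name, boolean send) {}

    // Parses labels such as "_timing?timeout ^_txport!data0:_timing!start"
    // into receive events (before '^') and send events (after '^').
    static List<Event> parse(String label) {
        List<Event> events = new ArrayList<>();
        String receivePart = label;
        String sendPart = null;

        int caret = label.indexOf('^');
        if (caret >= 0) {
            receivePart = label.substring(0, caret).trim();
            sendPart = label.substring(caret + 1).trim();
        }
        if (!receivePart.isEmpty()) {
            for (String r : receivePart.split("\\+")) {        // multiple receives: '+'
                String[] parts = r.trim().split("\\?");
                events.add(new Event(parts[0], parts[1], false));
            }
        }
        if (sendPart != null && !sendPart.isEmpty()) {
            for (String s : sendPart.split(":")) {             // multiple sends: ':'
                String token = s.trim();
                if (token.startsWith("^")) token = token.substring(1);
                String[] parts = token.split("!");
                events.add(new Event(parts[0], parts[1], true));
            }
        }
        return events;
    }

    public static void main(String[] args) {
        System.out.println(parse("_timing?timeout ^_txport!data0:_timing!start"));
    }
}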
2.2.3 Example
Figure
3 shows two communicating Statecharts for the
Transmitter and Timer components. The labels on the transitions
in each Statechart refer to events occurring via the internal timing
interface, the interface txport with the ComCh component and two
external interfaces, timer and tuser.
Figure 3: Communicating Transmitter and Timer Components
The Transmitter component starts execution in state Idle0 and
waits for user input. If a message arrives from connection tuser,
the state changes to PrepareSend0. Now, the message is sent
to the communication channel. At the same time, the Timer
component receives a start event. The component is now in the
3 In CSP, operations are written as channel1!event1 which means
that event1 is sent via channel1. A machine input operation is
written as channel2?event1 where channel2 receives an event1.
4 This is currently a limitation of our tool implementation.
state MessageSent0 and waits until either the Timer
component sends a timeout event or the ComCh component
sends a message acknowledgement ack. In case of a timeout, the
message is sent again and the timer is also started again. If an ack
is received, an event is sent to the Timer component to cancel
the timer and the user gets an acknowledgement for successful
delivery of the message. Now, the same steps may be repeated,
but with a different message sequence number, which is expressed
by the event data1 instead of data0.
In addition to modeling the respective Statecharts and defining the
interactions between them, developers can specify test
requirements, that is, directives for test generation, which
influence the size and complexity of the resulting test suite.
However, this aspect is not shown in this example.
3. Establishing a Global Behavioral Model
In the following section, we describe the steps taken in
constructing a global behavioral model, which is internal to our
tool, from multiple Statecharts that have been defined by a
developer using a commercial UML-based modeling tool. In this
global behavioral model, the significant properties, that is,
behavior, of the individual state machines are preserved.
3.1 Definition of Subsystems
A prime concern with respect to the construction of such a global
model is scalability. Apart from utilizing efficient algorithms to
compute such a global model, we defined a mechanism whereby
developers can group components into subsystems and thus help
to reduce the size of a given model. The benefit of such a
subsystem definition is that it also reflects a commonly used
integration testing strategy described in Section 4.
Our approach allows developers to specify a subsystem of
components to be tested and the interfaces to be tested. If no
subsystem definition has been specified by a developer, then all
modeled components and interfaces are considered as part of the
global model.
3.2 Composing Statecharts
3.2.1 Finite State Machines
We consider Statecharts as Mealy finite state machines; they react
upon input in form of receive events and produce output in form
of send events. Such state machines define a directed graph with
nodes (representing the states) and edges (representing the
transitions). They have one initial state and possibly several final
states. The state transitions are described by a function:
communicating finite state machine used for
component specification is defined as A = (S, M, T, ?, s0, F),
where
S is a set of states, unique to the state machine
are states marked as intermediate states
T is an alphabet of valid transition annotations, consisting of
transition type, connection name and event name. Transition
is a function describing the transitions between
states
s0 ? S is the initial state
F ? S is a set of final states
Initial and final states are regular states. The initial state gives a
starting point for a behavior description. Final states express
possible end points for the execution of a component.
The transition annotations T contain a transition type as well as a
connection name and an event name. Transition types can be
INTernal, SEND, RECEIVE and COMMunication. Transitions of
type SEND and RECEIVE are external events sent to or received
from an external interface to the component's state machine.
SEND and RECEIVE transitions define the external behavior of a
component and are relevant for the external behavior that can be
observed. An INTernal transition is equivalent to an ε-transition
(empty transition) of a finite state machine [7]. It is not triggered
by any external event and has no observable behavior. It
represents arbitrary internal action. COMMunication transitions
are special types of internal transitions representing interaction
between two state machines. Such behavior is not externally
observable. When composing state machines, matching pairs of
SEND and RECEIVE transitions with equal connection and event
names are merged to form COMMunication transitions. For
example, the transitions highlighted by dark arrows in Figure 3
would be such candidates.
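The definition can be captured directly in a small data structure; the following Java sketch (all names are ours) records the transition type, connection name and event name of each transition and keeps the sets S, M and F together with the initial state s0.

import java.util.*;

// Data structure mirroring A = (S, M, T, delta, s0, F).
class StateMachine {
    enum TransitionType { INT, SEND, RECEIVE, COMM }

    // A transition annotation (type, connection name, event name)
    // together with its source and target states.
    record Transition(String from, TransitionType type,
                      String connection, String event, String to) {}

    final Set<String> states = new HashSet<>();               // S
    final Set<String> intermediateStates = new HashSet<>();   // M
    final List<Transition> transitions = new ArrayList<>();   // delta, kept as a relation
    String initialState;                                      // s0
    final Set<String> finalStates = new HashSet<>();          // F

    void addTransition(String from, TransitionType type,
                       String connection, String event, String to) {
        states.add(from);
        states.add(to);
        transitions.add(new Transition(from, type, connection, event, to));
    }
}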
The definition of a state machine allows transitions that contain
single actions. Every action expressed by a transition annotation is
interpreted as an atomic action. Component interaction can occur
after each action. If several actions are grouped together without
the possibility of interruption, the states between the transitions
can be marked as intermediate states. Intermediate states (M ⊆ S)
are introduced to logically group substructures of states and
transitions. The semantics of intermediate states provide a
behavioral description mechanism similar to microsteps. Atomic
actions are separated into multiple consecutive steps, the
microsteps, which are always executed in one run. These
microsteps are the outgoing transitions of intermediate states. This
technique is used in our approach as part of the process of
converting the UML Statecharts into an internal representation.
The result is a set of normalized state machines.
Figure 4: Normalized Transmitter Component. (The normalized state machine consists of the states Idle, PrepareSend, TimerOn, MessageSent and GotAck, connected by single-event transitions such as _tuser?msg, ^_txport!data0, ^_timing!start, _timing?timeout, _txport?ack and ^_timing!cancel.)
Figure 4 shows such a state machine for a simplified version of
the Transmitter object. Two additional intermediate states
TimerOn and GotAck have been inserted to separate the
multiple events _txport!data0 ^_timing!start and
_txport?ack ^_timing!cancel between the PrepareSend,
MessageSent and Idle states shown in Figure 2.
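For illustration only, such normalized machines can be held in a small data structure like the following C++ sketch; the type and member names are our own assumptions and not the actual internal representation used by the tool.

#include <string>
#include <vector>

enum class TransType { Internal, Send, Receive, Comm };

struct Transition {
    TransType   type;         // transition type annotation
    std::string connection;   // connection name, e.g. "_timing"
    std::string event;        // event name, e.g. "start"
    int         target;       // index of the target state
};

struct State {
    std::string             name;
    bool                    intermediate = false;   // member of M
    bool                    final_       = false;   // member of F
    std::vector<Transition> out;                    // outgoing transitions
};

struct StateMachine {
    std::vector<State> states;   // S
    int                initial;  // index of the initial state s0
};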
3.2.2 Composed State Machines
A composed state machine can be considered as the product of
multiple state machines. It is itself a state machine with the
dynamic behavior of its constituents. As such, it would react and
generate output as a result of being stimulated by events specified
for the respective state machines. Based on the above definition of
a finite state machine, the structure of a composed state machine
can be defined as follows:
Let A = (S1, M1, T1, δ1, s01, sf1) and B = (S2, M2, T2,
δ2, s02, sf2) be two state machines and S1 ∩ S2 = ∅. The composed
state machine C = A#B has the following formal definition:
S' = S1 × S2
T' = T1 ∪ T2 ∪ {COMMunication transitions over the internal connections
between A and B} − {SEND and RECEIVE transitions with
matching events from T1 and T2}
δ' is generated from δ1 and δ2 with the state machine
composition schema
s0' = (s01, s02) and sf' ⊆ S1 × S2
For example, a global state for A#B is defined as a two-tuple (s1,
s2), where s1 is a state of A and s2 is a state of B. These two states
are referred to as part states. Initial state and final states of A#B
are element and subset of this product. The possible transition
annotations are composed from the union of T1 and T2 and new
COMMunication transitions that result from the matching
transitions. Excluded are the transitions that describe possible
matches. Either COMMunication transitions are created from
them or they are omitted, because no communication is possible.
3.2.3 Composition Method
A basic approach for composing two state machines is to generate
the product state machine by applying generative multiplication
rules for states and transitions. This leads to a large overhead,
because many unreachable states are produced that have to be
removed in later steps. The resulting product uses more resources
than necessary as well as more computation time for generation
and minimization.
Instead, our approach incorporates an incremental composition
and reduction algorithm that uses reachability computations. A
global behavioral model is created stepwise. Beginning with the
global initial state, all reachable states and all transitions in
between are computed. Every state of the composed state machine
is evaluated only once. Due to the reachability algorithm, the
intermediate data structures are at no time larger than the result of
one composition step. States and transitions within the composed
state machines that are redundant in terms of external observation
are removed. By applying the reduction algorithm using heuristic
rules, it is possible to detect redundancies and to reduce the size
of a composed state machine before the next composition step.
Defined subsystems are processed independently and sequentially. For
each subsystem, the composition algorithm is applied. The inputs
for the composition algorithm are data structures representing the
normalized communicating state machines of the specified
components within the current subsystem. The connection
structure between these components is part of these data
structures. The order of the composition steps determines the size
and complexity of the result for the next step and therefore the
effectiveness of the whole algorithm. The worst case for
intermediate composition products is a composition of two
components with no interaction. The maximum possible number
of states and transitions created in this case resembles the product
of two state machines.
It is therefore important to select the most suitable component for
the next composition step. The minimal requirement for the
selected component is to have a common interface with the other
component. This means that at least one connection exists to the
existing previously calculated composed state machine.
A better strategy with respect to minimizing the size of
intermediate results is to select the state machine with the highest
relative number of communication relationships or interaction
points. A suitable selection norm is the ratio of possible
communication transitions to all transitions in a state machine.
The component with the highest ratio exposes the most extensive
interface to the existing state machine and should be selected.
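A minimal sketch of this selection heuristic, assuming the illustrative StateMachine and Transition types sketched earlier and a caller-supplied predicate that decides whether a transition could still match the partially composed machine:

#include <functional>

double commRatio(const StateMachine& sm,
                 const std::function<bool(const Transition&)>& isPossibleComm) {
    int total = 0, comm = 0;
    for (const State& s : sm.states)
        for (const Transition& t : s.out) {
            ++total;
            if (isPossibleComm(t)) ++comm;   // could become a COMM transition
        }
    return total == 0 ? 0.0 : static_cast<double>(comm) / total;
}
// The next component to compose is the candidate with the maximum ratio
// that shares at least one connection with the machine composed so far.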
Table 1: Computing Successor States and Transitions
This incremental composition and reduction method also specifies
a composition schema. For every combination of outgoing
transitions of the part states, a decision table (shown in Table 1) is
used to compute the new transitions for the composed state
machine.
If a new transition leads to a global state that is not part of the
existing structure of the composed state machine, it is added to an
unmarked list. The transition is added to the global model.
Exceptions exist, when part states are marked as intermediate.
Every reachable global state is processed and every possible new
global transition is inserted into the composed state machine. The
algorithm terminates when no unmarked states remain. This
means that every reachable global state was inserted into the
model and later processed. The schema we used was based on a
composition schema developed by Sabnani et al. [16]. We
enhanced it to include extensions for connections, communication
transitions, and intermediate states.
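The overall loop can be sketched roughly as follows; the decision-table logic of Table 1 and the reduction step are abstracted behind a caller-supplied function, and all names are illustrative rather than taken from the implementation.

#include <functional>
#include <map>
#include <queue>
#include <utility>
#include <vector>

using PartState   = int;
using GlobalState = std::pair<PartState, PartState>;

struct GlobalTransition {
    GlobalState from, to;
    // transition annotation omitted in this sketch
};

struct ComposedMachine {
    std::map<GlobalState, int>    states;       // discovered global states
    std::vector<GlobalTransition> transitions;
};

// decisionTable stands in for Table 1: given a global state it returns the
// new global transitions derived from every combination of outgoing
// transitions of the two part states.
ComposedMachine compose(
    GlobalState initial,
    const std::function<std::vector<GlobalTransition>(GlobalState)>& decisionTable)
{
    ComposedMachine c;
    std::queue<GlobalState> unmarked;            // list of unmarked states
    c.states[initial] = 0;
    unmarked.push(initial);
    while (!unmarked.empty()) {                  // terminates when no unmarked states remain
        GlobalState g = unmarked.front();
        unmarked.pop();
        for (const GlobalTransition& t : decisionTable(g)) {
            if (c.states.find(t.to) == c.states.end()) {
                c.states[t.to] = static_cast<int>(c.states.size());
                unmarked.push(t.to);             // newly reached global state
            }
            c.transitions.push_back(t);          // add transition to the model
        }
        // a reduction step over c could be applied here after each iteration
    }
    return c;
}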
We are assuming throughout this composition process that the
individual as well as composed state machines have deterministic
behavior. We also ensure that the execution order of all
component actions is sequential. This is important as we then
wish to use the global model to create test cases that are
dependent on a certain flow of events and actions; we want to
generate linear and sequential test cases for a given subsystem.
3.2.4 Complexity Analysis
As we are composing the product of two state machines, the worst
case complexity would be O(n^2), assuming n is the number of
states in a state machine. However, our approach often does much
better than this due to the application of the heuristic reduction
rules that can help to minimize the overall size of the global
model during composition and maintain its observational
equivalence [11].
Typically, the reduction algorithm being used has linear
complexity with respect to the number of states [16]. For example,
it was reported that the algorithm was applied to a complex
communication protocol (ISDN Q.931), where it was shown that
instead of generating over 60,000 intermediate states during
composition, the reduction algorithm kept the size of the model to
approximately 1,000 intermediate states. Similar results were
reported during use of the algorithm with other systems. The
algorithm typically resulted in a reduction in the number of
intermediate states by one to two orders of magnitude.
3.2.5 Example
Taking the normalized state machine of the Transmitter
component in Figure 4 and the Timer component in Figure 3, the
composition algorithm needs to perform only one iteration to
generate the global behavioral model in Figure 5.
A global initial state Idle_Stopped is created using the initial
states of the two state machines. This state is added to the list of
unmarked states. The composition schema is now applied for
every state within this list to generate new global states and
transitions until the list is empty. The reachability algorithm
creates a global state machine comprising six states and seven
transitions. Three COMMunication transitions are generated,
which are identified by a hash mark in the transition label
showing the communication connection and event.
The example shows the application of the decision table. In the
first global state, Idle_Stopped, part state Idle has an
outgoing receive transition to PrepareSend using an external
connection. Part state Stopped has also an outgoing receive
transition to Running with a connection to the other component.
According to Decision Rule #4 of the table, the transition with
the external connection is inserted into the composed state
machine and the other transition is ignored. The new global
receive transition leads to the global state
PrepareSend_Stopped.
For the next step, both part states include transitions, which use
internal connections. They communicate via the same connection
timing and the same event - these are matching transitions.
According to Decision Rule #1 of the table, a communication
transition is included in the composed state machine that leads to
the global state TimerOn_Running. These rules are applied
repeatedly until all global states are covered.
Figure 5: Global Behavioral Model for the TransmitterTimer Subsystem
4. Test Generation and Execution
In the preceding sections, we discussed our approach to modeling
individual or collections of components using UML Statecharts,
and establishing a global behavioral model of the composed
Statecharts. In this section, we show how this model can be used
as the basis for automatic test generation and execution during
unit and integration testing.
4.1 Unit and Integration Testing
After designing and coding each software component, developers
perform unit testing to ensure that each component correctly
implements its design and is ready to be integrated into a system
of components. This type of testing is performed in isolation from
other components and relies heavily on the design and
implementation of test drivers and test stubs. New test drivers and
stubs have to be developed to validate each of the components in
the system.
After unit testing is concluded, the individual components are
collated, integrated into the system, and validated again using yet
another set of test drivers. At each level of testing, a new set of
custom test drivers is required to stimulate the components. While
each component may have behaved correctly during unit testing, it
may not do so when interacting with other components.
Therefore, the objective of integration testing is to ensure that all
components interact and interface correctly with each other, that
is, have no interface mismatches. This is commonly referred to as
bottom-up integration testing.
Our approach aims at minimizing the testing costs, time and effort
associated with initially developing customized test drivers, test
stubs, and test cases as well as repeatedly adapting and rerunning
them for regression testing purposes at each level of integration.
4.2 Test Generation
Before proceeding with a description of the test generation and
execution steps, we would like to emphasize the following:
- Our approach generates a set of conformance tests. These test
cases ensure the compliance of the design specification with
the resulting implementation.
- It is assumed that the implementation behaves in a
deterministic and externally controllable way. Otherwise, the
generated test cases may produce incorrect results.
4.2.1 Category-Partition Method
For test generation, we use the Test Development Environment
(TDE), a product developed at Siemens Corporate Research [1].
TDE processes a test design written in the Test Specification
Language (TSL). This language is based on the category-partition
method, which identifies behavioral equivalence classes within
the structure of a system under test.
A category or partition is defined by specifying all possible data
choices that it can represent. Such choices can be either data
values or references to other categories or partitions, or a
combination of both. The data values may be string literals
representing fragments of test scripts, code, or case definitions,
which later can form the contents of a test case.
A TSL test design is now created from the global behavioral
model by mapping its states and transitions to TSL categories or
partitions, and choices. States are the equivalence classes and are
therefore represented by partitions. Each transition from the state
is represented as a choice of the category/partition. Only partitions
are used for equivalence class definitions, because paths through
the state machine are not limited to certain outgoing transitions
for a state; this would be the case when using a category. Each
transition defines a choice for the current state, combining a test
data string (the send and receive event annotations) and a
reference to the next state. A final state defines a choice with an
empty test data string.
4.2.2 Generation Procedure
A recursive, directed graph is built by TDE that has a root
category/partition and contains all the different paths of choices to
plain data choices. This graph may contain cycles depending on
the choice definitions and is equivalent to the graph of the global
state machine. A test frame, that is, test case is one instance of the
initial data category or partition, that is, one possible path from
the root to a leaf of the (potentially infinite) reachability tree for
the graph.
An instantiation of a category or partition is a random selection of
a choice from the possible set of choices defined for that
category/partition. In the case of a category, the same choice is
selected for every instantiation of a test frame. This restricts the
branching possibilities of the graph. With a partition, however, a
new choice is selected at random with every new instantiation.
This allows full branching within the graph and significantly
influences test data generation. The contents of a test case consist
of all data values associated with the edges along a path in the
graph.
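As a rough, simplified sketch (ignoring categories, coverage requirements, and constraints, which bound the path length in practice), one test frame could be produced by a random walk of this kind; the type names are our own.

#include <cstdlib>
#include <string>
#include <vector>

struct Choice {
    std::string data;   // test data string (event annotation); empty for final states
    int         next;   // index of the referenced partition, or -1 if none
};

struct Partition { std::vector<Choice> choices; };

// Walks the graph from the root partition, picking a random choice at every
// step and accumulating the associated test data along the path.
std::string generateTestFrame(const std::vector<Partition>& parts, int root) {
    std::string frame;
    int current = root;
    while (current != -1 && !parts[current].choices.empty()) {
        const Partition& p = parts[current];
        const Choice& ch = p.choices[std::rand() % p.choices.size()];
        frame += ch.data;          // contents of the test case
        current = ch.next;         // follow the reference to the next state
    }
    return frame;
}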
4.2.3 Coverage Requirements
The TSL language provides two types of coverage requirements:
- Generative requirements control which test cases are
instantiated. If no generative test requirements are defined,
no test frames are created. For example, coverage statements
can be defined for categories, partitions and choices.
- Constraining requirements cause TDE to omit certain
generated test cases. For example, there are maximum
coverage definitions, rule-based constraints for
category/partition instantiation combinations, instantiation
preconditions and instantiation depth limitations. Such test
requirements can be defined globally within a TSL test
design or attached to individual categories, partitions or
choices.
TDE creates test cases in order to satisfy all specified coverage
requirements. Input sequences for the subsystem are equivalent to
paths within the global behavioral model that represents the
subsystem, starting with the initial states. Receive transitions with
events from external connections stimulate the subsystem. Send
transitions with events to external connections define the resulting
output that can be observed by the test execution tool. All
communication is performed through events. For unit test
purposes, the default coverage criterion is that all transitions
within a Statechart must be traversed at least once. For integration
testing, only those transitions that involve component interactions
are exercised. If a subsystem of components is defined as part of
the modeling process, coverage requirements are formulated to
ensure that those interfaces, that is, transitions are tested.
4.2.4 Example
Figure 6 presents the test case that is derived from the global
behavioral model shown in Figure 5. This one test case is
sufficient to exercise the interfaces, txport, tuser and
timer defined for the components. Each line of this generic test
case format represents either an input event or an expected output
event. We chose a test case format where the stimulating events
and expected responses use the strings SEND and RECEIVE
respectively, followed by the connection and event names.
Currently, the events have no parameters, but that will be
remedied in future work.
Figure 6: Test Case for TransmitterTimer Subsystem
The Sequence Diagrams for the execution of this test case are
shown in Figure 7. Note that the external connection timer has a
possible event extTimeout. This event allows a timeout to be
triggered without having a real hardware timer available.
Figure 7: Sequence Diagrams for the Example - (a) Successful Transmission, (b) Timed Out Transmission
4.3 Test Execution
In this section, we show how the generated test cases can be
mapped to the COM/CORBA programming model. We describe
how an executable test driver (including stubs) is generated out of
such test cases.
As seen earlier, a test case consists of a sequence of SEND and
RECEIVE events, for example a SEND of event msg on connection
_tuser followed by an expected RECEIVE of event data0 on
connection _txport (see Figure 6).
The intent of the SEND event is to stimulate the object under test.
To do so, the connection _tuser is mapped to an object
reference, which is stored in a variable _tuser defined in the
test case5. The event msg is mapped to a method call on the object
referenced by _tuser.
The RECEIVE event represents a response from the object under
test, which is received by an appropriate sink object. To do so, the
connection _txport is mapped to an object reference that is
stored in a variable _txport. The event data0 is mapped to a
callback, such that the object under test fires the event by calling
back to a sink object identified by the variable _txport. The
sink object thus acts as a stub for an object that would implement
the txport interface on the next higher layer of software.
5 In the current implementation of TnT, the initialization code that
instantiates the Transmitter object and stores the object reference in the
variable _tuser has to be written manually.
Typically, reactive software components expose an interface that
allows interested parties to subscribe for event notification6.
Figure 8: Interaction with the Object under Test
The interactions between the test execution environment and the
Transmitter object are shown in Figure 8. The TestDriver calls the
method msg() on the Transmitter object referenced through the
variable _tuser. The Transmitter object notifies the sink object
via its outgoing _txport interface.
Test case execution involving RECEIVE events not only requires
a comparison of the out-parameters and return values with
expected values, but also the evaluation of event patterns. These
event patterns specify which events are expected in response to
particular stimuli, and by when the responses are expected. To
accomplish this, the sink objects associated with the test cases
need to be monitored to see if the required sink methods are
invoked.
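The following hedged C++ sketch illustrates this SEND/RECEIVE mapping for the Transmitter example. The interface and method names (ITransmitter::msg, TxPortSink::data0) are assumptions for illustration only; they stand in for the COM interfaces and for the code emitted by the test case compiler and the sink generator.

#include <iostream>
#include <string>

struct ITxPort {                       // outgoing interface (connection _txport)
    virtual void data0() = 0;
    virtual ~ITxPort() = default;
};

struct ITransmitter {                  // incoming interface (connection _tuser)
    virtual void msg() = 0;
    virtual ~ITransmitter() = default;
};

// Sink object standing in for the next higher layer on the txport interface.
struct TxPortSink : ITxPort {
    std::string lastEvent;
    void data0() override { lastEvent = "data0"; }   // records the callback
};

// SEND _tuser msg: stimulate the object under test;
// RECEIVE _txport data0: check that the sink observed the expected callback.
bool runTestCase(ITransmitter& _tuser, TxPortSink& _txport) {
    _tuser.msg();
    bool ok = (_txport.lastEvent == "data0");
    std::cout << (ok ? "PASS" : "FAIL") << std::endl;
    return ok;
}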
5. Implementation of TnT
The TnT environment was developed at Siemens Corporate
Research in order to realize the work described above. This
design-based testing environment consists of two tools, our
existing test generation tool, TDE with extensions for UML
(TDE/UML) and TECS, the test execution tool. Thus, the name -
TnT. Our new environment interfaces directly to the UML
modeling tools, Rose2000 and Rose Real-Time 6.0, by Rational
Software. Figure 9 shows how test case generation can be initiated
from within Rational Rose.
In this section, we briefly describe our implementation strategy.
5.1 TDE/UML
Figure 10 depicts the class diagram for TDE/UML. TDE/UML
accesses both Rose applications through Microsoft COM
interfaces. In fact, our application implements a COM server, that
is, a COM component waiting for events. We implemented
TDE/UML in Java using Microsoft's Visual J++ as it can generate
Java classes for a given COM interface. Each class and interface
of the Rose object model can thus be represented as a Java class;
data types are converted and are consistent. The Rose applications
export administrative objects as well as model objects, which
represent the underlying Rose repository.
Rose also provides an extensibility interface (REI) to integrate
external tools known as Add-Ins. A new tool, such as TDE/UML
can be installed within the Rose application as an Add-In and
invoked via the Rose Tool menu. Upon invocation, the current
Rose object model is imported including the necessary
Statecharts, processed using the techniques described in previous
sections, and the files needed for test generation and test
execution generated.
6 In the current implementation of TnT, the initialization code for
instantiating a sink object and registering it with the Transmitter
component has to be written manually.
Figure 9: Generating Tests from within Rational Rose
Figure 10: Class Diagram for TDE/UML
5.2 TECS
The Test Environment for Distributed Component-Based
Software (TECS) specifically addresses test execution. While the
test generation method described in Section 4.2 can only support
components communicating synchronously, TECS already
supports both synchronous and asynchronous communication7.
The test environment is specifically designed for testing COM or
CORBA components during unit and integration testing. The
current version of TECS supports the testing of COM
components. It can be used as part of the TnT environment or as a
standalone tool, and includes the following features:
7 With asynchronous communication, a component under test can send
response events to a sink object at any time and from any thread.
- Test Harness Library - this is a C++ framework that
provides the basic infrastructure for creating the executable
test drivers.
- Test Case Compiler - it is used to generate test cases in
C++ from a test case definition such as the one illustrated in
Figure 6. The generated test cases closely co-operate with the
Test Harness Library. A regular C++ compiler is then used to
create an executable test driver out of the generated code and
the Test Harness Library. The generated test drivers are
COM components themselves, exposing the interfaces
defined through the TECS environment.
- A sink class generator - used to generate C++ sink classes out
of an IDL interface definition file. The generated sink classes
also closely co-operate with the Test Harness Library.
- Test Control Center - provides the user a means of
running test cases interactively through a graphical user
interface or in batch mode. The information generated during
test execution is written into an XML-based tracefile. The
Test Control Center provides different views of this data
such as a trace view, an error list, and an execution summary.
Further views can easily be defined by writing additional
XSL style sheets.
6. Evaluating the Example
In this section, we describe an evaluation of our approach using
the alternating bit protocol example. As discussed in Section 2,
the example comprises four components, each with its own
Statechart and connected using the interfaces depicted in Figure 1.
We are currently applying this approach to a set of products
within different Siemens business units, but results from our
experimentation are not yet available. We are aiming to examine
issues such as the fault detection capabilities of our approach.
6.1.1 Component Statistics
Table 2 shows the number of states and transitions for the four
Statecharts before and after they were imported into TDE/UML
and converted into a normalized global model by the composition
steps described in Section 3.2. We realize that the size of these
components is moderate, but we use them to highlight a number
of issues. For the example, the normalized state machine for each
component is never more than twice the size of its associated
UML Statechart.
Table 2: Component Statistics
6.1.2 Defining an Integration Test
An important decision for the developer is the choice of an
appropriate integration test strategy. Assuming that a bottom-up
integration test strategy is to be used, a developer may wish to
integrate the Transmitter and Timer components first
followed by the Receiver and Comch components. Afterwards,
the two subsystems would be grouped together to form the
complete system. In this case, only the interface between the two
subsystems, txport, would need to be tested. Below, we show
the subsystem definitions for the chosen integration test strategy.
subsystem TransmitterTimer {
components: Transmitter, Timer; }
subsystem ComchReceiver {
components: Comch, Receiver; }
subsystem ABProtocol {
components: Transmitter, Timer, Comch, Receiver;
interface: txport; }
6.1.3 Applying the Composition and Reduction Step
The time taken for the import of these four Statecharts as well as
the execution time for the composition algorithm was negligible.
Table 3 shows the number of states/transitions created during the
composition step as well as the values for when the reduction step
is not applied. Typically, the reduction algorithm is applied after
each composition step.
The values in italic show combinations of components with no
common interface. The numbers for these combinations are very
high as would be expected. Such combinations are generally not
used as intermediate steps. The values in bold indicate the number
of states/transitions used for the above integration test strategy.
The values show how the number of states/transitions can be
substantially reduced as in the case of all four components being
evaluated together as a complete system.
Table 3: Size of Intermediate Results
For this example, when composing a model without the
intermediate reduction steps and instead reducing it after the last
composition step, the same number of states and transitions are
reached. The difference, however, lies in the size of the
intermediate results and the associated higher execution times.
While in this case, the benefit of applying the reduction algorithm
was negligible due to the size of the example, theoretically it
could lead to a significant difference in execution time.
6.1.4 Generating and Executing the Test Cases
The time taken to generate the test cases for all three subsystems
in this example took less than five seconds. TDE/UML generated
a total of 7 test cases for all three subsystems - one test case for
the subsystem TransmitterTimer, three test cases for subsystem
ComchReceiver and three test cases for ABProtocol. In contrast,
an integration approach in which all four components were tested
at once with the corresponding interfaces resulted in a total of 4
tests. In this case, the incremental integration test strategy resulted
in more test cases being generated than the big-bang approach,
but smaller integration steps usually result in a more stable system
and a higher percentage of detected errors. An examination of the
generated test cases shows that they are not free of redundancy or
multiple coverage of communication transitions, but they come
relatively close to the optimum.
7. Related Work
Over the years, there have been numerous papers dedicated to the
subject of test data generation [1,3,8,13,17,19,21]. Moreover, a
number of tools have been developed for use within academia and
the commercial market. These approaches and tools have been
based on different functional testing concepts and different input
languages, both graphical and textual in nature.
However, few received any widespread acceptance from the
software development community at large. There are a number of
reasons for this. First, many of these methods and tools required a
steep learning curve and a mathematical background. Second, the
modeling of larger systems beyond single components could not
be supported, both theoretically and practically. Third, the design
notation, which would be used as a basis for the test design, was
often used only in a particular application domain, for example,
SDL is used predominantly in the telecommunications and
embedded systems domain.
However, with the widespread acceptance and use of UML
throughout the software development community as well as the
availability of suitable tools, this situation may be about to
change. Apart from our approach, we know of only one other
effort in this area. Offutt et al. [12] present an approach similar to
ours in that they generate test cases from UML Statecharts.
However, their approach has a different focus in that they examine
different coverage requirements and are only able to generate tests
for a single component. Furthermore, they do not automate the
test execution step in order for developers to automatically
generate and execute their tests. In addition, they do not
specifically address the problems and issues associated with
modeling distributed, component-based systems.
8. Conclusion and Future Work
In this paper, we described an approach that aims at minimizing
the testing costs, time and effort associated with developing
customized test drivers and test cases for validating distributed,
component-based systems.
To this end, we describe and realize our test generation and test
execution technology and integrate it with a UML-based visual
modeling tool. We show how this approach supports both the unit
and integration testing phases of the component development
lifecycle and can be applied to both COM- and CORBA-based
systems. We briefly outline our implementation strategy and
evaluate the approach using the given example. In the following
paragraphs, we focus on some of the issues resulting from this
work.
Software systems, especially embedded ones, use asynchronous
communication mechanisms with message queuing or shared
(global) messages instead of the synchronous communication
mechanism adopted by our approach. Asynchronous
communication is more complex to model, because it requires the
modeling of these queued messages and events. Furthermore,
communication buffers must be included, when modeling and
composing. Dependent on the implementation, the size of the
event queue can be limited or not. If not, mechanisms have to be
implemented to detect the overflow of queues. When generating
test cases for asynchronously communicating systems, the
complexity may quickly lead to scalability problems that would
need to be examined and addressed in future work. Methods for
asynchronously communicating systems are presented in [5,9, 20].
Component interaction is modeled by our approach using an event
(message) exchange containing no parameters and values. Future
work will result in the modeling of 'parameterized'
communication. To achieve this, the model specification must be
enhanced with annotations about possible data values and types as
well as test requirements for these values. TDE allows test case
generation using data variations with samples out of a possible
range of parameter values. Pre- and post-conditions can constrain
valid data values. These constraints can be checked during test
execution, which extends the error detecting possibilities.
UML allows users to model Statecharts with hierarchical state
machines and concurrent states. While the global behavioral
model presented in this paper can model components with nested
states and hierarchical state machines, the internal data conditions
of these state machines (meaning the global state machine
variables) influencing the transition behavior are not supported.
Concurrent states are also not supported as yet.
In future work, we hope to support the developer with an optimal
integration test strategy. By examining the type and extent of the
interactions between components, our environment could provide
suggestions to the developer as to the order in which components
need to be integrated. This could include analyses of the
intermediate composition steps as well as an initial graphical
depiction of the systems and its interfaces. Such an approach
could significantly influence the effectiveness, efficiency and
quality of the test design.
When modeling real-time systems, timing aspects and constraints
become essential. In future work, we hope to analyze real-time
modeling and testing requirements. For instance, test cases could
be annotated with real-time constraints. Assertions or post-conditions
within the model could also contain such information
which could be checked during test execution.
9. Acknowledgements
We would like to thank Tom Murphy, the Head of the Software
Engineering Department at Siemens Corporate Research as well
as Professor Manfred Broy and Heiko Lötzbeyer at the Technical
University, Munich.
10.
--R
Automatic Generation of Test Scripts from Formal Test Specifications
Reinhold
Testing Refinements by Refining Tests
Distributed Component Systems: The New Computing Model
One test case generation from asynchronously communicating state machines
Prentice Hall
Introduction to Automata Theory
The Automatic Generation of Test Data
Kang S.
Enterprise Java Beans Specification
Communication and Concurrency
Generating Test Cases from UML Specifications.
T: The Automatic Test Case Data Generator
The Unified Modeling Language Reference Manual
Lapone Aleta M.
Trace Analysis
Component Software.
A Case Study in Statistical Testing of Reuseable Concurrent Objects
Sahay P.
--TR
Communicating sequential processes
Software testing techniques (2nd ed.)
Object-oriented modeling and design
Communication and concurrency
Predicate-based test generation for computer programs
Component software
The Unified Modeling Language reference manual
A computer system for generating test data using the domain strategy
Introduction To Automata Theory, Languages, And Computation
A Case Study in Statistical Testing of Reusable Concurrent Objects
Testing Refinements by Refining Tests
--CTR
Kansomkeat , Wanchai Rivepiboon, Automated-generating test case using UML statechart diagrams, Proceedings of the annual research conference of the South African institute of computer scientists and information technologists on Enablement through technology, p.296-300, September 17-19,
Marlon Vieira , Johanne Leduc , Bill Hasling , Rajesh Subramanyan , Juergen Kazmeier, Automation of GUI testing using a model-driven approach, Proceedings of the 2006 international workshop on Automation of software test, May 23-23, 2006, Shanghai, China
Mass Soldal Lund , Ketil Stlen, Deriving tests from UML 2.0 sequence diagrams with neg and assert, Proceedings of the 2006 international workshop on Automation of software test, May 23-23, 2006, Shanghai, China
J. Jenny Li , David Weiss , Howell Yee, Code-coverage guided prioritized test generation, Information and Software Technology, v.48 n.12, p.1187-1198, December, 2006
Mauro Pezz , Michal Young, Testing Object Oriented Software, Proceedings of the 26th International Conference on Software Engineering, p.739-740, May 23-28, 2004
Andr L. L. de Figueiredo , Wilkerson L. Andrade , Patrcia D. L. Machado, Generating interaction test cases for mobile phone systems from use case specifications, ACM SIGSOFT Software Engineering Notes, v.31 n.6, November 2006
Dianxiang Xu , Xudong He, Generation of test requirements from aspectual use cases, Proceedings of the 3rd workshop on Testing aspect-oriented programs, p.17-22, March 12-13, 2007, Vancouver, British Columbia, Canada
A. Hartman , K. Nagin, The AGEDIS tools for model based testing, ACM SIGSOFT Software Engineering Notes, v.29 n.4, July 2004
P. V.R. Murthy , P. C. Anitha , M. Mahesh , Rajesh Subramanyan, Test ready UML statechart models, Proceedings of the 2006 international workshop on Scenarios and state machines: models, algorithms, and tools, May 27-27, 2006, Shanghai, China
G. Friedman , A. Hartman , K. Nagin , T. Shiran, Projected state machine coverage for software testing, ACM SIGSOFT Software Engineering Notes, v.27 n.4, July 2002
Philip Samuel , Rajib Mall , Pratyush Kanth, Automatic test case generation from UML communication diagrams, Information and Software Technology, v.49 n.2, p.158-171, February, 2007 | UML statecharts;COM/DCOM;test generation;CORBA;distributed components;test execution;functional testing |
348916 | Which pointer analysis should I use?. | During the past two decades many different pointer analysis algorithms have been published. Although some descriptions include measurements of the effectiveness of the algorithm, qualitative comparisons among algorithms are difficult because of varying infrastructure, benchmarks, and performance metrics. Without such comparisons it is not only difficult for an implementor to determine which pointer analysis is appropriate for their application, but also for a researcher to know which algorithms should be used as a basis for future advances.This paper describes an empirical comparison of the effectiveness of five pointer analysis algorithms on C programs. The algorithms vary in their use of control flow information (flow-sensitivity) and alias data structure, resulting in worst-case complexity from linear to polynomial. The effectiveness of the analyses is quantified in terms of compile-time precision and efficiency. In addition to measuring the direct effects of pointer analysis, precision is also reported by determining how the information computed by the five pointer analyses affects typical client analyses of pointer information: Mod/Ref analysis, live variable analysis and dead assignment identification, reaching definitions analysis, dependence analysis, and conditional constant propagation and unreachable code identification. Efficiency is reported by measuring analysis time and memory consumption of the pointer analyses and their clients. | INTRODUCTION
Programs written in languages with pointers can be troublesome
to analyze because the memory location accessed
through a pointer is not known by inspecting the statement.
To effectively analyze such languages, knowledge of pointer
behavior is required. Without such knowledge, conserva-
tive assumptions about memory locations accessed through
a pointer must be made. These assumptions can adversely
affect the precision and efficiency of any analysis that requires
this information, such as a program understanding
system, an optimizing compiler, or a testing tool.
A pointer analysis is a compile-time analysis that attempts
to determine the possible values of a pointer. As such an
analysis is, in general, undecidable [16, 28], many approximation
algorithms have been developed that provide a trade-off
between the efficiency of the analysis and the precision
of the computed solution. The worst-case time complexities
of these analyses range from linear to exponential. Because
such worst-case complexities are often not a true indication
of analysis time, many researchers provide empirical results
of their algorithms. However, comparisons among results
from different researchers can be difficult because of differing
program representations, benchmark suites, and preci-
sion/efficiency metrics. In this work, we describe a comprehensive
study of five widely used pointer analysis algorithms
that holds these factors constant, thereby focusing more on
the efficacy of the algorithms and less on the manner in
which the results were obtained.
The main contributions of this paper are the following:
- empirical results that measure the precision and efficiency
of five pointer alias analysis algorithms with
varying degrees of flow-sensitivity and alias data struc-
tures: Address-taken, Steensgaard [34], Andersen [1],
Burke et al. [4, 12], Choi et al. [5, 12];
- empirical data on how the pointer analyses' solutions
affect the precision and efficiency of the following client
analyses: Mod/Ref, live variable analysis and dead
assignment identification, reaching definition analysis,
dependence analysis, and interprocedural conditional
constant propagation and unreachable code identification.
The results show (1) Steensgaard's analysis is significantly
more precise than the Address-taken analysis in terms of direct
precision and client precision, (2) Andersen's and Burke
et al.'s analyses provide the same level of precision and a
modest increase in precision over Steensgaard's analysis, (3)
the flow-sensitive analysis of Choi et al. offers only a minimal
increase in precision over the analyses of Andersen and
Burke et al. using a direct metric and little or no precision
improvement in client analyses, and (4) increasing the precision
of pointer information reduces the client analyses' in-
put, resulting in significant improvement in their efficiency.
The remainder of this paper is organized as follows. Section
describes background for this work and describes how
it differs from similar studies. Section 3 provides an overview
of the five pointer algorithms. Section 4 summarizes the
client analyses. Section 5 describes the empirical study and
discusses the results. Section 6 describes related work and
Section 7 summarizes the conclusions.
2. BACKGROUND
A pointer alias analysis attempts to determine when two
pointer expressions refer to the same storage location. For
example, if p and q both point to the same storage location,
we say *p and *q are aliases, written as <*p, *q>. A points-to
analysis attempts to determine what storage location a
pointer can point to. This information can then be used to
determine the aliases in the program. This work uses the
compact representation [5, 12] of alias information, which
shares the property of the points-to representation [8], in
that it captures the "edge" characteristic of alias relations. 1
For example, if variable a points to b, and b points to c, the
compact representation records only the following alias set:
{<*a, b>, <*b, c>}, from which it can be inferred that <**a, c>
and <**a, *b> are also aliases. The cost and time when such
information is inferred can affect the precision and efficiency
of the analysis [22, 12, 20].
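Written out as a small C fragment (our own illustration, which also compiles as C++), the example reads:

int   c;
int  *b = &c;     /* b points to c */
int **a = &b;     /* a points to b */
/* The compact representation stores only { <*a, b>, <*b, c> };      */
/* <**a, c> and <**a, *b> are inferred from these pairs when needed. */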
Interprocedural data-flow analysis can be classified as flow-sensitive
or flow-insensitive, depending on whether control-flow
information of a procedure is used during the analysis
[23]. By not considering control flow information, and
therefore computing a conservative summary, a flow-insensitive
analysis can be more efficient, but less precise than a
flow-sensitive analysis. In addition to flow-sensitivity, there
are several other factors that affect cost/precision trade-offs
including
Context-sensitivity: Is calling context considered when
analyzing a function?
Heap modeling: Are objects named by allocation site or
is a more sophisticated shape analysis performed?
Struct modeling: Are components distinguished or collapsed
into one object?
Alias representation: Is an explicit alias representation
or a points-to/compact representation used?
This work holds these factors constant, choosing the most
popular and efficient alternatives in each case, so that the
results only vary the usage of flow-sensitivity. In particular,
all analyses are context-insensitive, name heap objects based
on their allocation site, collapse aggregate components, and
use the compact/points-to representation.
This work differs from previous studies [33, 35, 7, 21] in the
following ways:
1 The minor difference between the compact and points-to
representations [12] is not relevant to this work.
- The breadth of pointer algorithms studied; in the only
two studies [35, 21] that also include a flow-sensitive
analysis, the analysis they study [18] also benefits from
being context sensitive and uses a different alias representation
(an explicit one) than the (points-to) flow-insensitive
analyses it is compared with.
- The number of client analyses reported; this work is
the first to report how reaching definitions, flow de-
pendences, and interprocedural constant propagation
are affected by the quality of pointer analysis.
- The reporting of memory usage, which is an important
aspect in evaluating the scalability of interprocedural
data-flow analyses.
3. POINTER ANALYSES
The algorithms we consider, listed in order of increasing
precision, are
Address-taken: a flow-insensitive algorithm often used in
production compilers that records all variables whose
addresses have been assigned to another variable. This
set includes all heap objects and actual parameters
whose addresses are stored in the corresponding for-
mal. This analysis is efficient because it is linear in
the size of the program and uses a single solution set,
but can be very imprecise.
Steensgaard [34]: a flow-insensitive algorithm that computes
one solution set for the entire program and employs
a fast union-find [36] data structure to represent
all alias relations. This results in an almost linear time
algorithm that makes one pass over the program. Similar
algorithms are discussed in [42, 24, 2].
Andersen [1]: an iterative implementation of Andersen's
context-insensitive flow-insensitive algorithm, which was
originally described using constraint-solving [1]. Although
it also uses one solution set for the entire pro-
gram, it can be more precise than Steensgaard's algorithm
because it does not perform the merging required
by the union-find data structure. However, it
does require a fixed-point computation over all pointer-
related statements that do not produce constant alias
relations.
Burke et al. [4, 12]: a flow-insensitive algorithm that also
iterates over all pointer-related statements in the pro-
gram. It differs from Andersen's analysis in that it
computes an alias solution for each procedure, requiring
iteration within each function in addition to iteration
over the functions. A worklist is used in the latter
case to improve efficiency. Distinguishing alias sets
for each function allows precision-improving enhancements
such as using precomputed kill information [4, 12]. 2 Burke
et al.'s analysis can be more precise than
Andersen's analysis because it can filter alias information
based on scoping, i.e., formals and locals from
provably nonactive functions are not considered. It
may be less efficient because it computes a solution
set for each function, rather than one for the whole
program.
2 This particular enhancement never improved precision over
the Burke et al.'s analysis studied in this paper [13]. Thus,
the enhanced version of Burke et al.'s analysis that uses
precomputed kill information is not included in this study.
void main() {
  ...
S5:  ...   /* *p is dereferenced here */
  ...
}
void f() {
  ...
}
void g(T fp) {
  ...
S9:  if (...)
  ...
}
Figure 1: Example program
Choi et al. [5, 12]: a flow-sensitive algorithm that computes
a solution set for every program point. It associates
alias sets with each CFG node in the program
and uses worklists for efficiency [13].
All analyses incorporate (optimistic) function pointer analysis
during the alias analysis by resolving indirect call sites
as the analysis proceeds [8, 4].
In theory, each subsequent analysis is more precise (and
costly) than its predecessors. This paper will help quantify
not only these characteristics, but also how client analyses
are affected by the precision of the pointer analyses.
Consider the program in Figure 1, where main calls f and
g, and f also calls g. The Address-taken analysis computes
only one set of objects that it assumes all pointers may point
to: {heapS1, heapS4, heapS6, heapS8, local, p, q}, all of
which will appear to be referenced at S5.
Steensgaard's analysis unions two objects that are pointed-to
by the same pointer into one object. This leads to the
unioning of the points-to sets of these formerly distinct ob-
jects. This unioning removes the necessity of iteration from
the algorithm. In the example, the formal parameter of g,
fp, may point to either p or q, resulting in p and q being
unioned into one object. Thus, it appears that they both can
point to the heap objects that either can point to: heapS1,
..., local. At the dereference of S5, these
four objects are reported aliased to *p.
Andersen's analysis also keeps one set of aliases that can
hold anywhere in the program, but it does not merge objects
that have a common pointer pointing to them. This leads to
heapS1, heapS4, and local being reported as aliased to *p.
Burke et al.'s analysis associates one set with every function,
which conservatively represents what may hold at any CFG
node in the function, without considering control flow within
the function. This distinction allows the removal of objects
that are no longer active, such as local in functions main
and f. This leads to heapS1 and heapS4 being aliased to \Lambdap
at S5.
Choi et al.'s analysis associates an alias set before (In_n)
and after (Out_n) every CFG node, n. For example, Out_S1 =
{<*p, heapS1>} because *p and heapS1 refer to the same
storage after S1. Choi et al.'s analysis will compute In_S5 =
{<*p, heapS4>}, which is the precise solution for this simple
example.
This example illustrates the theoretical precision/efficiency
levels of the five analyses we study, from Address-taken
(least precise) to Choi et al.'s (most precise). The Address-
taken analysis is our most efficient analysis because it is
linear and uses only one set. Steensgaard's analysis also
uses one set and is almost linear. The other three analyses
require iteration, but differ in the amount of information
stored from one alias set per program (Andersen), one set
per function (Burke et al.), and two per CFG Node (Choi
et al.). 3
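To make the difference in machinery concrete, the following is a minimal sketch, under our own naming, of the union-find style bookkeeping behind a Steensgaard-style analysis; it is not the implementation measured in this paper. Each assignment unifies the pointed-to equivalence classes, which is why a single pass over the program suffices.

#include <map>
#include <string>

struct SteensgaardSketch {
    std::map<std::string, std::string> parent;    // union-find forest
    std::map<std::string, std::string> pointsTo;  // representative -> target member

    std::string find(const std::string& v) {
        if (!parent.count(v)) parent[v] = v;
        if (parent[v] != v) parent[v] = find(parent[v]);   // path compression
        return parent[v];
    }

    // Unify two equivalence classes and, recursively, their target classes.
    void join(const std::string& a, const std::string& b) {
        std::string ra = find(a), rb = find(b);
        if (ra == rb) return;
        parent[rb] = ra;
        std::string ta = pointsTo.count(ra) ? pointsTo[ra] : "";
        std::string tb = pointsTo.count(rb) ? pointsTo[rb] : "";
        if (ta.empty())       { if (!tb.empty()) pointsTo[ra] = tb; }
        else if (!tb.empty())   join(ta, tb);    // merge the pointed-to classes
    }

    void assignAddrOf(const std::string& p, const std::string& x) {   // p = &x
        std::string rp = find(p);
        if (pointsTo.count(rp)) join(pointsTo[rp], x);
        else pointsTo[rp] = find(x);
    }

    void assign(const std::string& p, const std::string& q) {         // p = q
        std::string rp = find(p), rq = find(q);
        std::string tp = pointsTo.count(rp) ? pointsTo[rp] : "";
        std::string tq = pointsTo.count(rq) ? pointsTo[rq] : "";
        if (tp.empty() && tq.empty()) return;    // neither points anywhere yet
        if (tp.empty())      pointsTo[rp] = tq;  // p shares q's target class
        else if (tq.empty()) pointsTo[rq] = tp;  // q shares p's target class
        else                 join(tp, tq);       // unify the two target classes
    }
};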
The analyses have been implemented in the NPIC system,
an experimental program analysis system written in C++.
The system uses multiple and virtual inheritance to provide
an extensible framework for data-flow analyses [14, 26]. A
prototype version of the IBM VisualAge C++ compiler [15]
is used as the front end. The analyzed program is represented
as a program call (multi-) graph (PCG), in which a
node corresponds to a function, and a directed edge represents
a call to the target function. 4 Each function body is
represented by a control flow graph (CFG), where each node
roughly corresponds to a statement. This graph is used to
build a simplified sparse evaluation graph (SEG) [6], which
is used by Choi et al.'s analysis in a manner similar to Wilson
[39]. As no CFG is available for library functions, a call
to a library function is modeled based on the function's semantics
with respect to pointer analysis. This hand-coded
modeling provides the benefits of context-sensitive analysis
of such calls. Library calls that cannot affect the value
of a pointer are treated as the identity transfer function
for pointer analysis. The implementation also assumes that
pointer values will only exist in pointer variables, and that
pointer arithmetic does not result in the pointer outside of
an array. All string literals are modeled as one object. The
implementation handles setjmp/longjmp in a manner similar
to Wilson [39]; all calls to setjmp are recorded and used
to determine the possible effects of a call to longjmp.
To model the values passed as argc and argv to the main
function, a dummy main function was added, which called
the benchmark's main function, simulating the effects of
3 We have found that performing Choi et al.'s analysis using
a SEG (sparse evaluation graph [6]) instead of a CFG
reduces the number of alias sets by an average of over 73%
and reduces analysis time by an average of 280% [13].
4 Indirect calls can result in several potential target functions.
argc and argv. This function also initialized the iob array,
used for standard I/O. The added function is similar to the
one added by Ruf [29, 30] and Landi et al. [19, 17]. Explicit
and implicit initializations of global variables are automatically
modeled as assignment statements in the dummy main
function. Array initializations are expanded into an assignment
for each array component.
4. CLIENT ANALYSES
This section summarizes the client analyses used in this
study.
4.1 Mod/Ref Analysis
Mod/Ref analysis [20] determines what objects may be mod-
ified/referenced at each CFG node. This information is subsequently
used by other analyses, such as reaching definitions
and live variable analysis. This information is computed
by first visiting each CFG node and computing what
objects are modified or referenced by the node. Pointer
dereferences generate a query of the alias information to
determine the objects modified. These results (Mod and
Ref sets) are summarized for each function and used at call
sites to the function. A call site's Mod/Ref set does not
include a local of a function that cannot be on the activation
stack because its lifetime is not active. The actual
parameters at each call site are assumed to be referenced
because their value is assigned to the corresponding formal
parameter (pass-by-value semantics). The Mod/Ref analysis
makes the simplifying assumption that libraries do not
modify or reference locations indirectly through a pointer
parameter. Fixed-point iteration is employed when the program
has PCG cycles.
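As a small illustration (our own example, not drawn from the benchmarks), suppose the pointer analysis reports that p may point to a or b and q may point to c; one reasonable node-level accounting for the statement below is shown in the comments.

int a, b, c;                       // named objects

void node(int *p, int *q) {        // assume p may point to {a, b}, q to {c}
    *p = *q + 1;                   // Mod = {a, b}; Ref = {p, q, c}
}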
4.2 Live Variable Analysis
Live variable analysis [25] determines what objects may be
referenced after a program point without an intervening killing
definition. This information is useful for register allo-
cation, detecting uninitialized variables, and finding dead
assignments. The implementation, a backward analysis, directly
uses the Mod/Ref information. It associates two sets
of live variables with each CFG node representing what is
live before and after execution of the node. Sharing of such
sets is performed when a CFG node has only one successor,
or when the node acts as an identify function, i.e., it has an
empty Mod and Ref set.
All named objects in the Ref set of a CFG node become live
before that node. A named object is killed at a CFG node if
it is definitely assigned (i.e., it is the only element in the Mod
set of a noncall node) and represents one runtime object,
i.e., it is not an aggregate, a heap object, or a local/formal
of a recursive function. The implementation processes each
function once, employing a priority-based worklist of CFG
nodes for each function. It is optimistic; no named objects
are considered live initially, except at the exit node where all
nonlocals that are modified in the function are considered
to be live.
4.3 Reaching Definitions Analysis
Reaching definitions analysis [25] determines what definitions
of named objects may reach (in an execution sense) a
program point. This information is useful in computing data
dependences among statements, an important step for program
slicing [37] and code motion. The implementation, a
forward analysis, uses Mod/Ref information and associates
two sets of reaching definitions with each CFG node. Set
sharing is performed as in live variable analysis.
All named objects in the Mod set of a CFG node result in
new definitions being generated at that node. Definitions
are killed as in live variable analysis. Each function is processed
once, using a priority-based worklist of CFG nodes for
each function. The analysis is optimistic; no definitions are
initially considered reaching any point, except for dummy
definitions created at the entry node of a function for each
parameter or nonlocal that is referenced in the function.
4.4 Interprocedural Constant Propagation
The constant propagation client [26] is an optimistic interprocedural
algorithm inspired by Wegman and Zadeck's
Conditional Constant algorithm [38]. The algorithm tracks
values of variables interprocedurally throughout the program
and uses this information to simultaneously evaluate
conditional branches where possible, thereby determining
if a conditional branch will always evaluate to one value.
In addition to potentially removing unexecutable code, this
analysis can simplify computations and provide useful information
for cloning algorithms.
Because this analysis was designed to be combined with Choi
et al.'s pointer analysis [26, 27], it uses pointer information
directly, rather than using the Mod/Ref sets as was done in
reaching definitions and live variable analysis. In this work,
the constant propagation analysis is simply run after the
pointer analysis is completed. Like Choi et al.'s analysis, the
constant propagation algorithm uses nested iteration and a
SEG. 5 The algorithm extends the traditional lattice of ⊤,
⊥, and constant to include Positive, Negative, and NonZero.
This can help when analyzing C programs that treat nonzero
values as true.
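A rough sketch, under our own naming and with our own assumed ordering (Top and Bottom standing for ⊤ and ⊥), of a meet operation over such an extended lattice; the actual implementation may differ.

enum class Kind { Top, Const, Positive, Negative, NonZero, Bottom };

struct Value {
    Kind kind;
    long c;     // meaningful only when kind == Kind::Const
};

// Meet of two lattice values; Top is the optimistic initial value and
// Bottom means "not a constant and no sign information".
Value meet(Value x, Value y) {
    if (x.kind == Kind::Top) return y;
    if (y.kind == Kind::Top) return x;
    if (x.kind == Kind::Bottom || y.kind == Kind::Bottom) return {Kind::Bottom, 0};
    if (x.kind == Kind::Const && y.kind == Kind::Const && x.c == y.c) return x;
    // Widen a constant to the coarser class it belongs to (0 widens to Bottom).
    auto widen = [](const Value& v) -> Kind {
        if (v.kind != Kind::Const) return v.kind;
        return v.c > 0 ? Kind::Positive
             : v.c < 0 ? Kind::Negative
             : Kind::Bottom;
    };
    Kind a = widen(x), b = widen(y);
    if (a == b) return {a, 0};
    if (a != Kind::Bottom && b != Kind::Bottom) return {Kind::NonZero, 0};
    return {Kind::Bottom, 0};
}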
5. RESULTS
This study was performed on a 333MHz IBM RS/6000 PowerPC
604e with 512MB RAM and 817MB paging space, running
AIX 4.3. The analyses were compiled with IBM's xlC
compiler using the "-O3" option. For each benchmark the
following are reported for all pointer analyses and clients:
precision, analysis time, and the maximum memory usage.
Table 1 describes characteristics of the benchmark suite,
which contains 23 C programs provided by other researchers
[19, 8, 29, 31] and the SPEC benchmark suites. 6 LOC is
computed using wc on the source and header files. The
column marked "Fcts", the number of user-defined func-
tions, includes the dummy main function, created to simulate command-line argument passing.
5 However, the SEG benefits are not as dramatic; most CFG
nodes are "interesting" to constant propagation, and thus,
the efficiency is typically worse than Choi et al.'s analysis.
6 The large number of CFG nodes for 129.compress results
from the explicit creation of assignment statements for implicit
array initialization. Some programs had to be syntactically
modified to satisfy C++'s stricter type-checking
semantics. A few program names are different than those
reported in [29]. Namely, ks was referred to as part, and ft
as span [30]. Also, the SPEC CINT92 program 052.alvinn
was named backprop in Todd Austin's benchmark suite [3].
Table 1: Static Characteristics of Benchmark Suite.
Name Source LOC CFG Nodes Fcts Ptr-Asg Nodes Pct
allroots Landi 227 159 7 1.3%
01.qbsort McCat 325 170 8 24.1%
06.matx McCat 350 245 7 13.5%
15.trie McCat 358 167 13 23.4%
04.bisect McCat 463 175 9 9.7%
fixoutput PROLANGS 477 299 6 4.4%
17.bintr McCat 496 193 17 8.8%
anagram Austin 650 346
ks Austin 782 526 14 27.4%
05.eks McCat 1,202 677
08.main McCat 1,206 793 41 20.9%
09.vor McCat 1,406 857 52 28.6%
loader Landi 1,539 691
129.compress SPEC95 1,934 17,012 25 0.2%
football Landi 2,354 2,854 58 1.8%
compiler Landi 2,360 1,767 40 5.1%
assembler Landi 3,446 1,845 52 16.6%
simulator Landi 4,639 2,929 111 6.3%
flex PROLANGS 7,659 7,107 88 5.2%
late command-line argument passing. The column marked
"Ptr-Asg Nodes Pct" reports the percentage of CFG nodes
that are considered pointer-assignment nodes, i.e., the number
of assignment nodes where the left side variable involved
in the pointer expression is declared to be a pointer.
5.1 Pointer Analysis Precision
The most direct way to measure the precision of a pointer
analysis is to record the number of objects aliased to a
pointer expression appearing in the program. Using this
metric, Andersen's and Burke et al.'s analyses provide the
same level of precision for all benchmarks, suggesting that
alias relations involving formals or locals from provably non-active
functions do not occur in this benchmark suite. Because
all client analyses use the alias solution computed by
these analyses as their input, there is, likewise, no precision
difference in these clients. For this reason, we group these
two analyses together in the precision data. The efficiency
results of Section 5.6 distinguish these analyses.
A pointer expression with multiple dereferences, such as
***p, is counted as multiple dereference expressions, one
for each dereference. The intermediate dereferences (*p and
**p) are counted as reads. The last dereference (***p)
is counted as a read or write depending on the context
of the expression. Statements such as (*p)++ and *p +=
increment are treated as both a read and a write of *p. A
pointer is considered to be dereferenced if the variable is declared
as a pointer or an array formal parameter, and one
or more of the "*", "->", or "[ ]" operators are used with
that variable. Formal parameter arrays are included because
their corresponding actual parameter(s) could be a pointer.
We do not count the use of the "[ ]" operator on arrays that
are not formal parameters because the resulting "pointer"
(the array name) is constant, and therefore, counting it may
skew results.
The left half of Table 2 reports the average size of the Mod
and Ref sets for expressions containing a pointer dereference
for each benchmark and the average of all benchmarks. 7
This table, and the rest in this paper, use "-" to signify
a value that is the same as in the previous column. For
example, the Ptr-Mod for allroots is the same for Choi et
al.'s analysis and Andersen/Burke et al.'s analyses.
The results show
1. a substantial difference between the Address-taken analysis
and Steensgaard's analysis: (i) an average of 30.26
vs. 4.03 and an improvement in all benchmarks that
assigned through a pointer for Ptr-Mod, and (ii) an
average of 30.70 vs. 4.87 and an improvement in all
benchmarks for Ptr-Ref;
2. a measurable difference between Steensgaard's analysis
and Andersen/Burke et al.'s analyses: (i) an average of
4.03 vs. 2.06 and an improvement in 15 of the 22 benchmarks
that assign through a pointer for Ptr-Mod, (ii)
and an average 4.87 vs. 2.35 and an improvement in
13 of the 23 benchmarks for Ptr-Ref;
3. little difference between Andersen/Burke et al.'s analyses
and Choi et al.'s analysis: (i) an average of 2.06
vs. 2.02 and an improvement in 5 of the 22 benchmarks
that assign through a pointer for Ptr-Mod, and
(ii) 2.35 vs. 2.29 and an improvement in 5 of the 23
benchmarks for Ptr-Ref.
In summary, varying degrees of increased precision can be
gained by using a more precise analysis. However, as more
precise algorithms are used, the improvement diminishes.
5.2 Mod/Ref Precision
The right half of Table 2 reports the average Mod/Ref set
size for all CFG nodes. This captures how the pointer analysis
affects Mod/Ref analysis, which serves as input to many
other analyses. The results show
1. a substantial difference between the Address-taken analysis
and Steensgaard's analysis: (i) an average of 2.50
vs. 1.04 and an improvement in 22 of 23 benchmarks
for Mod, and (ii) 4.48 vs. 1.75 and an improvement in
all 23 benchmarks for Ref;
2. a measurable difference between Steensgaard's analysis
and Andersen/Burke et al.'s analyses: (i) an average
of 1.04 vs. 0.87 and an improvement in 13 of 23
benchmarks for Mod, and (ii) 1.75 vs. 1.54 and an improvement
in 11 of 23 benchmarks for Ref;
3. little difference between Andersen/Burke et al.'s analyses
and Choi et al.'s analysis: (i) an average of 0.871
vs. 0.867 and an improvement in 3 of 23 benchmarks for
both Mod; and (ii) an average of 1.540 vs. 1.536 and
an improvement in 4 of 23 benchmarks for Ref.
7 The modeling of potentially many runtime objects with one
representative object may seem more precise when compared
to a model that uses more names [29, 20]. For example, if the
heap was modeled as one object, all heap-directed pointers
would be "resolved" to one object in Table 2.
Table 2: Mod and Ref at pointer dereferences and all CFG nodes. No assignments through a pointer occur
in compiler.
et al.
Ptr Mod Ptr Ref Mod Ref
Name AT St A/B Ch AT St A/B Ch AT St A/B Ch AT St A/B Ch
allroots 3 2.00 1.00 - 3 2.00 1.38 - .88 .85 .83 - 1.77 1.58 1.52 -
04.bisect 14 1.15 - 14 1.00 - 2.57 .58 - 3.92 1.57 -
anagram
ks 17 1.90 1.86 1.62 17 1.79 - 1.74 1.70 .56 .55 .53 3.76 1.35 - 1.34
05.eks
09.vor 19 1.85 1.35 1.32 19 1.92 1.68 1.60 2.04 .63 .62 - 7.91 1.40 1.34 -
loader
129.compress 13 1.40 1.07 - 13 2.26 1.11 - 1.68 .80 .78 - 1.66 1.29 1.28 -
football
compiler
assembler 87 1.24 2.21 - 87 15.14 2.11 - 1.21 1.88 .87 - 15.07 4.09 1.47 -
simulator 87 3.16 2.05 - 87 3.95 1.86 - 6.82 .62 .57 - 8.21 1.21 1.06 -
flex 56 5.37 1.78 - 56 5.09 2.03 2.01 5.97 1.60 1.18 - 1.55 3.89 3.44 3.43
Average 30.26 4.03 2.06 2.02 30.70 4.87 2.35 2.29 2.50 1.04 0.871 0.867 4.48 1.75 1.540 1.536
Once again varying degrees of increased precision can be
gained by using a more precise analysis. However, the improvements
are not as dramatic as in the previous metric,
resulting in minimal precision gain from the flow-sensitive
analysis.
5.3 Live Variable Analysis and Dead Assignment Identification
The first set of four columns in Table 3 reports precision
results for live variable analysis. For each benchmark we
list the average number of live variables at each CFG node
and the average of these averages. Live variable information
is used to find assignments to variables that are never used,
i.e., dead assignments. The second set of four columns
gives the number of CFG nodes that are dead assignments.
The results show
1. a substantial difference between the Address-taken analysis
and Steensgaard's analysis for live variables -
on average 34.24 vs. 20.13 and an improvement in all
benchmarks - but no difference for finding dead assignments
2. a significant difference between Steensgaard's analysis
and Andersen/Burke et al.'s analyses for live variables
- an average of 20.13 vs. 18.36 and an improvement
in 13 of 23 benchmarks - but less of a difference for
finding dead assignments: an average of 1.91 vs. 1.96
and an improvement in only 1 of 23 benchmarks.
3. a small difference between Andersen/Burke et al.'s analyses
and Choi et al.'s analysis for live variables - an
average of 18.36 vs. 18.30 and an improvement in 3 of
benchmarks - but no difference for finding dead
assignments.
In summary, more precise pointer analyses improved the
precision of live variable analysis, but Choi et al.'s analysis
provided only minimal improvement. In contrast, dead
assignments identification was hardly affected by using different
pointer analyses.
5.4 Reaching Definitions and Flow Dependences
The third set of four columns in Table 3 reports precision
results for reaching definitions analysis. For each benchmark
we list the average number of definitions that reach a
CFG node. The last set of four columns reports the average
number of unique flow dependences between two CFG nodes
per function. This metric captures reaching definitions that
are used at a CFG node, but counts dependences between
the same two nodes only once. Thus, if a set of variables
are potentially defined at one node and potentially used at
another node, only one dependence is counted because only
one such dependence is needed to prohibit code motion of
the two nodes or to be part of a slice.
The results show
1. a significant difference between the Address-taken analysis
and Steensgaard's analysis: (i) an average of 36.39
vs. 22.04 and an improvement in all 23 benchmarks for
reaching definitions, and (ii) an average of 52.51 vs.
44.24 and an improvement in 21 of 23 benchmarks for
flow dependences;
2. a measurable difference between Steensgaard's analysis
and Andersen/Burke et al.'s analyses: (i) an aver-
Table 3: Live variables, dead assignments, reaching definitions, and flow dependences.
Avg live variables at a Node Total dead assignments Avg reaching defs at a node Avg flow deps per function
Name AT St A/B Ch AT St A/B Ch AT St A/B Ch AT St A/B Ch
allroots
04.bisect
fixoutput
anagram
ks
05.eks
08.main
09.vor 20.33 6.96 6.85 - 2 - 23.68 7.35 7.26 - 34.92 26.52 26.20 -
loader 50.16 21.76
129.compress
compiler 43.73 43.70 -
assembler 82.24 37.36 20.75
simulator
Average 34.24 20.13 18.36
age of 22.04 vs. 20.21 and an improvement in 12 of 23
benchmarks for reaching definitions, and (ii) an average
of 44.24 vs. 43.84 and an improvement in 9 of 23
benchmarks for flow dependences;
3. a negligible difference between Andersen/Burke et al.'s
analyses and Choi et al.'s analysis for reaching definitions
- an average of 20.21 vs. 20.16 and an improvement
in 5 of 23 benchmarks - but no difference in
flow dependences for any benchmark.
In summary, each successively more precise analysis results
in an improvement of precision of reaching definitions, but
this improvement is diminished when flow dependences are
computed. In particular, there is no gain in flow dependences
precision using Choi et al.'s analysis over Ander-
sen/Burke et al.'s analyses and only minor improvements in
using Andersen/Burke et al.'s analyses over Steensgaard's
analysis.
5.5 Constant Propagation and Unexecutable Code Detection
The constant propagation precision results are shown in Table 4. After the benchmark name the first four columns give
the number of complete expressions found to be constant.
This metric does not count subexpressions such as "b" in
=b+c;". The next four columns report the number of
unexecutable nodes found by the analysis. The results show
1. a significant difference between the Address-taken analysis
and Steensgaard's analysis: (i) an average of 7.8
vs. 10.6 constants found, but an improvement in only
3 of 22 benchmarks, and (ii) an average of 3.2 vs. 25.3
unexecutable nodes detected, but an improvement in
only 2 of 22 benchmarks;
Table 4: Constants and unexecutable CFG nodes found.
"AT" = Address-taken, "St" = Steensgaard's, "A/B" = Andersen/Burke et al., "Ch" = Choi et al. 099.go is not included because it exhausts the 200MB heap size.
Constants Unexecutable Nodes
Name AT St A/B Ch AT St A/B Ch
allroots
04.bisect
ks
05.eks
08.main 36 -
loader
129.compress 34 - 5 -
compiler
assembler
simulator
flex
Average 7.8 10.6 10.7 - 3.2 25.3 25.4 -
2. a negligible difference between Steensgaard's analysis
and Andersen/Burke et al.'s analyses: (i) an average
of 10.6 vs. 10.7 constants found, an improvement in
only 1 of 22 benchmarks, and (ii) an average of 25.3
vs. 25.4 unexecutable nodes detected, an improvement
in only 1 of 22 benchmarks;
3. no difference between Andersen/Burke et al.'s analyses
and Choi et al.'s analysis in terms of constants found
and unexecutable nodes detected.
In summary, constant propagation and unexecutable code
detection does not seem to benefit much from increasing
precision beyond Steensgaard's analysis.
5.6 Efficiency
The efficiency of an algorithm can vary greatly depending on
the implementation [13] and therefore, care must be taken
when drawing conclusions regarding efficiency. For example,
Fähndrich et al. [9] have demonstrated that the efficiency of
a constraint solving implementation of Andersen's algorithm
can be improved by orders of magnitude, without a loss of
precision, using partial online cycle detection and inductive
form.
Table 5 presents the analysis time in seconds of five individual
runs for each benchmark. The runs differ only in the
pointer analysis used. The times are given for the pointer
analysis, the total time for all client analyses, and the sum
of these two values. The time reported does not include
the time to build the PCG and CFGs, but does include any
analysis-specific preprocessing, such as the building of the
SEG from the CFG in Choi et al.'s analysis. The last line
gives the average for each column expressed as a ratio of the
Address-taken analysis for each category: pointer analysis,
clients, and total. For example, the average pointer analysis
time of Andersen's analysis is 29.60 times that of the
average pointer analysis time of the Address-taken analysis,
but the average of the client analyses using this information
is .84 times the average of the same client analyses using
the alias information from the Address-taken analysis. The
results show
1. the Address-taken and Steensgaard's analyses are very
fast; in all benchmarks these analyses completed in less
than a second;
2. the flow-insensitive analyses of Andersen and Burke
et al. are significantly slower (approximately 30 times)
than the Address-taken and Steensgaard's analyses;
3. the flow-sensitive analysis of Choi et al. is on average about 79
times slower than the Address-taken analysis and
about 2.5 times slower than the Andersen/Burke et
al.'s analyses;
4. the client analyses improved in efficiency as the pointer
information was made more precise because the input
size to these client analysis is smaller. On average this
reduction outweighed the initial costs of the pointer
analysis for Steensgaard, Andersen, and Burke et al.'s
analyses compared to the Address-taken analysis, and
brought the total time of the flow-sensitive analysis
of Choi et al.'s to within 9% of the total time of the
Address-taken analysis.
Table 6 reports the high-water mark of memory usage during
the various analyses as reported by the "ps v" command
under AIX 4.3. As before, the amounts are given for the
pointer analysis, the total memory for all client analyses,
and the sum of these two values. The last line gives the
average for each column expressed as a ratio of the Address-
taken analysis for each category: pointer analysis, clients,
and total. The results show
1. the memory consumption of the Address-taken and
Steensgaard's analyses are similar;
2. the memory consumption of the flow-sensitive analysis
of Choi et al. can be over 6 times larger than any of the
other pointer analyses (flex), and on average uses 12
times more memory than the Address-taken analysis;
3. once again, the memory usage of the client analyses
improves as the precision of pointer information
increases; on average the clients using the information
produced by Choi et al.'s analysis used the least
amount of memory, which was enough to overcome the
twelve-fold increase in pointer analysis memory consumption
over the Address-taken analysis.
6. RELATED WORK
Because of space constraints we limit this section to other
comparative studies of pointer analyses. A more thorough
treatment of related work can be found in [12, 20, 39].
Ruf [29] presents an empirical study of two algorithms: a
flow-sensitive algorithm similar to Choi et al. and a context-sensitive
version of the same algorithm. The context-sensitive
algorithm did not improve precision at pointer dereferences,
but Ruf cautioned that this may be a characteristic of the
benchmark suite.
Shapiro and Horwitz [32] present an empirical comparison
of four flow-insensitive algorithms: Address-taken, Steens-
gaard, Andersen, and a fourth algorithm [33] that can be parameterized
between Steensgaard's and Andersen's analysis.
The authors measure the precision of these analyses using
procedure-level Mod, live and truly live variables analyses,
and an interprocedural slicing algorithm. Their results suggest
that a more precise analysis will improve the precision
and efficiency of its clients, but leave as an open question
whether a flow-sensitive analysis will follow this pattern.
Landi et al. [20, 35] report precision results for the computation
of the interprocedural Mod problem using the flow-sensitive
context-sensitive analysis of Landi and Ryder [18].
They compare this analysis with an analysis [42] that is
similar to Steensgaard's analysis. They found that the more
precise analysis provided improved precision, but exhausted
memory on some programs that the less precise analysis was
able to process.
Emami et al. [8] report precision results for a flow-sensitive
context-sensitive algorithm. Ghiya and Hendren [11] empir-
Table 5: Analysis Time in Seconds
Pointer Analysis Clients Total
Name AT ST An Bu Ch AT ST An Bu Ch AT ST An Bu Ch
allroots
04.bisect
ks
05.eks
loader
129.compress
compiler
assembler
simulator
flex
Ratio to AT 1.00 0.90 29.60 32.92 79.49 1.00 0.81 0.84 0.71 0.69 1.00 0.82 0.98 0.87 1.09
Table 6: Memory Usage in MBs
Pointer Analysis Clients Total
Name AT ST An Bu Ch AT ST An Bu Ch AT ST An Bu Ch
allroots
04.bisect 1.01 0.61 2.25 0.00 0.50 0.89 0.91 0.55 0.66 0.60 1.90 1.52 2.80 0.66 1.10
fixoutput
anagram 0.50 0.28 1.62 0.04 0.25 1.15 1.10 0.91 0.96 1.37 1.65 1.38 2.53 1.00 1.62
ks 0.26 0.31 2.39 0.42 1.63 2.79 2.38 2.36 2.31 2.86 3.05 2.69 4.75 2.73 4.49
05.eks 0.00 0.10 2.04 0.18 0.75 2.13 1.86 1.69 1.72 1.66 2.13 1.96 3.73 1.90 2.41
08.main 0.19 0.00 3.39 0.76 2.72 2.66 1.89 1.47 1.43 1.42 2.85 1.89 4.86 2.19 4.14
loader
129.compress
compiler 0.61 1.22 4.20 1.05 2.19 14.60 15.03 14.11 13.80 13.72 15.21 16.25 18.31 14.85 15.91
assembler
simulator
Ratio to AT 1.00 1.15 8.52 3.15 12.19 1.00 0.81 0.77 0.71 0.70 1.00 0.82 0.89 0.74 0.87
ically demonstrate how a version of points-to [8] and connection
analyses [10] can improve traditional transformations,
array dependence testing, and program understanding.
Wilson and Lam [40, 39] present a context-sensitive algorithm
that avoids redundant analyses of functions for similar
calling contexts. The algorithm distinguishes structure
components and handles pointer arithmetic. Wilson [39]
compares various levels of context-sensitivity and describes
how dependence analysis uses the computed information to
parallelize loops in two SPEC benchmarks.
Diwan et al. [7] examine the effectiveness of three type-based
flow-insensitive analyses for a type-safe language (Modula-
3). The first two algorithms rely on type declarations. The
third considers assignments in a manner similar to Steens-
gaard's analysis, but retains declared type information. They
evaluate the effect of these algorithms on redundant load
elimination using statical, dynamic, and upper bound met-
rics. They conclude that for type-safe languages such as
Modula-3 or Java, a fast and simple type-based analysis
may be sufficient.
In an earlier paper [13], we describe an empirical comparison
of four context-insensitive pointer algorithms: three described
in this paper (Choi et al., Burke et al., Address-
taken) and a flow-insensitive algorithm that uses precomputed
kill information [4, 12]. No alias analysis clients are
studied. The paper also quantifies analysis-time speed-up of
various implementation techniques for Choi et al.'s analysis.
Yong et al. [41] present a tunable pointer-analysis framework
for handling structures in the presence of casting. They provide
experimental results from four instances of the frame-work
using a flow- and context-insensitive algorithm, which
appears to be similar to Andersen's algorithm. Their results
show that for this pointer algorithm distinguishing struct
components can improve precision where pointers are dereferenced
(the metric used in Section 5.1). They do not address
how this affects the precision of client analyses or if
similar results hold for other pointer analyses.
Liang and Harrold [21] describe a context-sensitive flow-insensitive
algorithm and empirically compare it to three
other algorithms: Steensgaard, Andersen, and Landi and
Ryder [18], using Ptr-Mod (Section 5.1), summary edges in
a system dependence graph, and average slice size as precision
metrics. They demonstrate performance and precision
mostly between Andersen's and Steensgaard's algorithms.
None of the implementations handles function pointers or
setjmp/longjmp.
7. CONCLUSIONS
This paper describes an empirical study of the precision and
efficiency of five pointer analyses and typical clients of the
alias information they compute. The major conclusions are
. Steensgaard's analysis is significantly more precise than
the Address-taken analysis without an appreciable increase
in compilation time or memory usage, and therefore
should always be preferred over the Address-taken
analysis.
. The flow-insensitive analyses of Andersen and Burke et
al. provide the same level of precision. Both analyses
offer a modest increase in precision over Steensgaard's
analysis. Although this improvement requires additional
pointer analysis time, it is typically offset by
decreasing the input size (the alias information) and
analysis time of subsequent analyses. There is not a
clear distinction in analysis time or memory usage between
the implementations of these analyses.
. The use of flow-sensitive pointer analysis (as described
in this paper) does not seem justified because it offers
only a minimum increase in precision over the analyses
of Andersen and Burke et al. using a direct metric
(such as ptr-mod/ref) and little or no precision improvement
in client analyses.
. The time and space efficiency of the client analyses
improved as the pointer analysis precision increased
because the increase in precision reduced the input to
these client analysis.
8. ACKNOWLEDGMENTS
We thank Vivek Sarkar for his support of this work and
NPIC group members who have assisted with the implemen-
tation. We also thank Todd Austin, Bill Landi, and Laurie
Hendren for making their benchmarks available. We thank
Frank Tip, Laureen Treacy, and the anonymous referees for
comments on an earlier draft of this work. This work was
supported in part by the National Science Foundation under
grant CCR-9633010, by IBM Research, and by SUNY
at New Paltz Research and Creative Project Awards.
9. REFERENCES
Program Analysis and Specialization for the C Programming Language.
Effective whole-program analysis in the pressence of pointers
Efficient flow-sensitive interprocedural computation of pointer-induced aliases and side effects
Automatic construction of sparse data flow evaluation graphs.
Partial online cycle elimination in inclusion constraint graphs.
Connection analysis: A practical interprocedural heap analysis for C.
Putting pointer analysis to work.
Interprocedural pointer alias analysis.
Assessing the effects of flow-sensitivity on pointer alias analyses
Traveling through Dakota: Experiences with an object-oriented program analysis system
The architecture of Montana: An open and extensible programming environment with an incremental C
Undecidability of static analysis.
Personal communication
A safe approximate algorithm for interprocedural pointer aliasing.
Interprocedural modification side effect analysis with pointer aliasing.
A schema for interprocedural modification side-effect analysis with pointer aliasing
Efficient points-to analysis for whole-program analysis
Defining flow sensitivity in data flow problems.
Static Analysis for a Software Transformation Tool.
Advanced Compiler Design and Imlementation.
Conditional pointer aliasing and constant propagation.
Combining interprocedural pointer analysis and conditional constant propagation.
The undecidability of aliasing.
Personal communication
The effects of the precision of pointer analysis.
Fast and accurate flow-insensitive point-to analysis
Comparing flow and context sensitivity on the modifications-side-effects problem
Data structures and network flow algorithms.
A survey of program slicing techniques.
Constant propagation with conditional branches.
Efficient Context-Sensitive Pointer Analysis for C Programs
Efficient context-sensitive pointer analysis for C programs
Pointer analysis for programs with structures and casting.
Program decomposition for pointer aliasing: A step toward practical analyses.
Keywords: interprocedural pointer analysis; data flow analysis
348938 | Simplifying failure-inducing input. | Given some test case, a program fails. Which part of the test case is responsible for the particular failure? We show how our delta debugging algorithm generalizes and simplifies some failing input to a minimal test case that produces the failure.In a case study, the Mozilla web browser crashed after 95 user actions. Our prototype implementation automatically simplified the input to 3 relevant user actions. Likewise, it simplified 896~lines of HTML to the single line that caused the failure. The case study required 139 automated test runs, or 35 minutes on a 500 MHz PC. | INTRODUCTION
Often people who encounter a bug spend a lot of time
investigating which changes to the input file will make the bug go
away and which changes will not affect it.
- Richard Stallman, Using and Porting GNU CC
The Mozilla engineers faced imminent doom. In July 1999, more
than 370 open bug reports were stored in the bug data base, ready
to be simplified. "Simplifying" meant: turning these bug reports
into minimal test cases, where every part of the input would be significant
in reproducing the failure. Overwhelmed with work, the
engineers sent out the Mozilla BugAThon call for volunteers that
would help them process bug reports: For 5 bug reports simpli-
fied, a volunteer would be rewarded with an invitation to the launch
would earn him a T-shirt signed by the grateful engineers
[9].
Decomposing specific bug reports into simple test cases does not
only trouble the engineers of Mozilla, Netscape's open source web
browser project [8]. The problem arises from generally conflicting
issues: A bug report must be as specific as possible, such that the
engineer can recreate the context in which the program failed. On
the other hand, a test case must be as simple as possible, because
a minimal test case implies a most general context. Thus, a minimal
test case not only allows for short problem descriptions and
valuable problem insights, but it also subsumes several current and
future bug reports.
The striking thing about test case simplification is that no one so
far has thought to automate this task. Several textbooks and guides
about debugging are available that tell how to use binary search in
order to isolate the problem-based on the assumption that the test
is carried out manually, too. With an automated test, however, we
can also automate test case simplification.
This is what we describe in this paper. Our delta debugging algorithm
ddmin is fed with a test case, which it simplifies by successive
testing. ddmin stops when a minimal test case is reached, where removing
any single input entity would cause the failure to disappear.
In general, ddmin requires a time of O(n^2) given an input of n en-
tities. A well-structured input leads to better performance: in the
best case, where a single input entity causes the failure, ddmin requires
logarithmic time to find the entity. ddmin can be tailored
with language-specific knowledge.
We begin with a discussion of the problem and the basic ddmin
algorithm. Using a number of real-life failures, we show how the
ddmin algorithm detects failure-inducing input and how this test
case is isolated and simplified. We close with discussions of related
and future work.
2. CONFIGURATIONS AND TESTS
Ian Hickson stayed up until 5:40 a.m.
and simplified bugs the first night of the BugAThon.
Mozilla BugAThon call
Let us begin with some basic definitions. First of all, what does a
"minimal" test case mean?
For every program, there is some smallest possible input that induces
a well-defined behavior which does not qualify as a failure.
Typically, this is the empty input, or something very close. Here are
some examples:
. A C compiler accepts an empty translation unit (= an empty
C file) as smallest possible input.
. When given an empty input, a WWW browser is supposed to
produce a defined error message.
. When given an empty input file, the L A T E X typesetting system
is supposed to produce an error message.
It should be noted that the smallest possible input is not necessarily
the smallest valid input; even an invalid input is possible as long as
the program does not fail.
Let us now view a failure-inducing input C as the result of applying
a number of changes Δ1, Δ2, ..., Δn to the minimal possible
input. This way, we have a gradual transition from the minimal
possible input (= no changes applied) to C (= all changes applied).
We deliberately do not give a formal definition of a change here.
In general, a # i can stand for any change in the circumstances that
influences the execution of the program. In our previous work, for
instance, we had modeled # i as changes to the program code [15].
In this paper, we search for failure-inducing circumstances in the
program input; hence, a change is any operation that is applied on
the input. The only important thing is that applying all changes
results in the failure-inducing set C .
In the case studies presented in this paper, we have always chosen
changes as a lexical decomposition of the failure-inducing input.
That is, each # i stands for a lexical entity that can be present (the
change is applied) or not (the change is not applied). As an ex-
ample, consider a minimal possible input which is empty, and a
failure-inducing input consisting of n lines of text. Each change # i
would add the i -th line to the empty input, such that applying all
changes results in the full set of lines. Modeling changes as lexical
decomposition is the easiest approach, but the model can easily
extend to other notions of changes.
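As an illustration, a line-based decomposition and the construction of a test case from a change subset could be sketched as follows in Python (decompose and apply_changes are hypothetical helpers, not part of the paper):

def decompose(failing_input):
    # model each line of the failing input as one change; any other lexical
    # decomposition (e.g. characters) works the same way
    lines = failing_input.splitlines(keepends=True)
    return list(range(len(lines))), lines     # the change set C (as indices) and the raw lines

def apply_changes(subset, lines):
    # apply the selected changes to the (empty) minimal possible input,
    # preserving the original order of the lines
    return "".join(lines[i] for i in sorted(subset))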
Still treating changes as given entities, let us now formally define
tests and test cases. We can describe any test case between the
minimal possible input and C as a configuration of changes:
Definition 1 (Test case) Let C = {Δ1, Δ2, ..., Δn} be a set of
changes Δi. A change set c ⊆ C is called a test case. 1
A test case is constructed by applying changes to the minimal possible
input:
Definition 2 (Minimal possible input) An empty test case c = ∅
is called the minimal possible input.
We do not impose any constraints on how changes may be com-
bined; in particular, we do not assume that changes are ordered. In
the worst case, there are 2^n possible test cases for n changes.
To determine whether a test case induces a failure, we assume a
testing function. According to the POSIX 1003.3 standard for testing
frameworks [5], we distinguish three outcomes:
1 The definitions in this section are adapted from our previous
work [15]. See Section 8 for a discussion.
. The test succeeds (PASS, written here as 4).
. The test has produced the failure it was intended to capture (FAIL, written here as 8).
. The test produced indeterminate results (UNRESOLVED, written here as ?). 2
Definition 3 (Test) The function test : 2^C → {8, 4, ?} determines
for a test case c ⊆ C whether some given failure occurs (8)
or not (4) or whether the test is unresolved (?).
In practice, test would construct the test case by applying the given
changes to the minimal possible input, feed the test case to a program
and return the outcome.
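In a harness, this boils down to mapping one program run onto the three POSIX outcomes. The sketch below assumes two hypothetical callables, run_program and failure_signature, and is only meant to illustrate the shape of such a test function:

PASS, FAIL, UNRESOLVED = "pass", "fail", "unresolved"   # the POSIX 1003.3 outcomes

def make_test(run_program, failure_signature):
    def test(candidate_input):
        status, output = run_program(candidate_input)
        if failure_signature(status, output):
            return FAIL          # the one failure we want to reproduce occurs
        if status == 0:
            return PASS          # well-defined, non-failing behavior
        return UNRESOLVED        # anything else is indeterminate
    return test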
Let us now model our initial scenario. We have some minimal possible
input that works fine and some test case that fails:
Axiom 4 (Failing test case) The following holds:
. test(∅) = 4, and
. test(C) = 8 ("failing test case").
Our goal is now to simplify the failing test case C-that is, to minimize
it. A test case c being "minimal" means that no subset of c
causes the test to fail. Formally:
Definition 5 (Minimal test case) A test case c ⊆ C is minimal if
∀c' ⊂ c: test(c') ≠ 8 holds.
This is what we want: minimizing a test case C such that all parts
are significant in producing the failure-nothing can be removed
without making the failure disappear.
3. MINIMALITY OF TEST CASES
A simplified test case means the simplest possible web page that
still reproduces the bug. If you remove any more characters from
the file of the simplified test case, you no longer see the bug.
Mozilla BugAThon call
How can one actually determine a minimal test case? Here comes
bad news. Let there be some test case c consisting of |c| changes
(characters, lines, functions inserted) to the minimal input. Relying
on test alone to determine minimality requires testing all 2^|c| - 1
true subsets of c, which obviously has exponential complexity. 3
What we can determine, however, is an approximation-for in-
stance, a test case where every part on its own is still significant
in producing the failure, but we do not check whether removing
several parts at once might make the test case even smaller. For-
mally, we define this property as 1-minimality, where n-minimality
is defined as:
2 The POSIX 1003.3 standard also lists UNTESTED and UNSUPPORTED outcomes,
which are of no relevance here.
3 To be precise, Axiom 4 tells us the result of test(#), such that only
subsets need to be tested, but this does not help much.
Minimizing Delta Debugging Algorithm
The minimizing delta debugging algorithm ddmin(c) is
ddmin(c) = ddmin2(c, 2), where
ddmin2(c, n) =
ddmin2(ci, 2) if test(ci) = 8 for some i ("reduce to subset")
ddmin2(c - ci, max(n - 1, 2)) else if test(c - ci) = 8 for some i ("reduce to complement")
ddmin2(c, min(2n, |c|)) else if n < |c| ("increase granularity")
c otherwise ("done").
Here c = c1 ∪ c2 ∪ ... ∪ cn such that the ci are pairwise disjoint, ∀ci (|ci| ≈ |c|/n), as well as n ≤ |c|.
The recursion invariant (and thus precondition) for ddmin2 is test(c) = 8 and n ≤ |c|.
Figure 1: Minimizing delta debugging algorithm
Definition 6 (n-minimal test case) A test case c ⊆ C is n-minimal
if ∀c' ⊂ c: |c| - |c'| ≤ n ⇒ test(c') ≠ 8
holds.
A failing test case c composed of |c| lines would thus be 1-minimal
if removing any single line would cause the failure to disappear;
likewise, it would be 3-minimal if removing any combination of
three or less lines would make it work again. If c is |c|-minimal,
then c is minimal in the sense of Definition 5.
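Checking 1-minimality directly is cheap to sketch; the helper below simply tries to remove each single change (test and the change list are assumed as above):

def is_one_minimal(changes, test, FAIL="fail"):
    if test(changes) != FAIL:
        return False
    for i in range(len(changes)):
        if test(changes[:i] + changes[i + 1:]) == FAIL:
            return False         # a strictly smaller failing test case exists
    return True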
Definition 6 gives a first idea of what we should be aiming at. How-
ever, given, say, a 100,000 line test case, we cannot simply remove
each individual line in order to minimize it. Thus, we need an effective
algorithm to reduce our test case efficiently.
4. A MINIMIZING ALGORITHM
Proceed by binary search. Throw away half the input and see if
the output is still wrong; if not, go back to the previous state and
discard the other half of the input.
Brian Kernighan and Rob Pike, The Practice of Programming
What do humans do in order to minimize test cases? They use
binary search. If c contains only one change, then c is minimal by
definition. Otherwise, we partition c into two subsets c 1 and c 2
with similar size and test each of them. This gives us three possible
outcomes:
Reduce to c 1 . The test of c 1 fails-c 1 is a smaller test case.
Reduce to c 2 . The test of c 2 fails-c 2 is a smaller test case.
Ignorance. Both tests pass, or are unresolved-neither c 1 nor c 2
qualify as possible simplifications.
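This manual strategy is easy to sketch as a hypothetical helper that keeps whichever half still fails and gives up as soon as neither half reproduces the failure on its own:

def reduce_by_halving(changes, test, FAIL="fail"):
    while len(changes) >= 2:
        mid = len(changes) // 2
        first, second = changes[:mid], changes[mid:]
        if test(first) == FAIL:
            changes = first          # reduce to c1
        elif test(second) == FAIL:
            changes = second         # reduce to c2
        else:
            break                    # ignorance: neither half fails by itself
    return changes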
In the first two cases, we can simply continue the search in the failing
subset, as illustrated in Table 1. Each line of the diagram shows
a configuration. A number i stands for an included change # i ; a
dot stands for an excluded change. Change 7 is the minimal failing
test case-and it is isolated in just a few steps.
Given sufficient knowledge about the nature of our input, we can
certainly partition any test case into two subsets such that at least
one of them fails the test. But what if this knowledge is insufficient,
or not present at all?
Let us begin with the worst case: after splitting up c into subsets, all
tests pass or are unresolved-ignorance is complete. All we know
is that c as a whole is failing. How do we increase our chances of
getting a failing subset?
. By testing larger subsets of C , we increase the chances that
the test fails-the difference from C is smaller. On the other
hand, a smaller difference means a slower progression-the
test case is not halved, but reduced by a smaller amount.
. By testing smaller subsets of C , we get a faster progression
in case the test fails. On the other hand, the chances that the
test fails are smaller.
These specific methods can be combined by partitioning c into a
larger number of subsets and testing each (small) c i as well as its
(large) complement
c i -until each subset contains only one change,
which gives us the best chance to get a failing test case. The disad-
vantage, of course, is that more subsets means more testing.
This is what can happen. Let n be the number of subsets c 1 , . , c n .
Testing each c i and its complement c - c i gives the following
possible outcomes (Figure 1):
Reduce to subset. If testing any c i fails, then c i is a smaller test
case. Continue reducing c i with two subsets.
Configuration test
. 7 .
Table
1: Quick minimization of test cases
Configuration test
Testing c 1 , c 2
Increase granularity
Testing c 1 , . , c 4
Testing complements
Reduce to
test carried out in an earlier step
Testing complements
Reduce to
Increase granularity
Testing c 1 , . , c 4
c 1 . 2 . 7 8 Testing complements
Reduce to
22 c 1 1 . # Testing c 1 , . , c 3
Testing complements
26
Table
2: Minimizing a test case with increasing granularity
This reduction rule results in a classical "divide and conquer"
approach. If one can identify a smaller part of the test case
that is failure-inducing on its own, then this rule helps in narrowing
down the test case efficiently.
Reduce to complement. If testing any complement c - c i fails, then
c - c i is a smaller test case. Continue reducing c - c i with
n - 1 subsets.
Why do we continue with n - 1 and not two subsets here?
Because splitting c - c i into n - 1 subsets means that the subsets
of c - c i are identical to the subsets c i of c - in other words,
every subset of c eventually gets tested. If we continued with
only two subsets, we would first have to work our way back up
through the intermediate granularities before the next subset
of c would be tested.
Increase granularity. Otherwise (that is, no test failed), try again
with 2n subsets. (Should 2n > |c| hold, try again with |c|
subsets instead, each containing one change.) This results in
at most twice as many tests, but increases chances for failure.
The process is repeated until granularity can no longer be increased
(that is, the next n would be larger than |c|). In this case, we have already
tried removing every single change individually without further
failures: the resulting change set is minimal.
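To make the scheme concrete, here is a compact Python sketch of ddmin over a list of pairwise distinct changes (for example, the line or character indices from the decomposition sketched earlier). The contiguous slicing and the string constant for the failing outcome are illustrative choices, not part of the algorithm's definition, and this is not the WYNOT implementation.

def ddmin(changes, test, FAIL="fail"):
    assert test(changes) == FAIL          # precondition: the full test case fails
    n = 2
    while len(changes) >= 2:
        # split the current change set into n roughly equal subsets
        subsets = [changes[len(changes) * i // n: len(changes) * (i + 1) // n]
                   for i in range(n)]
        reduced = False
        for subset in subsets:
            if test(subset) == FAIL:                      # reduce to subset
                changes, n, reduced = subset, 2, True
                break
            complement = [c for c in changes if c not in subset]
            if test(complement) == FAIL:                  # reduce to complement
                changes, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(changes):                         # every single change was tried
                break
            n = min(2 * n, len(changes))                  # increase granularity
    return changes

With the earlier sketches, a call such as ddmin(list(range(len(lines))), lambda s: test(apply_changes(s, lines))) would return a 1-minimal set of line indices. At n = 2 the complements coincide with the subsets, so a practical version would cache outcomes to avoid repeating tests, as in the walkthrough below.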
As an example, consider Table 2, where the minimal test case consists
of the changes 1, 7, and 8. Any test case that includes only a
subset of these changes results in an unresolved test outcome; a test
case that includes none of these changes passes the test.
We begin with partitioning the total set of changes in two halves-
but none of them passes the test. We continue with granularity
increased to 4 subsets (Step 3-6). When testing the complements,
the set
removing changes 3 and 4. We continue with
splitting
c 2 in three subsets. The next three tests (Steps 9-11) have
already been carried out and need not be repeated (marked with # ).
When testing
can be eliminated. We
increase granularity to 4 subsets and test each (Steps 16-19), before
the last complement
eliminates change 2. Only
changes 1, 7, and 8 remain; Steps 25-27 show that none of these
changes can be eliminated. To minimize this test case, a total of
19 different tests was required.
We close with some formal properties of ddmin. First, ddmin eventually
returns a 1-minimal test case:
Proposition 7 (ddmin minimizes) For any c # C , ddmin(c) is 1-
minimal in the sense of definition 6.
PROOF. According to the ddmin definition (Figure 1), ddmin(c)
returns c only if n # |c| and test(
all subsets of c # c
with |c| - |c # are in {
c n } and test(
the condition of definition 6 applies and c is 1-minimal.
In the worst case, ddmin takes 3|c|
Proposition 8 (ddmin complexity, worst case) The number of tests
carried out by ddmin(c) is 3|c| in the worst case.
PROOF. The worst case can be divided in two phases: First,
every test is inconsistent until testing only the
last complement results in a failure until holds.
. In the first phase, every test is inconsistent. This results in a
re-invocation of ddmin 2 with a doubled number of subsets,
1. The number of tests to be carried out is
. In the second phase, the worst case is testing the last complement
c n fails, and ddmin 2 is re-invoked with ddmin 2 (
1). This results in |c| - 1 calls of ddmin, with two tests per
call, or 2(|c| -
The overall number of tests is thus 4|c|+|c| 2
In practice, however, it is unlikely that an n-character input requires
tests. The "divide and conquer" rule of ddmin takes care
of quickly narrowing down failure-inducing parts of the input:
Proposition 9 (ddmin complexity, best case) If there is only one
failure-inducing change # i # c, and all test cases that include # i
cause a failure as well, then the number of tests t is limited by
PROOF. Under the given conditions, # i must always be in either
c 1 or c 2 , whose test will fail. Thus, the overall complexity is
that of a binary search.
Whether this "best case" efficiency applies depends on our ability
to break down the input into smaller chunks that result in determined
(or better: failing) test outcomes. Consequently, the more
knowledge about the structure of the input we have, the better we
can identify possibly failure-inducing subsets, and the better is the
overall performance of ddmin.
The surprising thing, however, is that even with no knowledge about
the input structure at all, the ddmin algorithm has sufficient per-
formance-at least in the case studies we have examined. This is
illustrated in the following three sections.
5. CASE STUDY:
GCC GETS A FATAL SIGNAL
None of us has time to study a large program
to figure out how it would work if compiled correctly,
much less which line of it was compiled wrong.
- Richard Stallman, Using and Porting GNU CC
Let us now turn to some real-life input. The C program in Figure
not only demonstrates some particular nasty aspects of the
language, it also causes the GNU C compiler (GCC) to crash-at
least, when using version 2.95.2 on Intel-Linux with optimization
enabled. Before crashing, GCC grabs all available memory for its
stack, such that other processes may run out of resources and die. 4
The latter can be prevented by limiting the stack memory available
to GCC, but the effect remains:
4 The authors deny any liability for damage caused by repeating this
experiment.
#define SIZE 20
double mult(double z[], int n)
{ int i ,
for {
return z[n];
void copy(double to[], double from[], int count)
{ int
switch (count % do {
case 0:
case 7:
case
case 5:
case 4:
case 3:
case 2:
case 1:
} while (-n > 0);
return mult(to, 2);
int main(int argc, char *argv[])
{ double x[SIZE], y[SIZE];
double
while (px < x
return copy(y, x , SIZE);
Figure
2: The bug.c program that crashes GNU CC
gcc: Internal compiler error:
program cc1 got fatal signal 11
The GCC error message (and the resulting core dump) help GCC
maintainers only; as ordinary users, we must now narrow down the
failure-inducing input in bug.c-and minimize bug.c in order to
file in a bug report.
In the case of GCC, the minimal test input is the empty input. For
the sake of simplicity, we modeled a change as the insertion of a
single character. This means that
. each change c i becomes the i -th character of bug.c
. C becomes the entire failure-inducing input bug.c
. partitioning C means partitioning the input into parts.
No attempt was made to exploit syntactic or semantic knowledge
about C programs; consequently, we expected a large number
input
size
tests executed
tcmin log
bug.c
Figure
3: Minimizing GCC input bug.c
of test cases to be invalid C programs.
To minimize bug.c, we implemented the ddmin algorithm of Figure
1 into our WYNOT prototype 5 . The test procedure would create
the appropriate subset of bug.c, feed it to GCC, return 8 iff GCC
had crashed, and 4 otherwise. The results of this WYNOT run are
shown in Figure 3.
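Such a test procedure can be sketched as below; the compiler invocation, the detection of the crash via the "Internal compiler error" message or a signal exit, and the temporary-file handling are illustrative assumptions rather than the exact WYNOT setup:

import os, subprocess, tempfile

def gcc_crashes(candidate_source):
    with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
        f.write(candidate_source)
        path = f.name
    try:
        proc = subprocess.run(["gcc", "-O", "-c", path, "-o", os.devnull],
                              capture_output=True, text=True)
        crashed = "Internal compiler error" in proc.stderr or proc.returncode < 0
        return "fail" if crashed else "pass"    # 8 iff GCC crashed, 4 otherwise
    finally:
        os.remove(path)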
After the first two tests, WYNOT has already reduced the input size
from 755 characters to 377 and 188 characters, respectively-the
test case now only contains the mult function. Reducing mult, how-
ever, takes time: only after 731 more tests (and 34 seconds) 6 do we
get a test case that can not be minimized any further. Only 77 characters
are left:
t(double z[],int n){int i ,
0);}return z[n];}
This test case is 1-minimal-no single character can be removed
without removing the failure. Even every single superfluous white-space
has been removed, and the function name has shrunk from
mult to a single t . (At least, we now know that neither whitespace
nor function name were failure-inducing!)
Figure
4 shows an excerpt from the bug.c test log. (The character
indicates an omitted character with regard to the minimized in-
put.) We see how the ddmin algorithm tries to remove every single
change (= character) in order to minimize the input even further-
but every test results in a syntactically invalid program.
t(double z[],int n){int i, j
t(double z[],int n){int i, j
t(double z[],int n){int i, j
t(double z[],int n){int i, j
t(double z[],int n){int i, j
t(double z[],int n){int i, j
t(double z[],int n){int i, j
Figure
4: Excerpt from the bug.c test log
As GCC users, we can now file this in as a minimal bug report. But
where in GCC does the failure actually occur? We already know
"Worked Yesterday, NOt Today"
6 All times were measured on a Linux PC with a 500 MHz Pentium
III processor. The time given is the CPU user time of our
WYNOT prototype as measured by the UNIX kernel; it includes all
spawned child processes (such as the GCC run in this example).100 1
options
tests executed
tcmin log
GCC Options
Figure
5: Minimizing GCC options
that the failure is associated with optimization. Could it be possible
to influence optimization in a way that the failure disappears?
The GCC documentation lists 31 options that can be used to influence
optimization on Linux, shown in Table 3. It turns out that
applying all of these options causes the failure to disappear:
-fno-defer-pop .-fstrict-aliasing bug.c
This means that some option(s) in the list prevent the failure. We
can use test case minimization in order to find the preventing op-
tion(s). This time, each c i stands for a GCC option from Table 3.
Since we want to find an option that prevents the failure, the test
outcome is inverted: test returns 4 if GCC crashes and 8 if GCC
works fine.
This WYNOT run is a straight-forward "divide and conquer" search,
shown in Figure 5. After 7 tests (and less than a second), the single
option -ffast-math is found which prevents the failure:
Unfortunately, the -ffast-math option is a bad candidate for working
around the failure, because it may alter the semantics of the
program. We remove -ffast-math from the list of options and make
another WYNOT run. Again after 7 tests, it turns out the option
-fforce-addr also prevents the failure:
-ffloat-store -fno-default-inline -fno-defer-pop
-fforce-mem -fforce-addr -fomit-frame-pointer
-fno-inline -finline-functions -fkeep-inline-functions
-fkeep-static-consts -fno-function-cse -ffast-math
-fstrength-reduce -fthread-jumps -fcse-follow-jumps
-fcse-skip-blocks -frerun-cse-after-loop -frerun-loop-opt
-fgcse -fexpensive-optimizations -fschedule-insns
-ffunction-sections -fdata-sections
-fcaller-saves -funroll-loops -funroll-all-loops
-fmove-all-movables -freduce-all-givs -fno-peephole
-fstrict-aliasing
Table
3: GCC optimization options
input
size
tests executed
tcmin log
flex t16
Figure
Minimizing FLEX fuzz input
Are there any other options that prevent the failure? Running GCC
with the remaining 29 options shows that the failure is still there;
so it seems we have identified all failure-preventing options. And
this is what we can send to the GCC maintainers:
1. The minimal test case
2. "The failure occurs only with optimization."
3. "-ffast-math and -fforce-addr prevent the failure."
Still, we cannot identify a place in the GCC code that causes the
problem. On the other hand, we have identified as many failure circumstances
as we can. In practice, program maintainers can easily
enhance their automated regression test suites such that the failure
circumstances are automatically simplified for any failing test case.
6. CASE STUDY: MINIMIZING FUZZ
If you understand the context in which a problem occurs,
you're more likely to solve the problem completely
rather than only one aspect of it.
Steve McConnell, Code Complete
In a classical experiment [6, 7], Bart Miller and his team examined
the robustness of UNIX utilities and services by sending them fuzz
input-a large number of random characters. The studies showed
that, in the worst case, 40% of the basic programs crashed or went
into infinite loops when being fed with fuzz input.
We wanted to know how well the ddmin algorithm performs in minimizing
the fuzz input sequences. We examined a subset of the
UNIX utilities listed in Miller's paper: NROFF (format documents
for display), TROFF (format documents for typesetter), FLEX (fast
lexical analyzer generator), CRTPLOT (graphics filter for various
plotters), UL (underlining filter), and UNITS (convert quantities).
We set up 16 different fuzz inputs, differing in size (10^3 to 10^6 characters) and content (whether all characters or only printable characters were included, and whether NUL characters were included or not). As shown in Table 4, Miller's results still apply, at least on Sun's Solaris 2.6 operating system: the utilities crashed in 42 of the test runs (43%).
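As an illustration of this setup (our own sketch, not Miller's fuzz generator; sprinkling a NUL character every 100 positions is an arbitrary assumption), the 16 inputs can be produced as every combination of size, character range, and NUL inclusion:

import random, string

def fuzz(size, printable_only, include_nul):
    # choose the character range
    alphabet = string.printable if printable_only else [chr(c) for c in range(1, 256)]
    chars = [random.choice(alphabet) for _ in range(size)]
    if include_nul:
        for k in range(0, size, 100):   # sprinkle in NUL characters
            chars[k] = "\0"
    return "".join(chars)

# 4 sizes x 2 character ranges x 2 NUL settings = 16 fuzz inputs t1..t16
inputs = [fuzz(s, p, n)
          for s in (10**3, 10**4, 10**5, 10**6)
          for p in (True, False)
          for n in (True, False)]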
Figure 7: Minimizing CRTPLOT fuzz input (input size vs. tests executed, CRTPLOT with input t16)
We applied our WYNOT tool in all 42 cases to minimize the failure-
inducing fuzz input. Table 5 shows the resulting input sizes; Table 6
lists the number of tests required. 7 Depending on the crash cause,
the programs could be partitioned into two groups:
. The first group of programs shows obvious buffer overrun
problems.
- FLEX, the most robust utility, crashes on sequences of 2,121 or more non-newline and non-NUL characters.
- UL crashes on sequences of 516 or more printable non-newline characters (t5-t8, t13-t16).
- UNITS crashes on sequences of 77 or more 8-bit characters.
Figure 6 shows the first 500 tests of the WYNOT run for FLEX and t16. After 494 tests, the remaining size of 2,122 characters
is already close to the final size; however, it takes more
than 10,000 further tests to eliminate one more character.
. The second group of programs appears vulnerable to random
commands.
- NROFF and TROFF crash
# on malformed commands like "\\DJ%0F" 8
# on 8-bit input such as "\302\n" (TROFF, ...).
- CRTPLOT crashes on one-letter inputs such as "t".
The WYNOT run for CRTPLOT and t16 is shown in Figure 7. It takes 24 tests to minimize the fuzz input of 10^6 characters to the single failure-inducing character.
Again, all test runs can be (and have been) entirely automated. This
allows for massive automated stochastic testing, where programs
are fed with fuzz input in order to reveal defects. As soon as a
failure is detected, input minimization can generalize the large fuzz
input to a minimal bug report.
Table 6 also includes repeated tests which have been carried out in earlier steps. On the average, the number of actual (non-repeated) tests is 30% smaller.
8 All input is shown in C string notation.
Table 4: Test outcomes of UNIX utilities subjected to fuzz input (✔ = test passed, ✘ = test failed)
Table 5: Size of minimized failure-inducing fuzz input (by input name, character range, and whether NUL characters were included)
7. CASE STUDY:
MOZILLA CANNOT PRINT
When you've cut away as much HTML, CSS, and JavaScript as you
can, and cutting away any more causes the bug to disappear,
you're done.
Mozilla BugAThon call
As a last case study, we wanted to simplify a real-world Mozilla test
case and thus contribute to the Mozilla BugAThon. A search in
Bugzilla, the Mozilla bug database, shows us bug #24735, reported
by anantk@yahoo.com:
Ok the following operations cause mozilla to crash consistently on my machine
# Go to bugzilla.mozilla.org
# Select search for bug
# Print to file setting the bottom and right margins to .50 (use the file /var/tmp/netscape.ps)
# Once it's done printing do the exact same thing again on the same file (/var/tmp/netscape.ps)
# This causes the browser to crash with a segfault
In this case, the Mozilla input consists of two items: The sequence
of input events-that is, the succession of mouse motions, pressed
keys, and clicked buttons-and the HTML code of the erroneous
WWW page. We used the XLAB capture/replay tool [13] to run
Mozilla while capturing all user actions and logging them to a file.
We could easily reproduce the error, creating an XLAB log with
711 recorded X events. Our WYNOT tool would now use XLAB to
replay the log and feed Mozilla with the recorded user actions, thus
automating Mozilla execution.
In a first run, we wanted to know whether all actions in the bug
report were actually necessary. We thus subjected the log to test
case minimization, in order to find a failure-inducing minimum of
user actions. Out of the 711 X events, only 95 were related to user
actions-that is, moving the mouse pointer, pressing or releasing
the mouse button, and pressing or releasing a key on the keyboard.
These 95 user actions were subjected to minimization.
The results of this run are shown in Figure 9. After 82 test runs, only 3 out of the 95 user actions are left:
1. Press the P key while the Alt modifier key is held. (Invoke
the Print dialog.)
2. Press mouse button 1 on the Print button without a modifier.
(Arm the Print button.)
3. Release mouse button 1. (Start printing.)
User actions removed include moving the mouse pointer, selecting
the Print to file option, altering the default file name, setting the
print margins to .50, and releasing the P key before clicking on
Print-all this is irrelevant in producing the failure. 9
Since the user actions can hardly be further generalized, we turn our
attention to another input source-the failure-inducing HTML code.
The original Search for bug page has a length of 39094 characters
or 896 lines. In order to minimize the HTML code, we chose a hierarchical approach. In a first run, we wanted to minimize the
number of lines (that is, each c i was identified with a line); in a later
run, we wanted to minimize the failure-inducing line(s) according
to single characters.
9 It is relevant, though, that the mouse button be pressed before it is
released.
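A sketch of this two-stage minimization (assuming a ddmin(changes, test) function as described earlier, and a test predicate that loads and prints the given HTML in Mozilla) could look as follows:

def minimize_html(html_text, test):
    # First pass: treat each line as one change c_i.
    lines = html_text.splitlines(keepends=True)
    kept_lines = ddmin(lines, lambda subset: test("".join(subset)))
    # Second pass: minimize the surviving line(s) character by character.
    chars = list("".join(kept_lines))
    kept_chars = ddmin(chars, lambda subset: test("".join(subset)))
    return "".join(kept_chars)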
Table 6: Number of required test runs
Figure 8: Minimizing Mozilla HTML input (number of lines of query.html vs. tests executed)
The results of the lines run are shown in Figure 8. After 57 test runs, the ddmin algorithm minimizes the original 896 lines to a 1-line input: a single SELECT tag with its attributes. This is the HTML input which causes Mozilla to crash when being
printed. As in the GCC example of Section 5, the actual failure-
inducing input is very small. Further minimization 10 reveals that
the attributes of the SELECT tag are not relevant for reproducing
the failure, either, such that the single input "<SELECT>" already suffices for reproducing the failure. Overall, we obtain the
following self-contained minimized bug report:
# Create a HTML page containing "<SELECT>"
# Load the page and print it using Alt+P and Print.
# The browser crashes with a segmentation fault.
As long as the bug reports can be reproduced, this minimization
procedure can easily be repeated automatically with the 5595 other
bugs listed in the Bugzilla database 11 . All one needs is a HTML
input, a sequence of user actions, an observable failure-and a little
time to let the computer simplify the failure-inducing input.
10 This minimization was done by hand. We apologize.
11 As of 14 Feb 2000.
Figure 9: Minimizing Mozilla user actions (number of X events vs. tests executed)
8. RELATED WORK
When you have two competing theories which make exactly the
same predictions, the one that is simpler is the better.
As stated in the introduction, we are unaware of any other technique
that would automatically simplify test cases to determine failure-
inducing input. One important exception is the simplification of
test cases which have been artificially produced. In [11], Don Slutz
describes how to stress-test databases with generated SQL state-
ments. After a failure has been produced, the test cases had to be
simplified-after all, a failing 1,000-line SQL statement would not
be taken seriously by the database vendor, but a 3-line statement would. This simplification was realized simply by undoing the earlier production steps and testing whether the failure still occurred.
In general, delta debugging determines circumstances that are relevant
for producing a failure (in our case, parts of the program in-
put.) In the field of automated debugging, such failure-inducing
circumstances have almost exclusively been understood as failure-
inducing statements during a program execution. The most significant
method to determine statements relevant for a failure is program
slicing-either the static form obtained by program analysis
[14, 12] or the dynamic form applied to a specific run of the
program [1, 3].
The strength of analysis is that several potential failure causes can
be eliminated due to lack of data or control dependency. This
does not suffice, though, to check whether the remaining potential
causes are relevant or not for producing a given failure. Only by
experiment (that is, testing) can we prove that some circumstance
is relevant-by showing that there is some alteration of the circumstance
that makes the failure disappear. When it comes to concrete
failures, program analysis and testing are complementary: analysis
disproves causality, and testing proves it.
It would be nice to see how far systematic testing and program
analysis could work together and whether delta debugging could
be used to determine failure-inducing statements as well. Just as
determining which parts of the input were relevant in producing the
failure, debugging could determine the failure-relevant statements
in the program. Critical slicing [2] is a related approach
which is test-based like delta debugging; additional data flow analysis
is used to eliminate circumstantial positives.
The ddmin algorithm presented in this paper is an alternative to
the original delta debugging algorithm dd+ presented in [15]. Like ddmin, dd+ takes a set of changes and minimizes it according to a given test; in [15], these changes affected the program code and were obtained by comparing two program versions.
The main differences between ddmin and dd+ are:
. dd + determines the minimal difference between a failing and
a non-failing configuration, while ddmin minimizes the difference
between a failing and an empty configuration.
. dd + is not well-suited for failures induced by a large combination
of changes. In particular, dd + does not guarantee a
1-minimal subset, which is why it is not suited for minimizing
test cases.
. dd+ assumes monotony: that is, whenever test(c) = ✔ holds, then test(c') = ✔ holds for every subset c' of c as well. This assumption, which was found to be useful for changes to program code, gave dd+ a better performance when most tests produced determinate results.
We recommend ddmin as a general replacement for dd+. To exploit monotony in ddmin, one can make test(c) return ✔ whenever a superset of c has already passed the test.
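A sketch of this optimization (our own illustration; ddmin, test, and the ✔/✘ outcomes are assumed from the surrounding text, and the changes in a configuration are assumed to be hashable) is a caching wrapper around the test function:

PASS, FAIL = "PASS", "FAIL"   # stand-ins for the ✔ and ✘ outcomes
passed_supersets = []

def monotone_test(config, test):
    frozen = frozenset(config)
    # If a superset of config already passed, monotony lets us answer
    # PASS without running the test again.
    if any(frozen <= s for s in passed_supersets):
        return PASS
    outcome = test(config)
    if outcome == PASS:
        passed_supersets.append(frozen)
    return outcome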
9. FUTURE WORK
If you get all the way up to the group-signed T-Shirt, you can
qualify for a stuffed animal as well by doing 12 more.
Mozilla BugAThon call
Our future work will concentrate on the following topics:
Domain-specific simplification methods. Knowledge about the
input structure can very much enhance the performance of
the ddmin algorithm. For instance, valid program inputs are
frequently described by grammars; it would be nice to rely
on such grammars in order to exclude syntactically invalid
input right from the start. Also, with a formal input descrip-
tion, one could replace input by smaller alternate input rather
than simply cutting it away. In the GCC example, one could
try to replace arithmetic expressions by constants, or program
blocks by no-ops; HTML input could be reduced according
to HTML structure rules.
Optimization. In general, the abstract description of the ddmin algorithm
leaves a lot of flexibility in the actual implementation
and thus provides "hooks" for several domain-specific
optimizations:
. The implementation can choose how to partition c into
subsets c i . This is the place where knowledge about the
structure of the input comes in handy.
. The implementation can choose which subset to test
first. Some subsets may be more likely to cause a failure
than others.
. The implementation can choose whether and how to
handle multiple independent failure-inducing inputs-
that is, the case where there are several subsets c i with test(c i) = ✘. Options include
- to continue with the first failing subset,
- to continue with the smallest failing one, or
- to simplify each individual failing subset.
Our implementation currently goes for the first failing
subset only and thus reports only one subset. The reason
is economy: it is wiser to fix the first failure before
checking for further similar failures.
Program analysis. So far, we have treated all tested programs as
black boxes, not referring to source code at all. However,
there are several program analysis methods available that can
help in relating input to a specific failure, or that can simply
tell us which parts of the input are related (and can thus be
changed in one run) and which others not. A simple dynamic
slice of the failing test case can tell us which input actually
influenced the program and which input never did. The combination
of input-centered and execution-centered debugging
methods remains to be explored.
Maximizing passing test cases. Right now, ddmin makes no distinction
between passing and unresolved tests. There are several
settings, however, where such a distinction may be use-
ful, and where we could minimize the difference between a
passing and a failing test-not only by minimizing the failure-
inducing input, but also by maximizing the passing input. We
expect that such a two-folded approach pinpoints the failure
faster and more precisely.
Other failure-inducing circumstances. Changing the input of the
program is only one means to influence its execution. As
stated in Section 2, a change δ i can stand for any change in the
circumstances that influences the execution of the program.
We will thus research whether delta debugging is applicable
to further failure-inducing circumstances such as executed
statements, control predicates or thread schedules.
10. CONCLUSION
Debugging is still, as it was thirty years ago, a matter of trial and error.
Henry Lieberman, The Debugging Scandal
We have shown how the ddmin algorithm simplifies failure-inducing
input, based on an automated testing procedure. The method can
be (and has been) applied in a number of settings, finding failure-
inducing parts in the program invocation (GCC options), in the program
input (GCC, Fuzz, and Mozilla input), or in the sequence of
user interactions (Mozilla user actions).
We recommend that automated test case simplification be an integrated part of automated testing. Each time a test fails, delta debugging could be used to simplify the circumstances of the failure. Given sufficient testing resources and a reasonable choice of changes δ i that influence the program execution, the ddmin algorithm presented in this paper provides a simplification that is straight-forward and easy to implement.
In practice, testing and debugging typically come in pairs. How-
ever, in debugging research, testing has played a very minor role.
This is surprising, because re-testing a program under changed circumstances
is a common debugging approach. Delta debugging
does nothing but to automate this process. Eventually, we expect
that several debugging tasks can in fact be stated as search and
minimization problems, based on automated testing-and thus be
solved automatically.
More details on the case studies listed in this paper can be found
in [4]. Further information on delta debugging, including the full
WYNOT implementation, is available at
http://www.fmi.uni-passau.de/st/dd/ .
Acknowledgements
Mirko Streckenbach provided helpful insights
on UNIX internals. Tom Truscott pointed us to the GCC error.
Holger Cleve, Jens Krinke and Gregor Snelting provided valuable
comments on earlier revisions of this paper. Special thanks go to
the anonymous reviewers for their constructive comments.
11.
--R
Dynamic program slicing.
Critical slicing for software fault localization.
Minimierung fehlerverursachender Eingaben.
An empirical study of the reliability of UNIX utilities.
Fuzz revisted: A re-examination of the reliability of UNIX utilities and services
Mozilla web site.
Mozilla web site: The Gecko BugAThon.
Massive stochastic testing of SQL.
A survey of program slicing techniques.
Programmers use slices when debugging.
--TR
Dynamic program slicing
An empirical study of the reliability of UNIX utilities
Critical slicing for software fault localization
Foundations of software engineering
Yesterday, my program worked. Today, it does not. Why?
An efficient relevant slicing method for debugging
Programmers use slices when debugging
Massive Stochastic Testing of SQL
--CTR
Simon Carter , Malcolm Graham , Paul Strooper , Zhiguo Yuan, Mutation analysis to verify feature matrices for isolating errors in simulation models, Proceedings of the twenty-sixth Australasian conference on Computer science: research and practice in information technology, p.29-34, February 01, 2003, Adelaide, Australia
Zhang , Neelam Gupta , Rajiv Gupta, Locating faults through automated predicate switching, Proceeding of the 28th international conference on Software engineering, May 20-28, 2006, Shanghai, China
Zhang , Haifeng He , Neelam Gupta , Rajiv Gupta, Experimental evaluation of using dynamic slices for fault location, Proceedings of the sixth international symposium on Automated analysis-driven debugging, p.33-42, September 19-21, 2005, Monterey, California, USA
Andy Podgurski , David Leon , Patrick Francis , Wes Masri , Melinda Minch , Jiayang Sun , Bin Wang, Automated support for classifying software failure reports, Proceedings of the 25th International Conference on Software Engineering, May 03-10, 2003, Portland, Oregon
Kai-hui Chang , V. Bertacco , I. L. Markov, Simulation-based bug trace minimization with BMC-based refinement, Proceedings of the 2005 IEEE/ACM International conference on Computer-aided design, p.1045-1051, November 06-10, 2005, San Jose, CA
Zhang , Neelam Gupta , Rajiv Gupta, A study of effectiveness of dynamic slicing in locating real faults, Empirical Software Engineering, v.12 n.2, p.143-160, April 2007
Zhang , Neelam Gupta , Rajiv Gupta, Pruning dynamic slices with confidence, ACM SIGPLAN Notices, v.41 n.6, June 2006
Gregg Rothermel , Sebastian Elbaum , Alexey Malishevsky , Praveen Kallakuri , Brian Davia, The impact of test suite granularity on the cost-effectiveness of regression testing, Proceedings of the 24th International Conference on Software Engineering, May 19-25, 2002, Orlando, Florida
Zhang , Neelam Gupta , Rajiv Gupta, Locating faulty code by multiple points slicing, SoftwarePractice & Experience, v.37 n.9, p.935-961, July 2007
Testing malware detectors, ACM SIGSOFT Software Engineering Notes, v.29 n.4, July 2004
Mark Last , Menahem Friedman , Abraham Kandel, The data mining approach to automated software testing, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Sebastian Elbaum , Hui Nee Chin , Matthew B. Dwyer , Jonathan Dokulil, Carving differential unit test cases from system test cases, Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering, November 05-11, 2006, Portland, Oregon, USA
Gregg Rothermel , Sebastian Elbaum , Alexey G. Malishevsky , Praveen Kallakuri , Xuemei Qiu, On test suite composition and cost-effective regression testing, ACM Transactions on Software Engineering and Methodology (TOSEM), v.13 n.3, p.277-331, July 2004 | automated debugging;combinatorial testing |
349130 | Improving the precision of INCA by preventing spurious cycles. | The Inequality Necessary Condition Analyzer (INCA) is a finite-state verification tool that has been able to check properties of some very large concurrent systems. INCA checks a property of a concurrent system by generating a system of inequalities that must have integer solutions if the property can be violated. There may, however, be integer solutions to the inequalities that do not correspond to an execution violating the property. INCA thus accepts the possibility of an inconclusive result in exchange for greater tractability. We describe here a method for eliminating one of the two main sources of these inconclusive results. | INTRODUCTION
Finite-state verification tools deduce properties of finite-state models of computer systems. They can be used to check such properties as freedom from deadlock, mutually exclusive use of a resource, and eventual response to a request. If the model represents all the executions of a system (perhaps by making use of some abstraction), a finite-state verification tool can take into account all the executions of the system. Moreover, finite-state verification tools can be applied at any stage of system development at which an appropriate model can be constructed. Such tools thus represent an important complement to testing, especially for concurrent systems where nondeterministic behavior can lead to very different executions arising from the same input data.
Research partially supported by the National Science Foundation under grant CCR-9708184. The views, findings, and conclusions presented here are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the National Science Foundation, or the U.S. Government.
The main obstacle to finite-state verification of concurrent systems is the state explosion problem: the number of states a concurrent system can reach is, in general, exponential in the number of concurrent processes in the system. This problem confronts the analyst immediately: even for small systems, the number of reachable states can be large enough so that a straightforward approach that examines each state is completely infeasible, and complexity results tell us that there is no way to avoid it completely. Every method for finite-state verification of concurrent systems must pay some price, in accuracy or range of application, for practicality.
The Inequality Necessary Conditions Analyzer (INCA) is a finite-state verification tool that has been used to check properties of some systems with very large state spaces. The INCA approach is to formulate a set of necessary conditions for the existence of an execution of the program that violates the property. If the conditions are inconsistent, no execution can violate the property. If the conditions are consistent, the analysis is inconclusive; since the conditions are necessary but not sufficient, it may still be the case that no execution of the program can violate the property. INCA thus accepts the possibility of an inconclusive result in exchange for greater tractability. There are two main sources of inconclusive results. In this paper, we show how one of these, caused by cycles in finite-state automata representing the components of the concurrent system, can be eliminated at what seems to be only moderate cost.
In the next section, we describe the INCA approach. Section 3 explains our technique for improving INCA's precision, and the fourth section presents some preliminary data on its application. The final section summarizes the paper and discusses other issues related to the precision of INCA.
2. INCA
A complete discussion of the INCA approach, along with a
careful analysis of its expressive power, is contained in [8]. In
this paper, we will use a small (and quite contrived) example
to sketch the basic INCA approach and show how certain
cycles in the automata corresponding to the components of
a concurrent system can lead to imprecision in the INCA
analysis. We refer readers who want more detail to [8].
2.1 Basic Approach
The basic INCA approach is to regard a concurrent system as a collection of communicating finite-state automata (FSAs). Transitions between states in these FSAs correspond to events in an execution of the system. INCA treats each FSA as a network with flow, and regards each occurrence of a transition from state s to state t, corresponding to an event e, as a unit of flow from node s to node t. The sequence of transitions in a particular FSA corresponding to events in a segment of an execution of the system thus represents a flow from one state of the FSA to another.
To check a property of a concurrent system using INCA, an analyst specifies the ways that an execution might violate the property in terms of a sequence of segments of an execution. Suppose that an analyst wants to show that event b can never be preceded by event a in any execution of the system. A violation of this property is an execution in which a occurs and then b occurs. In INCA this could be specified as a single segment running from the start of the execution until the occurrence of a b, with the requirement that an a occur somewhere in the segment. (It could also be specified as a sequence of two segments, the first running from the start of the execution until an occurrence of an a, and the second starting immediately after the first and ending with a b. The former specification is generally more efficient, but the latter may provide additional precision in some cases. See Section 2.2.) INCA provides a query language allowing the analyst to specify various aspects of the segments (called "intervals" in the INCA query language) of execution.
By generating the equations describing flow within each FSA (requiring that the flow into a node equal the flow out) according to the specified sequence of segments of a system execution, and adding equations and inequalities relating certain transitions in different FSAs according to the semantics of communication in the system, INCA produces a system of equations and inequalities. Any execution that satisfies the analyst's specification (and therefore violates the property being checked) corresponds to an integer solution of this system of equations and inequalities. INCA then uses standard integer linear programming (ILP) methods to determine whether there is an integer solution. If no integer solution exists, no execution can violate the property, and the property holds for all executions of the concurrent system. If there is an integer solution, however, we do not know that the property can be violated. The system of equations and inequalities represents only necessary conditions for the existence of an execution violating the property, and it is possible for a solution to exist that does not correspond to a real execution.
To see more concretely how this works, consider the Ada
program shown in Figure 1. This program describes three
concurrent processes (tasks). Task t1 begins by rendezvous-
ing with task t2 at the entry c. It then enters a loop. At
the select statement, t1 nondeterministically chooses to rendezvous
with t2 at entry a or with t3 at entry b, if both are
ready to communicate at the appropriate entries. If t1 accepts a communication from t2 at entry a, it then enters a loop in which it accepts rendezvous at entry a until it accepts one at entry c. If t1 instead accepts a communication from t3 at entry b, it then tries forever to repeatedly rendezvous with t2 at entry a.

package simple is
  task t1 is
    entry a;
    entry b;
    entry c;
  end t1;
  task t2 is end t2;
  task t3 is end t3;
end simple;

package body simple is
  task body t1 is
  begin
    accept c;
    loop
      select
        accept a;
        loop
          select
            accept a;
          or
            accept c;
            exit;
          end select;
        end loop;
      or
        accept b;
        loop
          accept a;
        end loop;
      end select;
    end loop;
  end t1;

  task body t2 is
  begin
    t1.c;
    loop
      t1.a;
    end loop;
  end t2;

  task body t3 is
  begin
    t1.b;
  end t3;
end simple;

Figure 1: A small example
Figure 2 shows the FSAs constructed by INCA for this program. The states and transitions are numbered for reference. Each transition in this example represents the occurrence of a rendezvous between two tasks; in the figure, each transition is labeled with the entry at which the corresponding rendezvous takes place.
Suppose that we wish to check that an occurrence of a rendezvous at entry b cannot be preceded by a rendezvous at entry a. As described earlier, we may specify the violation as a segment of an execution running from the start of execution until the occurrence of a rendezvous at b and containing a rendezvous at a. The flow equations for each task will then describe the possible flows from the initial state of the task to one of the states in which that task could be at the end of the segment.
Since the segment ends with a rendezvous at the entry b, represented by the transition numbered 2 in the FSA corresponding to task t1 and the transition numbered 9 in the FSA corresponding to task t3, we know that the FSA t1 must be in state 3 and the FSA t3 must be in state 8 at the end of the segment. Our flow equations for t1 therefore describe flow starting in state 1 and ending in state 3, while the flow equations for t3 describe flow starting in state 7 and ending in state 8. For t2, the fact that a rendezvous at a occurs in the segment implies that that FSA must be in state 6 at the end of the segment, so the flow equations for t2 describe flow from state 5 to state 6.
Figure 2: FSAs for example (transitions labeled with entry names)
To produce these flow equations, let x_i be a variable measuring the flow along the transition numbered i. At each state, we generate an equation setting the flow in equal to the flow out. We must, however, take into account the implicit flow of 1 into the initial state of each FSA and the implicit flow of 1 out of the end state of the flow. Thus, for example, the equation for state 1 is 1 = x1, since the flow in is 1 because state 1 is the initial state and the only flow out is on transition 1. Similarly, the equation for state 8 is x9 = 1, since the only flow in is on transition 9 and there is implicit flow out of 1 since the flow in this FSA ends in state 8.
To complete the system of equations and inequalities, we must add equations to reflect the fact that the two tasks participating in a rendezvous must agree on the number of times it occurs. For instance, we need the equation x3 + x4 + x5 = x8, saying that the number of occurrences of the rendezvous at entry a in the FSA for t1 is the same as in the FSA for t2. We also need an inequality to express the requirement that there is at least one occurrence of a rendezvous at a. We use x8 >= 1 to state this. The full system of equations and inequalities used to check the property that a rendezvous at entry b cannot be preceded by a rendezvous at entry a is shown in Figure 3. (The description here is actually somewhat over-simplified; INCA performs several optimizations to reduce the size of the system of inequalities and the real system of inequalities produced by INCA would be smaller. For example, INCA would observe that there cannot be flow along transition 3 in a violating execution (because the segment of execution must end with transition 2), and would eliminate the variable x3 from the system. It would also do a form of constant propagation to eliminate other variables and equations.)
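To make the construction concrete, the sketch below encodes the system of Figure 3 using the PuLP modeling library (an assumption of ours; INCA itself emits an MPS file that is solved by CPLEX, as described in Section 4). Asking the solver for any integer solution checks whether the necessary conditions are satisfiable.

from pulp import LpProblem, LpVariable, LpStatus, lpSum, LpMinimize

prob = LpProblem("inca_example", LpMinimize)
x = {i: LpVariable(f"x{i}", lowBound=0, cat="Integer") for i in range(1, 10)}
prob += lpSum([])  # dummy objective: we only ask for feasibility

# Flow equations (flow in = flow out, with implicit flow 1 into each
# initial state and out of each end state of the segment).
prob += x[1] == 1                      # state 1
prob += x[1] + x[6] == x[2] + x[4]     # state 2
prob += x[2] + x[3] == x[3] + 1        # state 3
prob += x[4] + x[5] == x[5] + x[6]     # state 4
prob += x[7] == 1                      # state 5
prob += x[7] + x[8] == x[8] + 1        # state 6
prob += x[9] == 1                      # state 7
prob += x[9] == 1                      # state 8

# Communication equations and the requirement that a occurs.
prob += x[3] + x[4] + x[5] == x[8]     # entry a
prob += x[2] == x[9]                   # entry b
prob += x[1] + x[6] == x[7]            # entry c
prob += x[8] >= 1                      # a occurs at least once

prob.solve()
print(LpStatus[prob.status])  # an integer (spurious) solution exists here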
Flow Equations:
State 1: 1 = x1
State 2: x1 + x6 = x2 + x4
State 3: x2 + x3 = x3 + 1
State 4: x4 + x5 = x5 + x6
State 5: 1 = x7
State 6: x7 + x8 = x8 + 1
State 7: 1 = x9
State 8: x9 = 1
Communication Equations:
Entry a: x3 + x4 + x5 = x8
Entry b: x2 = x9
Entry c: x1 + x6 = x7
Requirement Inequality:
a occurs: x8 >= 1
Figure 3: System of equations and inequalities for example
Essentially all research on finite-state verification tools can
be viewed as aimed at ameliorating the state explosion problem for some interesting systems and properties. The approach taken by INCA avoids enumerating the reachable states of the system and is inherently compositional, in the sense that the equations and inequalities are generated from the automata corresponding to the individual processes, rather than from a single automaton representing the full concurrent system. The size of the system of equations and inequalities is essentially linear in the number of processes in the system (assuming the size of each process is bounded). Furthermore, the use of properly chosen cost functions in solving the problems can guide the search for a solution. ILP is itself an NP-hard problem in general, and the standard techniques for solving ILP problems (branch-and-bound methods) are potentially exponential. In practice, however, the ILP problems generated from concurrent systems have large totally unimodular subproblems and seem particularly easy to solve. Experience suggests that the time to solve these problems grows approximately quadratically with the size of the system of inequalities (and thus with the number of processes in the system).
Comparisons of this approach with other finite-state verification methods [2, 3, 4, 5] show that the performance of each method varies considerably with the system and property being verified, but that INCA frequently performs as well as, or better than, such tools as SPIN and SMV. The INCA approach has also been extended to check timing properties of real-time systems [1, 6] and to prove trace equivalence of certain classes of systems [7].
2.2 Sources of Imprecision
The systems of equations and inequalities generated by INCA represent necessary conditions for there to be a violation of the property being verified. As noted earlier, however, they only represent necessary, not sufficient, conditions. A solution of the system of equations and inequalities may not correspond to an actual execution.
There are two main reasons for this. The first has to do with the order in which events occur. Strictly speaking, the equations and inequalities generated by INCA refer only to the total number of occurrences of the various events in each segment of the execution, and do not directly impose restrictions on the order in which those events occur within the segment. In fact, the flow equations for a single FSA typically imply fairly strong conditions on order, but the communication equations relating the occurrence of events in different FSAs do not impose strong restrictions on the order of occurrence of events from different processes. To see why, consider a system comprising two processes. The first process begins by trying to communicate with the second process on channel A and then, after completing that communication, tries to communicate with the second process on channel B. The second process tries to complete the communications in the reverse order. This system will obviously deadlock, but the equations generated by INCA would say only that the number of communications on each channel in the first process is equal to the number in the second process, allowing a solution in which each communication occurs. (This is a slight over-simplification. INCA would actually detect the deadlock in this case, but not in more complicated examples with several processes.) The only mechanism INCA provides for directly constraining the order of events in different processes is the use of additional segments of the execution. While this is often enough to eliminate solutions that do not correspond to real executions of the system, it is expensive and restricts the range of application of INCA. We will return to this point in the final section of this paper.
The second source of imprecision is the existence of cycles in the FSAs. Consider the flow equation for state 3 that is shown in Figure 3. Transition 3 is a self-loop at state 3, and flow along that transition counts both as flow into state 3 and out of state 3. The equation x2 + x3 = x3 + 1 does not constrain the variable x3 at all; we can simply cancel the x3 terms. Similarly, the variables x5 and x8 are not constrained by the flow equations in which they appear. These variables are constrained only by the communication equation x3 + x4 + x5 = x8. Since three of these variables are otherwise unconstrained, this equation does not restrict the solution set.
In fact, although the system of Figure 1 has no execution in which a prefix ending with a rendezvous at entry b contains a rendezvous at entry a, there is a solution to the system of equations and inequalities shown in Figure 3 with x1, x2, x5, x7, x8, and x9 all equal to 1, and x3, x4, and x6 all equal to 0. In this solution, the requirement that the number of rendezvous at a be at least 1 is met by setting the unconstrained variables x5 and x8 to 1. Figure 4 shows the FSAs with the transitions having flow indicated by bold arcs. The flow in the FSA for t1 has two connected components, one from the initial state to state 3, as expected, and one made up of flow in the cycle at state 4, not connected to the flow from state 1 to state 3. It is obvious that the flow in each FSA corresponding to an actual execution must be connected, so this is a spurious solution, one that does not correspond to a real execution.
Figure 4: Solution with disconnected cycle
This example illustrates the problem but is not of much independent interest. The same problem, however, occurs with some frequency in the analysis of more interesting systems. For instance, in our recent analysis of the Chiron user interface development system [2], we encountered solutions with disconnected cycles in trying to verify 2 of the 10 properties we checked. In those cases, we were able to reformulate the properties by specifying additional segments, verifying other properties that allowed us to eliminate some solutions, or choosing other events to represent the high-level requirement. These modifications, however, represent a considerable expense in increased analyst effort and verification time. In the next section, we describe a technique for eliminating these solutions with more than one component of flow in an FSA.
3. ELIMINATING SPURIOUS CYCLES
3.1 A Straightforward Approach
A related problem is well known in the optimization literature. When formulating the Traveling Salesman Problem as an integer programming problem, it is essential to ensure that the solution represents a single tour visiting all the cities, rather than a collection of disconnected subtours each visiting a proper subset of the cities. A standard approach for eliminating solutions with disconnected subtours is to add inequalities that prevent the solution from visiting cities in a subset U unless the solution includes an arc from a city not in U to one in U. Thus, if the variable x_{i,j} is 1 if the solution represents a tour in which the salesman goes directly from city i to city j, and 0 otherwise, the standard formulation of the Traveling Salesman Problem would include, for each j, the inequality
Sum over i of x_{i,j} = 1, (1)
to enforce the requirement that each city is entered and left exactly once. To eliminate the possibility of a subtour in the subset U we would add the inequality
Sum over i not in U and j in U of x_{i,j} >= 1, (2)
which requires that the salesman travel from a city outside U to a city in U. (Of course, we need an inequality like (2) for every subset U of size at least 2 and at most N - 2, where N is the number of cities.)
In our case, to prevent a solution in which there is flow in a disconnected cycle C, we can add an inequality requiring that, when there is flow in C, there must be flow entering C from outside. This is a little more complicated than the situation for the Traveling Salesman Problem. In that case, we know by (1) that the solution must enter each city exactly once. In our case, we do not want to require flow into one of the states making up C unless there is flow along one of the transitions in C. For instance, we only want to require flow on transition 4 in our example when there is flow on transition 5. To do this in general, we would need a quadratic inequality such as
x4 * x5 >= x5. (3)
Integer quadratic programming is, however, much harder than integer linear programming and we would like to avoid introducing quadratic inequalities. The standard technique is to impose an upper bound B on all the variables (i.e., to assume that no transition occurs more than B times), and to replace the quadratic inequality (3) with the linear inequality
B * x4 >= x5. (4)
The integer solutions of (3) having x4, x5 <= B are exactly the same as those of (4). (We note that imposing an upper bound on all the variables would mean that INCA's analysis is no longer strictly conservative. If the system of inequalities has no solutions with the x_i all less than or equal to B, we only know that no execution on which each transition occurs at most B times can violate the property. Since B can be taken to be quite large, such as 10,000 or 100,000, this restriction is unlikely to be a serious one in practice.)
The problem with these approaches is that they may require too many extra inequalities. The number of subtours that have to be eliminated in the Traveling Salesman Problem is essentially the number of subsets of the set of cities and is clearly exponential in the number of cities. Similarly, the number of cycles in an FSA can be essentially equal to the number of subsets of its set of states. We have constructed a small concurrent Ada program with only 90 lines of code in which the FSA for one task has only 42 states but has 1,160,290,624 distinct subsets of states each forming at least one cycle. An integer programming problem with that many inequalities is infeasible. A better method is required.
3.2 A More Practical Method
In this section, we describe a method for preventing spurious cycles that requires, for each FSA and segment of execution, S + T new variables and S + 2T - 1 new inequalities, where S is the number of states in the FSA and T is the number of transitions.
The basic idea is essentially as follows. Suppose we have a solution to the system of equations and inequalities originally generated by INCA. For each FSA and each segment of execution, we attempt to construct a subgraph with the same vertices as the FSA but whose edges are a subset of those that have positive flow in the solution. We require that (i) if there is flow into a vertex v in the solution, some edge terminating in v must occur in the subgraph, and (ii) each vertex v of the subgraph can be assigned a "depth" dv in such a way that the depth of a given node is greater than that of any of its predecessors in the subgraph.
If the original solution has no disconnected cycles, we can choose for our subgraph a spanning tree for the edges with flow and take the depth of a vertex to be the distance from the root of the tree to the vertex. If the solution has a disconnected cycle C, however, we cannot construct such a subgraph. To see why, suppose we could construct the subgraph, and let v be a vertex in C for which dv <= du for all u in C. Since there is flow into v in the solution, v must have some predecessor u in the subgraph. Since the cycle C is disconnected from the flow starting at the initial state of the FSA, the state u must also lie in C. But if u is a predecessor of v in the subgraph, we have dv > du, contradicting the minimality of dv on C.
Of course, we do not want to consider the possible solutions to the system of equations and inequalities generated by INCA one at a time, attempting to construct the subgraph separately for each solution. Instead, we add new variables and inequalities, leading to an augmented system of equations and inequalities whose integer solutions correspond exactly to the integer solutions of the original system for which the appropriate subgraph can be constructed.
We describe the procedure for generating this augmented system for the case of a single FSA F and a single segment of execution. For each variable x_i in the original system corresponding to a transition in F, we introduce a new variable s_i with bounds 0 <= s_i <= 1. This variable will be 1 if the corresponding edge is in the subgraph, and 0 otherwise.
For each state v in F, we introduce a new variable dv with bounds 0 <= dv <= N, where N is some integer which is at least the maximum length of any non-self-intersecting path through the FSA. For instance, N can be taken to be the number of states in F. The variable dv will be the depth of v.
We then generate inequalities involving these new variables. Each variable s_i corresponds to a transition from some state u of F to a state v. We generate the inequalities
s_i <= x_i and dv - du >= (N + 1) * s_i - N.
The first inequality says that s_i must be 0 if x_i is 0, so that the corresponding edge can be in the subgraph only if the solution has positive flow along that edge. The second inequality requires that dv be greater than du if the edge from u to v is in the subgraph. If the edge is not in the subgraph (i.e., if s_i is 0), the inequality reads dv >= du - N, and the bounds on dv and du make that vacuous.
Finally, let In(v) denote the number of transitions into the state v. For each state v of F, other than the initial state, we generate the inequality
Sum of x_j <= B * In(v) * (Sum of s_j),
where the sums are taken over all transitions into the state and B is an upper bound on all the variables. (As noted earlier, B can be taken to be quite large.) If all the x_j are 0, this inequality is satisfied vacuously, but if any x_j is positive, the inequality forces some s_j to be positive. This means that, in a solution with flow into state v, some edge terminating in v belongs to the subgraph.
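As a concrete illustration, a small sketch (our own, following the inequalities as reconstructed above; the textual constraint format and argument names are arbitrary) that generates the new variables and inequalities for one FSA might look like this:

def cycle_elimination_constraints(transitions, states, initial, B, N):
    """transitions: list of (i, u, v), meaning transition i runs from state u to v.
    Returns the extra inequalities as strings; s_i and d_v are the new variables."""
    constraints = []
    for (i, u, v) in transitions:
        # s_i may be 1 only if the solution has flow on transition i
        constraints.append(f"s{i} <= x{i}")
        # if s_i = 1 then d_v >= d_u + 1; if s_i = 0 this is vacuous given 0 <= d <= N
        constraints.append(f"d{v} - d{u} - {N + 1} s{i} >= -{N}")
    for v in states:
        if v == initial:
            continue
        incoming = [i for (i, u, w) in transitions if w == v]
        if incoming:
            xs = " + ".join(f"x{i}" for i in incoming)
            ss = " + ".join(f"s{i}" for i in incoming)
            # any flow into v forces some subgraph edge terminating in v
            constraints.append(f"{xs} <= {B * len(incoming)} ({ss})")
        # (the bounds 0 <= s_i <= 1 and 0 <= d_v <= N are declared separately)
    return constraints

For the FSA of task t1 in Figure 2, for example, this yields 15 inequalities (two for each of the six transitions, plus one for each of the three non-initial states), matching the S + 2T - 1 count given above.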
The argument sketched at the beginning of this section proves the following theorem, showing that this method eliminates only solutions with disconnected cycles.
Theorem 1. Let P be the system of equations and inequalities generated by INCA to check a particular property of a given concurrent system. Let P' be the augmented system constructed from P as described above. A solution of P' assigns values to all the variables in P as well as additional variables; we thus obtain an assignment of values to the variables in P from a solution to P' by projection. The set of integer solutions of P with all variables taking values at most B and no disconnected cycles is exactly equal to the set of projections of integer solutions of P' with all variables taking values at most B.
In general, a query can specify more than one execution segment, so the situation is a bit more complicated. In the general case, INCA constructs a flowgraph as follows. First, it creates one copy of each FSA for each segment specified in the query. Each copy can then be optimized independently, removing unnecessary states or transitions, based on the restrictions imposed in the query for that segment. As seen in the example in Section 2.1, INCA can determine from the query the states in which each FSA could be at the end of each segment. It then adds a "connect" edge from each of the possible end states for segment i to the corresponding state in segment i + 1. These edges connect the flow representing events in one segment of an execution to flow in the next segment. Finally, an initial node is added with connect edges to certain states in the first segment of each task, and a final node is added with incoming connect edges from the possible end states in the final segment of each task. This flowgraph is the actual structure which INCA uses to generate the ILP system.
The algorithm described in this section can actually be applied to any subset of vertices in the flowgraph, rather than to the whole flowgraph, thereby eliminating only those spurious solutions in which there is a disconnected cycle contained in that subset. For, given a subset W of vertices of the flowgraph, one can form a new graph V as follows. Create a vertex in V for each vertex in W, and also add an initial and a final vertex to V. For each edge joining two vertices in W, create a corresponding edge in V. For each edge originating outside W and terminating in W, create a corresponding edge in V from the initial vertex to the corresponding vertex. For each edge originating in W and terminating outside of W, create a corresponding edge in V from the corresponding vertex to the final vertex.
Each edge in V has associated to it an ILP variable, which is the variable associated to the corresponding edge in the original flowgraph. So we can apply the algorithm to V, generating new variables and inequalities which are added to those INCA originally produced from the flowgraph, and the same arguments given above go through.
Restricting the algorithm in this way has many practical applications. Suppose, for example, that a solution contains a single disconnected cycle. It is clear that that cycle must lie within a single segment of a single task in the flowgraph. That is because there are no edges from a state in one segment to a state in a preceding segment, and there are no edges from states of one task to another. Now, to apply the cycle-elimination algorithm to the entire flowgraph might be very expensive, both in terms of the time and memory to generate the new variables and constraints, and the time and memory needed by the ILP tool to solve the new system. In this case, it makes sense to apply the algorithm only to the problematic segment of the problematic task. Typically, the segments behave quite independently, and the existence of spurious cycles in one segment is not related to the existence of spurious cycles in other segments.
One might be tempted to be as conservative as possible and apply the cycle-elimination algorithm to only those vertices involved in the offending cycle. This is usually fruitless, as, more often than not, another spurious solution will be found by expanding the cycle to include other vertices. However, no matter how much the cycle expands, it still must lie entirely in the single segment of the single task, and therefore the best strategy might be to apply the algorithm to the entire problematic segment in that task as soon as one spurious cycle appears there.
4. PRELIMINARY EXPERIMENTS
The current version of INCA consists of about 12,000 lines of Common Lisp. INCA writes out a file describing the system of equations and inequalities in a standard format (the MPS format), and we then use a commercial package called CPLEX to read this file and solve the system. (We also use a separate program to translate Ada programs into the native input language of INCA.) The optimizations INCA uses to reduce the number of variables and inequalities make the introduction of new variables and inequalities somewhat complicated, and integrating our method into INCA will involve a substantial programming effort. For our initial exploration of the effect of applying our method, we have therefore chosen to proceed by modifying the MPS file produced by INCA. We have written a Java program that reads this file, and a file describing the flowgraph, and produces a new MPS file representing the augmented system of equations and inequalities. We can then compare the performance of CPLEX on the original system and the augmented system. At this stage, however, we cannot measure how long it would take INCA to generate the augmented system of equations and inequalities.
For these experiments, we used INCA version 3.4, Harlequin Lispworks 4.1.0, and CPLEX version 6.5.1 on a Sun Enterprise 3500 with two processors and 2 GB of memory, running Solaris 2.6. The upper bound B representing the maximum number of times an edge may be traversed in a violating execution was taken to be 10,000. We used the default options on CPLEX, except for the following changes: mip strategy nodeselect was set to 2, mip strategy branch was set to 1, and mip limits solutions was set to 1. (The first two affect choices made in the branch-and-bound algorithm and the third stops the search as soon as an integer solution is found.) For each ILP problem, we ran CPLEX five times and took the average time. The times reported here were collected using the time command, and include both user and system time.
4.1 A Scalable Version of the Example from
Section 2
For the rst experiment, we created a scalable version of the
simple example described in Section 2.1. Given an integer
we modied the Ada program in Figure 1 to have n
copies of task t2 and to have alternatives in the outer
select statement. Each of the new copies of task t2 calls
the same entries in t1. (In detail, we replaced task t2 with n
copies of itself, calling these tc1,. ,tcn. In the body of t1,
we replaced the rst accept c line with n copies of itself and
replaced the body of text beginning with the rst accept a
and ending with the last or with n copies of itself.)
As before, we wish to verify that a rendezvous at entry a can
never precede a rendezvous at entry b. INCA constructs an
FSA for t1 in which there are 2n+4 nodes and 4n 2 +3 edges.
(The picture is slightly dierent from what one might expect
because we have added a start vertex and an end vertex,
and INCA performs some trimming of the FSA.) There are
distinct subsets of the vertex set for t1 which
cycles.
For each n, INCA nds a spurious solution involving a disconnected
cycle in t1. Applying the algorithm in Section 3.2
to the portion of the
owgraph coming from the FSA for
task t1, however, yields an ILP problem that CPLEX reports
has no integer solutions, thus verifying that an a can
never precede a b.
For n 3, the number of variables in the INCA-generated
ILP system is 4n 2 +2n, and the number of constraints (equa-
tions and inequalities) is 5n+ 1. The number of variables in
the new system is
and the number of constraints is
The time that it takes CPLEX to nd a spurious solution to
the original system and the time it takes to determine the inconsistency
of the augmented system are shown in Figure 5.
These times are very modest, all under 10 seconds, and are
in fact dwarfed by the time it takes INCA to generate its
internal representations of the problem and the original ILP
system. was about 30 minutes.) It seems,
however, that for large n, the substantial increase in the
number of constraints in the augmented system, due to the
large number of edges in the FSA for t1, does begin to have
a signicant impact on the time to solve the ILP problem.13579
time
Conclusive result with cycle elimination
Spurious solution without cycle elimination
Figure
5: CPLEX times for scaled simple example
4.2 Spurious Cycles in Chiron
The second experiment involves the Chiron user interface
system [9]. A Chiron client comprises some abstract data
types to be depicted, artists that maintain mappings between
these ADTs and the visual objects appearing on the
screen, and runtime components that provide coordination.
In particular, certain events indicating changes in the state
of the ADTs are dened, and an ADT Wrapper task noties
a Dispatcher task whenever an event occurs. The Dispatcher
maintains an array for each event that records which artists
are interested in being notied of that event. (Artists register
and unregister for an event to indicate their current
interest in being notied.) After receiving the event from
the ADT Wrapper, the Dispatcher then loops through the
artists in the appropriate array and calls an entry in each
artist to notify it of the event. The Chiron architecture is
highly concurrent and even a toy Chiron interface represents
about 1000 lines of Ada code. In [2], we compared the performance
of several nite-state verication tools (FLAVERS,
INCA, SMV, and SPIN) in checking a number of properties
of a Chiron interface with two artists and n dierent kinds
of events, for n ranging from 2 to 70.
One of the properties we wish to verify about this system,
called Property 4 in [2], is that the Dispatcher noties the
artists of the right event. For example, if the Dispatcher
receives event e1 from the ADT Wrapper, we wish to show
that it does not notify any artist of event e2 until it has
notied the appropriate artists of e1. To formulate this
property as an INCA query takes 2 segments.
We were in fact able to verify this property using INCA, but
only in systems where the number of kinds of events, n, is
at most 5. (FLAVERS and SPIN were able to verify this
property up to at least
To scale the problem further with INCA, we needed to decompose
the Dispatcher task into a subsystem. This entails
creating a new task Dispatch ei, for
maintains the array for event ei. The Dispatcher task itself
is left as an interface which just passes register, unregister,
and notication requests on to the appropriate Dispatch ei
in a way such that no additional concurrency is introduced.
(If the internal communications of the Dispatcher subsystem
are hidden, the new system is observationally equivalent
to the original one.) This decomposed system has the
advantage that as n increases, the size of each Dispatch ei
FSA remains constant, although the number of these tasks
increases. While in general this decomposition greatly improves
the performance of INCA, for this property INCA
yields an inconclusive result. The problem is a disconnected
cycle in the task Dispatch e1 in the second segment.
To get around this problem, we reformulated the property
using dierent events to represent the high-level property.
This depended on the prior verication of other properties
relating the events used in the original and new formulations
and was cumbersome and time-consuming. (Once the property
was reformulated, however, the performance of INCA
on this decomposed system was considerably better than
that of the other tools. By n = 30, the INCA time was already
roughly an order of magnitude better than the times
for the other tools and INCA could verify the property for
much larger values of n. The differences in performance of
the tools on this property, for the two versions of the Chiron
system, are typical of what we observed on other properties.
The implications of this are discussed in [2].)
Using the cycle elimination algorithm described here, we
were able to verify the original property directly, for 2 <= n <= 70.
In this case there are 23 nodes and 63 edges in the
problematic task/segment for all n. Hence for each n our
algorithm adds 86 variables and 148 constraints to the ILP
system. For n >= 3, the number of variables in the original
system is given by a formula whose n-dependent term
is 58, 118, or 84, according as n is congruent
modulo 3 to 0, 1, or 2, respectively. (This reflects the way
we chose to have artists register for events as we scaled up
the number of events.) The number of constraints in the
augmented system is given by a similar formula
whose term is 195, 281, or 235. In this
case, eliminating spurious cycles adds a constant number
of variables and constraints as n increases. The CPLEX
times for each n, for the original system for which CPLEX
found a spurious solution and the result of the analysis was
inconclusive, and for the augmented system for which the
property was conclusively veried, are given in Figure 6.
Again, the times are all under 5 seconds and represent a
very small portion of the total analysis time. (For n = 70,
this was about 2.5 minutes.) The spike in
the CPLEX time for the augmented system seems to be
due to the occurrence of certain numerical problems for this
particular system.
4.3 The Cost of Unnecessarily Preventing Spurious
Cycles
We also tried adding the cycle elimination variables and constraints
to a system which already yielded a conclusive re-
sult. This might yield insight into the marginal cost of having
INCA add cycle elimination by default for any problem.
Figure 6: CPLEX times for Chiron Property 4 (conclusive result with cycle elimination vs. spurious solution without cycle elimination)

For this experiment, we used another property from [2]. In
this case, we used Property 1b, which says that an artist
never unregisters for an event unless it is already registered
for that event. As in [2], we restricted ourselves to checking
this for a single artist and event. The resulting property
requires 2 segments for its formulation as an INCA query.
Using the decomposed dispatcher version of the client code,
INCA verified this property without any need for cycle elimination,
for n <= 70. The number of variables in the INCA-generated
ILP system (for n >= 3) is given by a formula whose
n-dependent term is 77, 146, or 107, according as n is congruent
modulo 3 to 0, 1, or 2, respectively. The number of constraints
is given by a similar formula whose term is 69, 96, or 81.
We then applied the cycle-elimination algorithm to all of the
Dispatch ei FSAs (recall that there is a separate Dispatch ei for
each of the n event types) and both segments. (In the experiment
discussed in the previous section, we only applied
the algorithm to one FSA and one segment.) This entailed
adding new variables to the system, with an n-dependent term
of 552, 833, or 682, and adding new constraints,
with a term of 897, 1391, or 1123. The times
required by CPLEX to find the conclusive result in each case
are graphed in Figure 7.
Although the ILP systems in the augmented case are quite
large (18,087 variables and 22,563 constraints for n = 70), for
the larger n it still appears that CPLEX can determine the
inconsistency of the system in a very short time (less than
4 seconds). If this example is typical, the real cost in introducing
cycle elimination in INCA might lie in generating
the new ILP system, not in solving it.
5. CONCLUSIONS AND FUTURE WORK
Figure 7: CPLEX times for Chiron Property 1b (conclusive result with cycle elimination vs. conclusive result without cycle elimination)
Some finite-state verification tools always provide a conclusive
result on any problem they can analyze. A tool that
walks a graph of the reachable states of a concurrent system
will never report that the system might deadlock when in
fact the system is deadlock-free (assuming, of course, that
the graph correctly represents the reachable state space of
the system). But such a tool must be able to store the full
set of reachable states, and is unable to report any results
for a system whose reachable state space exceeds the storage
available. Other tools, such as INCA, deliberately overestimate
the collection of possible executions of the system, and
thus accept the possibility of inconclusive results (or spurious
reports of the possible faults), in order to increase the
range of systems to which they can be applied.
For INCA, there are two main sources of imprecision in the
representation of executions of the system. The first of these
is the fact that semantic restrictions on the order of occurrence
of events in different concurrent processes are generally
not represented in the equations and inequalities used
by INCA. The second source of imprecision is the fact that
the equations and inequalities allow solutions in which the
flow in the FSA representing a concurrent process may have
cycles not connected to the initial state. In this paper, we
have shown how imprecision caused by this second source
may be eliminated.
Specific cases of inconclusive results can often be addressed
by careful reformulation of the property being checked, although
this may require the verification of additional properties
to justify the reformulation. This process can require
very substantial amounts of effort on the part of the human
analysts, as well as considerable costs to carry out the necessary
verifications. We have also sometimes addressed inconclusive
results by manually inserting special inequalities
to prevent disconnected flow on a small number of specific
cycles. The problem with generalizing this approach is that
the number of cycles may well be exponential in the size
of the concurrent system, and each of the cycles requires a
separate inequality. Even if it were feasible to automate the
generation of these inequalities, the resulting ILP problems
would be far too large to solve. The numbers of new variables
and inequalities introduced by the method presented in
this paper are linear in the number of states and transitions
in the FSAs representing the processes of the concurrent
system being analyzed.
We have reported here the results of some preliminary experiments
aimed at assessing the cost, in increased time to
solve the systems of equations and inequalities, of applying
our method. These experiments suggest that the cost
is relatively small, especially when the effort of the human
analysts is taken into account. We plan to carry out additional
experiments of the same type, and to integrate our
technique into the INCA toolset so that we can also evaluate
the time needed to generate the additional variables and
inequalities.
We are also investigating approaches to eliminating some of
the imprecision caused by not representing restrictions on
the order of events in different processes. Fully representing
the restrictions imposed by the semantics of the programming
language or design notation may not be practical and
might limit the applicability of INCA in the same way that
having to store the full set of reachable states limits the
applicability of tools based on exploring the graph of reachable
states. We are therefore exploring methods that allow
the analyst to control the degree to which restrictions on
order are represented. For example, one approach that we
are considering is to formulate some of the flow and communication
equations in such a way that they hold at every
stage of an execution, not just the end. These reformulated
flow and communication equations therefore enforce some of
the restrictions on the order of events in different processes.
They also determine a region in n-dimensional Euclidean
space, where n is the number of variables in the system of
equations and inequalities. We then look for a point satisfying
the full system of equations and inequalities that can
be reached by taking certain integer-sized steps through this
region. Successfully reducing this kind of imprecision will be
important in applying the INCA approach to many systems
where interprocess communication is only through access to
shared data.
6.
--R
Automated derivation of time bounds in uniprocessor concurrent systems.
An empirical comparison of static concurrency analysis techniques.
An empirical evaluation of three methods for deadlock analysis of Ada tasking
Evaluating deadlock detection methods for concurrent software.
A practical method for bounding the time between events in concurrent real-time systems
Towards scalable compositional analysis.
Using integer programming to verify general safety and liveness properties.
--TR
A practical technique for bounding the time between events in concurrent real-time systems
An empirical evaluation of three methods for deadlock analysis of Ada tasking programs
Automated Derivation of Time Bounds in Uniprocessor Concurrent Systems
Towards scalable compositional analysis
Using integer programming to verify general safety and liveness properties
Evaluating Deadlock Detection Methods for Concurrent Software
Comparing Finite-State Verification Techniques for Concurrent Software | INCA;integer programming;finite-state verification;cycles |
349141 | A thread-aware debugger with an open interface. | While threads have become an accepted and standardized model for expressing concurrency and exploiting parallelism for the shared-memory model, debugging threads is still poorly supported. This paper identifies challenges in debugging threads and offers solutions to them. The contributions of this paper are threefold. First, an open interface for debugging as an extension to thread implementations is proposed. Second, extensions for thread-aware debugging are identified and implemented within the Gnu Debugger to provide additional features beyond the scope of existing debuggers. Third, an active debugging framework is proposed that includes a language-independent protocol to communicate between debugger and application via relational queries ensuring that the enhancements of the debugger are independent of actual thread implementations. Partial or complete implementations of the interface for debugging can be added to thread implementations to work in unison with the enhanced debugger without any modifications to the debugger itself. Sample implementations of the interface for debugging have shown its adequacy for user-level threads, kernel threads and mixed thread implementations while providing extended debugging functionality at improved efficiency and portability at the same time. | INTRODUCTION
Threads have become an accepted abstraction of concurrency
using the shared-memory programming paradigm and
provide the means to exploit parallelism in a shared-memory
multi-processor environment. Today, many thread implementations
adhere to the POSIX Threads (Pthreads) standard
[21], which defines a common application interface
(API) to exhibit the functionality of threads. The Pthreads
standard describes the semantics in terms of the observable
behavior for this API but excludes constraints on implementation
choices. Hence, Pthreads implementations range
from user-level libraries [14, 17] via mixed-mode threads [16,
20, 8] to kernel-level implementations [2, 22, 1, 11].
Software development of threaded programs should be facilitated
by adhering to the Pthreads API at the level of
program design and implementation. The testing and de-bugging
stage, however, lacks support for threaded appli-
cations. The motivation for special testing and debugging
tools is given by a number of properties that distinguish
multi-threaded programs from single-threaded ones:
1. The control flow of threads may interleave or even execute
in parallel.
2. Threads may suspend and resume execution voluntarily,
due to preemption or as a result of events (signals).
3. Synchronization between threads defines a partial order
of program execution.
The debugging process, which takes at least 50% of the
development effort together with testing, is affected for
threaded programs in several ways [7]. The following issues
illuminate common problems.
Conventional breakpoint debugging does not suffice to
capture a single flow of control for a program. The
programmer is accustomed to follow the control flow of
one thread. When two consecutive breakpoints within
a thread are hit, other threads may have been executing
between these breakpoints. Furthermore, a breakpoint
in a subroutine called by different threads may
be hit in sequence for different threads at a time.
The state of threads and synchronization objects is not
visible during debugging due to a lack of debugger in-
formation. However, state information would be vital
to allow inferences about the execution stage of the
program and its progress relative to the partial order
of synchronization.
Thread scheduling cannot be controlled by the debugger.
It may, however, be desirable to forcibly suspend
or resume the execution of selected threads to identify
problems in the application by reducing interference
between or ensuring reaction to other threads, respectively.
Thus, a thread-aware debugger should provide the following
features that address these issues to facilitate the debugging
of threads:
Thread-specific breakpoints stop the application only
when a certain thread reaches the breakpoint.
Status inquiries about threads and synchronization objects
show the progress of execution and the current
state of the objects.
Scheduling control provides the means to forcibly suspend
and resume threads.
Scheduling breakpoints halt the application upon a context
switch and serve as a means to track the interleaving
of execution between threads.
The work described in this paper also aims at providing
a flexible platform for both the debugger and a variety of
thread implementations to support thread-aware debugging.
Instead of customizing the debugger for each thread imple-
mentation, a common framework for controlling threads is
provided, which communicates with the threads of the ap-
plication. The thread implementation, on the other hand,
provides a standard interface for debugging to serve requests
by the debugger. This approach has several advantages:
Portability is ensured through an open interface for de-bugging
threads on one side and functional extensions
to the debugger on the other side. The former requires
that thread implementations support this interface by
providing at least part of the functionality but does
not assume any particular API for thread implemen-
tations, e.g., POSIX compliance is optional. The latter
is independent of the actual thread implementation
and remains unchanged, regardless of the extent of the
support by the open interface or the source language
of the application.
Extensibility is guaranteed by the communication interface
between the debugger and the threaded applica-
tion. This interface only defines a query language but
not the actual messages themselves to allow the addition
of new functionality and new messages in the future
without changes in the communication interface
on either side.
Flexibility is provided for partial support instead of
the full functionality of the interface for debugging
threads. The debugger remains functional but provides
less information and less control over threads
when only a part of the interface for debugging threads
exists. This allows partial implementations of the debug
interface where certain information is not available
or not accessible, e.g., when kernel threads prohibit ac-
cess/control of threads.
Optional invocation allows the application to run without
the thread debugging support while the same executable
may be used for debugging when needed. The
thread debug support can be dynamically loaded as an
add-on library only upon activation of the debugger.
The technical issues of these features are presented in detail
in the description of the design and implementation of
the thread-aware debugger and the interface for debugging
threads.
This paper is structured as follows. Section 2 gives an
overview of the design. Section 3 describes the open interface
for debugging threads whose implementation will depend
on the thread implementation. Section 4 introduces
the thread debug interface common to all implementations.
Section 5 presents the communication structure between debugger
and application. Section 6 summarizes the extensions
to the debugger. Section 7 describes the implemen-
tation. Section 8 lists the extended commands for thread-aware
debugging. Section 9 discusses related work. Finally,
Section 10 presents the conclusions.
2. DESIGN OVERVIEW
The components of the framework for thread-aware debugging
comprise two executable components and two inter-
faces. The executable components are the application on
one side and the debugger on the other side. Since the application
is assumed to be multi-threaded, it also utilizes
a thread implementation. The debugger includes enhancements
for thread-aware debugging and for communication
with the application. The interfaces consist of a thread debug
interface (TDI) and the thread extensions for debugging
(TED). The TDI includes a query language interpreter
and provides the communication interface between debugger
and application. The TED comprises the open interface
for debugging threads as a thin layer over the actual thread
implementation.
The separation of TDI and TED was a design choice aiming
at separating generic parts of the framework, such as
the TDI, from non-generic parts that depend on the actual
thread implementation, such as the TED. Without the distinction
between these interfaces, the TDI of a thread-aware
debugger would need to be modied each time when support
for a new thread implementation is added. Figure 1(a) depicts
this case where the TDI includes interface components
In for each thread implementation. These interface
components would be required to extract internal information
from the thread implementation and transform them
into a normalized representation. Even if the threads API
was restricted to POSIX threads, as depicted here, the components
of data structures (e.g. pthread_t) would differ
from one implementation to the next, requiring the interface
components as a mediator. This, in turn, forces a rebuild of
Figure 1: Design Options for Encapsulation. (a) Non-Generic Design: the TDI contains implementation-specific interface components I1 ... In, one per Pthreads implementation; (b) Generic Design: a generic TDI accesses each thread implementation through its TED implementation via a common TED-access interface.
the TDI each time support for a new thread implementation
is added.
Figure
1(b) shows better encapsulation chosen for the im-
plementation. The TDI uses a generic interface to the
TED component. The TED provides access to the internals
of a thread implementation, i.e., the TED has an
implementation-dependent part. Since the TED only provides
a thin layer, it reduces the amount of implementation-dependent
code considerable compared to Figure 1(a) where
the TDI is implementation dependent (and the TED is miss-
ing). The TED also provides opportunities to integrate non-standard
thread implementations since the abstraction from
the thread API occurs early while the non-generic approach
requires adherence to a certain thread interface on the TDI
level.
The encapsulation by TDI and TED also provides the means
of active debugging. In active debugging, the application is
enhanced by special routines that may provide and collect
information about the state or perform manipulations on the
execution of an application. This approach facilitates and
speeds up the debugging process. Passive debugging only
probes the application. Instead of extensions for debugging
on the application side, the debugger is enhanced to contain
knowledge about the thread implementation. Table 1
compares active to passive debugging. Generally, debuggers
extract information from an application using a procedural
approach through probing data, even if data may later be
processed within the debugger under a different paradigm.
Active debugging allows preprocessing on the application
side to communicate data following an arbitrary paradigm,
e.g., using a declarative paradigm, as given in this paper.
The encapsulation by TDI and TED hides implementation
details of the threads to enhance portability, as discussed
before. In addition, the TDI maintains a database of the
application's state. Queries to the database are performed
in a uniform and extensible query language. Furthermore,
requests for the state of distinct objects from the debugger
can be clustered and are optimized to remove redundan-
cies. As a result, such a declarative query interface performs
better than a procedural interface where each information
request would require a separate action by the debugger.
Post-mortem debugging, i.e., debugging core files of a prematurely
terminated execution of the application, provides
support for debugging threads in the passive case. Active
debugging does not work with post-mortem debugging since
the program is no longer executable. Hence, the TED functionality
cannot be utilized.
3. THREAD EXTENSION FOR DEBUGGING
The objective of the TED layer is to provide uniform access
to implementation-dependent thread structures. Basic
primitives to manipulate sets within the TDI realize a uniform
method to access information. This information can
either be extracted directly from the thread implementation
(if the API supports direct access) or has to be extracted by
extension of the thread implementation for debugging.
For example, a thread within an application has a state similar
to a process: it may be running, ready, blocked or termi-
nated. 1 The Threads API, however, may not provide access
to the internal state of a thread. A non-standard function
1 The implementation actually distinguishes the cause of
blocking. A thread may be blocked on a mutex, a condition
variable, a timer object, due to suspension or for an
unspecified reason (other).
Table 1: Active vs. Passive Debugging

Issue                              Active Debugging           Passive Debugging
details of thread implementation   not known to debugger      must be known by debugger
change/add new thread impl.        no changes of debugger     debugger must be enhanced
extract info from application      declarative approach       procedural approach
query overhead                     lower, no redundancies     higher, redundant requests
post-mortem thread debugging       not possible               possible
to access the state of a thread is added in such a case to
provide the required access. The state is then translated to
a standard encoding defined by TED that is uniform across all thread
implementations.
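To make this concrete, the following C sketch shows what such a standard encoding and a translation routine could look like. The enum values mirror the states listed in footnote 3; the input type impl_state_t and its IMPL_* constants are hypothetical stand-ins for whatever the underlying thread library uses internally, so this is an illustration rather than the actual TED code.

/* Hypothetical placeholders for the library's internal state values. */
typedef enum { IMPL_RUN, IMPL_READY, IMPL_MUTEX_WAIT, IMPL_COND_WAIT } impl_state_t;

/* Standard TED state encoding (cf. footnote 3). */
typedef enum {
  TED_UNDEF = 0, TED_RUNNING, TED_READY,
  TED_BLOCKED_M,   /* blocked on a mutex                */
  TED_BLOCKED_C,   /* blocked on a condition variable   */
  TED_BLOCKED_T,   /* blocked on a timer object         */
  TED_BLOCKED_S,   /* suspended (forced)                */
  TED_BLOCKED_O,   /* blocked for an unspecified reason */
  TED_EXITING
} ted_state_t;

/* Translate an implementation-specific state into the TED encoding. */
static ted_state_t ted_translate_state(impl_state_t s)
{
  switch (s) {
  case IMPL_RUN:        return TED_RUNNING;
  case IMPL_READY:      return TED_READY;
  case IMPL_MUTEX_WAIT: return TED_BLOCKED_M;
  case IMPL_COND_WAIT:  return TED_BLOCKED_C;
  default:              return TED_BLOCKED_O;
  }
}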
The TED provides access functions for attributes with a
common signature to simplify and unify access to any internal
data structure.
Functions for reading (Sr) and writing (Sw), defined over the
domain of objects DO and the domain of addresses DA, allow arbitrary values
to be associated with objects for later inquiries. 2 The
objects are active or passive entities of the threads imple-
mentation, such as threads, mutex objects and condition
variables. Objects of a common entity can be accessed using
set operations that are either mapped onto the threads
API or onto functions that serve as debugging extensions of
the API and access internal structures. For example, the
set of all threads within an application may not be accessible
through the threads API but there commonly exists an
internal data structure with access functions, which can be
utilized by TED. Mutex objects, on the other hand, are typically
not linked to each other so that the set of mutex objects
has to be maintained on the TED level. For this purpose,
call-outs of the thread implementation to the TED layer
upon object creation provide the means to register these
objects in a common set within TED. These call-outs are
part of the modifications to the thread implementation to
ensure debugging support.
TED also supports relations between objects that are created
or revoked when certain events occur. Upon occurrence
of such an event, a call-back from the thread implementation
updates a relation. Let DT ; DM ; DCV be the domains
of threads, mutexes and condition variables, respectively.
Then, the following relations may hold (see Figure 2):
1. OwnedBy {ThreadID:DT, MutexID:DM}
2. BlockedOn {ThreadID:DT, MutexID:DM}
3. WaitFor {ThreadID:DT, CondVarID:DCV}
4. SignaledBy {CondVarID:DCV, MutexID:DM}
Relations 1 to 3 have a cardinality of 1 : N , i.e., a thread
may own multiple mutexes, multiple threads may be blocked
on the same mutex or may wait for the same condition vari-
able. Relation 4 has a cardinality of
only one mutex may be associated with a condition variable
that threads are blocked on at a time. Other cardinalities
can be supported as well. E.g., MIT Threads has a M : N
model that allows threads to be blocked on the same condition
variable even if they used different mutexes before
suspending. Notice that M : N cardinalities would require
2 In the implementation, DO is substituted by DA since
objects can be uniquely identified by their addresses. This
simplifies the mapping even further.
Figure 2: Booch Class Diagram of Object Classes (Thread, Mutex, and Condition Variable objects with attributes such as address of self, priority, state, and user function, connected by the relations OwnedBy, BlockedOn, WaitFor, and SignaledBy)
object classes since only scalar attributes are currently supported
by the TED; domain extensions to the TED
would be required.
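As a rough illustration of how the TED could maintain such relations, the sketch below keeps each 1 : N relation as a list of (left, right) pairs that is updated by call-backs from the thread implementation. The structure names, the static pool, and the call-back ted_note_mutex_acquired are illustrative assumptions, not the actual interface.

#include <stddef.h>

typedef void *ObjRefT;              /* objects are identified by address */

typedef struct rel_pair {
  ObjRefT left;                     /* ThreadID or CondVarID */
  ObjRefT right;                    /* MutexID or CondVarID  */
  struct rel_pair *next;
} rel_pair_t;

static rel_pair_t *owned_by;        /* (ThreadID, MutexID)   */
static rel_pair_t *blocked_on;      /* (ThreadID, MutexID)   */
static rel_pair_t *wait_for;        /* (ThreadID, CondVarID) */
static rel_pair_t *signaled_by;     /* (CondVarID, MutexID)  */

/* Call-back invoked by the thread implementation, e.g. after a
   successful mutex lock, to record a tuple of the OwnedBy relation.
   A static pool avoids calling the heap allocator from the library. */
void ted_note_mutex_acquired(ObjRefT thread, ObjRefT mutex)
{
  static rel_pair_t pool[1024];
  static size_t used;
  if (used < sizeof pool / sizeof pool[0]) {
    pool[used].left  = thread;
    pool[used].right = mutex;
    pool[used].next  = owned_by;
    owned_by = &pool[used++];
  }
}

The other three relation heads (blocked_on, wait_for, signaled_by) would be updated by analogous call-backs on blocking, waiting, and signaling events.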
The open interface for debugging threads encompasses access
functions for sets to iterate over their members and access
functions for attributes of a member. Table 2 depicts the
iterators and attribute functions and their operational de-
scription. Each interface function is registered with the TDI,
which uses it to build a database of the application's state
as explained in the next section. The domain of values returned
by the iterators and functions is DA and N ∪ {NULL},
respectively 3 . If a thread implementation supports only a
subset of this functionality, it simply does not register the
function. Invalid requests return NULL. Persistent objects
are discussed in the next section.
4. THREAD DEBUG INTERFACE (TDI)
The objective of the TDI component is an abstraction from
the thread implementation on one side and the debugger
on the other side. The TDI keeps a database of the ap-
plication's state. This approach supports the paradigm of
active debugging. The database maintained by the TDI is
only updated when the application changes its state wrt.
multi-threading objects. The TED may register a set of
operations that will inform the TDI of updates during the
application's execution. Notice that this approach is unique
to active debugging since passive debugging does not allow
application-side execution of auxiliary operations. The TDI
exports the following functions for the registration purpose:
int RegisterObject (RelT Rel, ObjRefT ObjRef);
int DeregisterObject (RelT Rel, ObjRefT ObjRef);
int IsRegistered (RelT Rel, ObjRefT ObjRef);
The signature of these operations includes the relation and
an object of the same type (thread, mutex or condition
variable). If the thread application supports the registra-
3 with the following exceptions: id and rstate have a domain
of Z ∪ {NULL}, state has a domain of {undef=0,
running, ready, blocked m, blocked c, blocked t, blocked s,
blocked o, exiting} where the different blocking states refer
to the cause of blocking: mutex, condition variable, timer,
suspension (forced) and other.
Table 2: Open Interface for Debugging Threads (Iterators and Attributes)

Iterators:
GetFirstThread, GetNextThread   get thread from set
GetFirstMutex, GetNextMutex     get mutex from set
GetFirstCond, GetNextCond       get condition variable from set

Attributes for threads:
id       process-persistent thread-ID
addr     address of the thread structure
prio     priority
state    execution state
rstate   implementation-dependent state
entry    address of thread's function
         address of the function's argument
newpc    next program counter
sp       stack pointer
mbo      blocked on this mutex
cvwf     blocked on this condition variable
pid      ID of process executing the thread

Attributes for mutexes:
id       process-persistent ID
addr     address of mutex structure
owner    mutex owner (thread ID)

Attributes for condition variables:
id       process-persistent ID
addr     address of cond var structure
cmutex   associated mutex
tion process, it will invoke the corresponding functions, e.g.,
when locking, unlocking and destroying a mutex. The TED
functionality described in the open interface for debugging
threads is also registered with the TDI. This allows the TDI
to generically invoke TED operations to resolve database
queries. The registration occurs via the following interface:
int SetIterFunc (RelT Rel, ObjRefT (*GetFirst)(),
                 ObjRefT (*GetNext)(ObjRefT Obj));
int SetAttrFunc (RelT Rel, AttrT Attr,
                 AttrDomainT (*GetFunc) (ObjRefT Obj),
                 AttrDomainT (*SetFunc) (ObjRefT Obj, AttrDomainT Val));
The exchange between TED and TDI about the range of
debugging support is further generalized by letting the TDI
inform the TED upon activation that a number of functions
are expected to be registered. This registration request of
the TDI includes the list of attribute functions, iterators
and registration procedures for all objects. The TED may
then register a subset or all of these functions, depending on
the range of support. This initial exchange only assumes a
known layout of the data types to be registered and serves
the portability of the involved software components.
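A minimal sketch of the TED side of this exchange is shown below. It registers a thread iterator and a state attribute with the TDI; the enum values, the completed parameter lists, and the stub helper bodies are illustrative assumptions about one possible TED implementation, not the actual code.

#include <stddef.h>

typedef void *ObjRefT;
typedef long  AttrDomainT;
typedef enum { REL_THREAD, REL_MUTEX, REL_COND } RelT;
typedef enum { ATTR_ID, ATTR_STATE, ATTR_PRIO } AttrT;

extern int SetIterFunc(RelT Rel, ObjRefT (*GetFirst)(void),
                       ObjRefT (*GetNext)(ObjRefT Obj));
extern int SetAttrFunc(RelT Rel, AttrT Attr,
                       AttrDomainT (*GetFunc)(ObjRefT Obj),
                       AttrDomainT (*SetFunc)(ObjRefT Obj, AttrDomainT Val));

/* Stubs standing in for accessors over the library's internal thread list. */
static ObjRefT ted_first_thread(void)                       { return NULL; }
static ObjRefT ted_next_thread(ObjRefT t)                   { (void)t; return NULL; }
static AttrDomainT ted_get_state(ObjRefT t)                 { (void)t; return 0; }
static AttrDomainT ted_set_state(ObjRefT t, AttrDomainT v)  { (void)t; return v; }

/* Invoked once when the TDI requests registration; a partial TED
   implementation simply omits the functions it cannot provide. */
void ted_register_with_tdi(void)
{
  SetIterFunc(REL_THREAD, ted_first_thread, ted_next_thread);
  SetAttrFunc(REL_THREAD, ATTR_STATE, ted_get_state, ted_set_state);
}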
The TDI handles the communication with the debugger in
such a way that the debugger receives a consistent view of
the multi-threaded objects in the application, which is discussed
in section 7. This abstraction provides the means to
support persistent identiers, as seen in Table 2. A persistent
identier is a unique identier assigned to an object for
its life time. This provision circumvents problems rooted
within thread implementations that recycle object identi-
ers. E.g., the Pthreads API only denes a common interface
including the signature of operations and their types.
A thread object has a certain type but the meaning of the
value is transparent, i.e., a value may refer to a thread object
A as long as it exists. Once A terminates, the value may
be recycled to refer to thread B. In a passive debugging ap-
proach, threads A and B cannot be distinguished explicitly,
it would be the user's responsibility to detect A's termination
and infer that another thread with the same identifier
truly refers to B. By active debugging, the TDI receives notice
of the creation and termination of a thread through the
registration procedure. This allows the TDI to assign its
own values to identify objects and use these values to communicate
with the debugger. The user is only exposed to
the debugger, which provides the persistent identifiers and
allows a distinction between threads A and B for active debugging.
The actual queries for the database are issued in a uniform
and extensible query language. Queries are defined according
to a specification of a relational algebra. Each query is
preceded by a mode that either refers to a TED query or a
user-defined extension with its own operational framework.
Queries can be selections and projections, limiting the set
of objects in question and defining the requested attributes,
respectively. Each query includes a set of selections of
a relation (comparing attributes with values) or
projections of a relation (with assignments).
The queries are resolved by a list of values corresponding to
the projections in the query or by an error message. The
results are a set of answers reduced to ensure that no duplicates
are contained within the set. Furthermore, requests
for the state of distinct objects can be clustered in one query
and are optimized to remove redundancies. As a result, this
declarative query interface performs better than a procedural
interface where each request would require a separate
function call by the debugger. An example is given in Section
7.
5. COMMUNICATION STRUCTURE
Breakpoint debugging is generally supported by a service
of the operating system. This service, e.g., the system call
ptrace under UNIX, provides access to the trace of a process
as depicted in Figure 3. The debugging process can peek or
poke one word of the application process at a time. It may
also continue the execution of the application. The performance
of the debugger is often constrained by the granularity
of its data accesses, which will be quantified in Section
7. When the approach of active debugging is utilized, large
amounts of data may be exchanged between the debugger
and the application, rendering the ptrace approach less efficient.
The responses of the TDI for a query issued by the
debugger may contain large amounts of data depending on
the number of active multi-threaded objects in the applica-
tion. Although queries are often much shorter, a symmetric
approach was chosen. The queries issued by the debugger
as well as the responses from the TDI are transmitted using
inter-process communication (IPC).
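One plausible way to realize such a buffered IPC channel is System V shared memory, sketched below. The key, the 8 kB buffer size, the single ready flag, and the function names are illustrative assumptions and not the actual TDI protocol.

#include <sys/ipc.h>
#include <sys/shm.h>
#include <string.h>

#define TDI_SHM_KEY  0x54444900   /* illustrative key           */
#define TDI_BUF_SIZE 8192         /* one 8 kB page per transfer */

struct tdi_channel {
  volatile int ready;             /* set by the writer, cleared by the reader */
  char         data[TDI_BUF_SIZE];
};

/* Create (or attach to) the shared buffer used for queries and responses. */
static struct tdi_channel *tdi_channel_attach(void)
{
  int id = shmget(TDI_SHM_KEY, sizeof(struct tdi_channel), IPC_CREAT | 0600);
  if (id == -1)
    return NULL;
  void *p = shmat(id, NULL, 0);
  return (p == (void *)-1) ? NULL : (struct tdi_channel *)p;
}

/* Writer side: copy one packet into the buffer and mark it available. */
static void tdi_channel_write(struct tdi_channel *ch, const char *msg)
{
  strncpy(ch->data, msg, TDI_BUF_SIZE - 1);
  ch->data[TDI_BUF_SIZE - 1] = '\0';
  ch->ready = 1;
}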
Figure 4: Communication between Debugger and Application. (a) Mutually exclusive execution: the debugger writes a TED request into the buffered IPC channel and retrieves each result packet through subsequent handler calls into the target process. (b) Parallel execution: a replicated target process fills the IPC channel with the result packets while the debugger reads them in parallel.
Another problem is posed by the fact that the application
process is stopped while the debugger is active (and vice
versa). The debugger can only make progress when its
queries are handled by the TDI, which is part of the applica-
tion. This problem is solved by letting the debugger issue a
call to a handler function within the application. A ptrace
call for continuation activates the server side of the TDI,
which receives the request, resolves the query and initiates
the response. The response may contain a large amount of
data that cannot be transferred in one buffer since the buffer
length is generally constrained by the IPC mechanism. The
debugger could issue repeated ptrace calls to receive one
packet at a time but this would result in a large number of
context switches of the debugger and the application process
as a side effect of using ptrace (see Figure 4(a)). Instead,
the application process forks a child upon long responses
Figure 3: Breakpoint Debugging with Ptrace (timeline of application, operating system, and debugger: the debugger reads/writes data of the stopped application and continues it; a trap or signal unblocks the waiting debugger when the application stops again)
(see Figure 4(b)). The child receives the responsibility to
fill the IPC buffers before terminating while the debugger
can receive the IPC packets in parallel. 4
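The forking idea can be sketched as follows in C, reusing the hypothetical channel functions from the sketch above; the packet list, the busy-wait handshake, and the helper names are assumptions made for illustration only.

#include <sys/types.h>
#include <unistd.h>

struct tdi_channel;                                       /* from the sketch above */
extern void tdi_channel_write(struct tdi_channel *ch, const char *msg);
extern int  tdi_channel_drained(struct tdi_channel *ch);  /* previous packet read? */

/* Stream a long response packet by packet from a child process, so the
   debugger can consume the packets in parallel with their production. */
static void tdi_send_response(struct tdi_channel *ch,
                              char **packets, int npackets)
{
  pid_t pid = fork();
  if (pid == 0) {                       /* child: fill the buffers            */
    for (int i = 0; i < npackets; i++) {
      while (!tdi_channel_drained(ch))  /* wait until the previous packet     */
        ;                               /* has been read (busy wait here)     */
      tdi_channel_write(ch, packets[i]);
    }
    _exit(0);
  }
  /* parent: the application-side TDI returns immediately; the child is
     reaped later, once the debugger has read the last packet.              */
}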
6. DEBUGGER EXTENSIONS
The debugger was extended in two respects. First, the IPC
interface to the TDI was added. Second, new user commands
to control the debugging process were included and
their resolution was handed off to the TDI. The IPC extensions
are bundled in one module, the TDI client, and can
be bound with the debugger during the build process. The
TDI client handles the client side of the IPC communica-
tion. The debugger may invoke send and receive functions
of the TDI client to send a query and receive the response,
in both cases as a string. After sending a query the application
is continued (ptrace call) within the TDI-Server,
the main function on the application side of the TDI. The
TDI server evaluates the query, hands it off to the parser
of the query language, which may update the state of the
database using the TED interface. Once the response has
been formatted, it is returned using IPC and the debugger
can act upon the result. The second extension of the debugger
defines a number of new user commands and their
actions. This additional functionality is detailed in Section
8.
7. IMPLEMENTATION
The implementation comprises changes to the debugger and
the threads implementation. The Gnu debugger GDB 4.18
was chosen for this purpose since the sources are available,
it is widely used and actively maintained [19]. The chosen
thread implementations range from kernel threads (Linux-
Threads) [11] over mixed threads (Solaris) [16] to user-level
threads (FSU and MIT Pthreads) [14, 17].
One of the challenges of active debugging is posed by the
interaction between the activation of debugging operations
4 Even on a uniprocessor, the child process and the debugger
may run concurrently and do not require context switches
for each packet anymore.
within the application and the regular execution of the application
itself. The TDI server may be a separate thread
for kernel threads while the server may simply be invoked in
the context of the active thread for user-level threads. But
this approach may result in scheduling actions due to
1. a skew of the consumed execution time of the TDI,
2. event notification, or
3. calls to library functions that use synchronization.
When round-robin scheduling is active, additional execution
time consumed by the TDI server may cause a context
switch of the current thread. If the switch occurs before
the TDI server finishes, the results obtained for debugging
may be inconsistent. One part of the results may originate
before the context switch and another part after the
switch, subject to a modified thread state, since application
threads had been active meanwhile. This problem will not
only occur upon timer expiration but may also be caused
by other signals. Context switches may also be caused by
synchronization, in particular when the TDI server calls a
library function whose entry is protected by a mutex. The
mutex may already be locked by a thread in the application
resulting in a context switch from the TDI server to the application
thread. Even worse, a deadlock may occur if the
application thread is the same thread that executes the TDI
server.
First, the problem of calling library functions that contain
synchronization was addressed by providing TDI-specific
replacements for heap allocation and string manipulation.
Other functions used by the TDI server do not contain potentially
blocking library calls.
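A replacement of this kind can be as simple as a bump allocator over a static arena, sketched below; the arena size and the names are illustrative, the point being that no mutex-protected C library allocator is ever entered from within the TDI server.

#include <stddef.h>

#define TDI_ARENA_SIZE (64 * 1024)

static char   tdi_arena[TDI_ARENA_SIZE];
static size_t tdi_arena_top;

/* Heap-allocation replacement used only by the TDI server: it never
   blocks and never calls the (mutex-protected) C library allocator. */
void *tdi_malloc(size_t n)
{
  n = (n + sizeof(long) - 1) & ~(sizeof(long) - 1);   /* align the request */
  if (tdi_arena_top + n > TDI_ARENA_SIZE)
    return NULL;
  void *p = tdi_arena + tdi_arena_top;
  tdi_arena_top += n;
  return p;
}

/* The whole arena is released at once after a query has been answered. */
void tdi_malloc_reset(void) { tdi_arena_top = 0; }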
Second, the problem of signal handling during TDI activation
shall be discussed. One solution would be to mask
signals in the application for a limited time. But masking
could only be accomplished by the application itself, which
causes a race: A signal may arrive while the TDI tries to
mask signals so that the TDI would lose control and other
threads may be scheduled. The race can be avoided if the
debugger forced the masking of signals for the application
but most operating systems only provide such an interface
for the current process and not for another process. Instead
of an operating system interface, the thread implementation
was enhanced to provide a flag that, when set, collects signals
for later handling as depicted in Figure 5. The debugger
uses the ptrace call to set the flag in the application (1).
Incoming signals are collected but their handling is postponed
during TDI activation. Once the TDI activities are
complete, the debugger reads the collected signals (2), clears
the flag and the collected signals, and sends each signal to the
application (3). 5
The implementation of the TDI server contains a communication
subsystem, a query parser and a query evaluator.
5 An alternative to re-issuing the signals would be to add
them to the pending signals of the thread implementation
and force a check on pending signals when resuming the ap-
plication, which would have the advantage that signal contexts
were preserved. Future work may include such a provision.
The communication structure was implemented via shared
memory IPC between processes. The performance was evaluated
by comparing the IPC variant using a page size of
8kB with a ptrace implementation (which transfers one word at a time),
both under Linux 2.0.36 on a 150MHz Pentium with FSU
Pthreads. Figure 6 shows that the response time for ptrace
is five times higher than that for IPC. The results
underline the advantages of the IPC approach for the
TDI communication.
The query parser was generated from lexical and syntactical
specifications by the generators Flex and Bison, respectively.
The parser reports errors for illegal queries or transforms
legal ones into a tuple representation, which is then fed to
the query evaluator. The evaluator may optimize the query,
invoke TED functions to resolve the query and compile a
response. Examples for a query may be as follows:
thread: id, entry, state : state    (1)
thread: id, prio=10, state : (prio+10<20) && cvwf != 0x10    (2)
Query (1) requests the identifier, state and function of all
threads that are running or not blocked on a mutex. Query
(2) requests the same information (except for the function)
for threads whose priority plus 10 is less than 20 and who
are not blocked on a condition variable (second conjunct).
Figure 5: Signal Handling during Active Debugging (the debugger uses ptrace(POKE/PEEK) on pthread_debug_TDI_sig_ignore and pthread_debug_TDI_ignored_signals in the POSIX Threads implementation and re-issues recorded signals, e.g. via kill(pid, SIGALRM))

Figure 6: Response Times: IPC vs. Ptrace (response time plotted over the number of instantiated threads)
Query (2) also sets the priority of the selected thread to 10.
A response to the debugger may be as follows:
The debugger interprets each of the three tuples as (identifier,
function address, state) and translates addresses and
states into symbolic names for its output.
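A sketch of the debugger side of such an exchange is given below: the query string is passed to the TDI client, and the reply is split into (identifier, address, state) tuples. The functions tdi_send and tdi_receive as well as the textual form of the reply shown in the comment are assumptions made for illustration.

#include <stdio.h>
#include <string.h>

/* Hypothetical TDI-client entry points (queries and replies are strings). */
extern void  tdi_send(const char *query);
extern char *tdi_receive(void);

static void info_pthreads_command(void)
{
  tdi_send("thread: id, entry, state : state");
  char *reply = tdi_receive();        /* e.g. "(1,0x804a1c0,1)(2,0x804a2f0,4)" */

  for (char *tup = strtok(reply, "()"); tup != NULL; tup = strtok(NULL, "()")) {
    long id, state;
    unsigned long entry;
    if (sscanf(tup, "%ld,%lx,%ld", &id, &entry, &state) == 3)
      printf("thread %ld  entry=%#lx  state=%ld\n", id, entry, state);
  }
}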
The TDI server is combined with the application by dynamically
linking the application with the TDI server relocatable
library. This has the advantage that an application
compiled for testing and debugging only has to be linked
with the dynamic linker library using -ldl. The TDI will
not be invoked or even linked when the application is executed
outside the debugger. Once the debugger is invoked,
it checks if the threads of the application contain a symbol
to indicate debugging support for threads. If the
flag is found, it will be set by the debugger and results in
dynamic binding and invocation of the TDI server library
during the initialization phase. The debugger then sends a
pthread TDI register message to the TDI server. The TDI
presents the TED with the set of functions it expects, and
the TED responds with the registration of attribute and iteration
functions. Afterwards, the TDI may resolve queries
by referencing thread objects through TED functions.
Thread-specific breakpoint debugging also requires changes
to the debugger. In GDB, the routine proceed, calling normal
stop, wait for inferior and resume, controls the trap handling
for breakpoints. A command to resume execution activates
the application. The debugger then waits for the inferior
process (application) to hit a trap instruction. When
the trap is hit, the debugger resumes control and cleans
up its traces from the application's code in normal stop.
Thread-specific breakpoints modify this sequence by checking,
upon resumption of the debugger after wait, whether the breakpoint
reached corresponds to the requested thread. If the
thread identifiers match, normal stop is called. Otherwise,
the breakpoint is reset similarly to the cleanup performed in
normal stop and resume is called again. The cleanup is depicted
in Figure 7. When the application traps, the inserted
trap instruction has to be replaced by the original instruction
A of the application to ensure the correct semantics.
Now, B has been replaced by a trap to transfer control to
the debugger for resuming execution (2). Then, the trap is
replaced by B while A is replaced by a trap to make sure that
the application halts at the same breakpoint again the next
time around. The execution in step (2) may, however, cause
a scheduling action if a signal was received. This is prevented
by disabling signals during step (2) using the facilities discussed
before. Finally, the thread identifier of the active
thread has to be determined at a breakpoint. For user-level
threads, it suffices to search for the single running thread using
a TDI query. For kernel threads, multiple threads may
be running (on a multi-processor) but the system call wait
returns information about the process that caused a trap.
For mixed threads, the low-level scheduling entity has to be
identified and it has to be ensured that parallel execution
of a trap by different threads of one application results in
serial notification of the debugger (on the operating system
level). For example, Solaris maps POSIX threads onto light-weight
processes (LWPs) whose status information would
have to be checked upon encountering a trap. This requires
that the LWP and its state for a POSIX thread is deter-
mined, for example, through the /proc file system. Single
step commands, such as ptstep, ptnext (see next section),
use a similar technique to only count steps executed by the
current thread. Attaching and detaching threads also has
a similar effect but includes thread-specific breakpoints for
the program counter of the target thread. A breakpoint on
the next context switch is realized by setting conditional
breakpoints at the program counter of all threads except for
the one that has trapped. Once such a breakpoint is hit, all
other conditional breakpoints set before have to be deleted.
Forcing a change in the scheduling pattern results in a TDI
query that sets thread attributes and invokes a TED function
to affect the scheduler of the thread implementation. A
forced suspension also requires that the debugger signal the
application. This ensures that the scheduler is invoked to
dispatch the next thread eligible to run. This can also be
achieved by adding a scheduler signal to the set of collected
signals during signal masking, as discussed before.
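The modified breakpoint sequence can be summarized by the following simplified C sketch; the helper functions stand in for the GDB routines mentioned above (resume, wait for inferior, normal stop) and for a TDI query that yields the identifier of the trapping thread, so this is a schematic outline rather than the actual GDB code.

/* Stand-ins for GDB internals and a TDI query. */
extern void resume(void);
extern void wait_for_inferior(void);      /* returns when a trap was hit      */
extern void normal_stop(void);            /* report the stop to the user      */
extern void reset_breakpoint(void);       /* swap trap and original insn back */
extern long current_thread_id(void);      /* determined via a TDI query       */

void proceed_thread_specific(long wanted_thread)
{
  for (;;) {
    resume();
    wait_for_inferior();
    if (current_thread_id() == wanted_thread) {
      normal_stop();
      return;
    }
    /* Wrong thread: re-arm the breakpoint (with signals masked, as
       described above) and silently continue the application.        */
    reset_breakpoint();
  }
}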
8. THREAD-AWARE DEBUGGING
This section describes commands that have been added or
modied to make GDB thread aware and provide extended
debugging support for multi-threaded applications. Notice
that GDB already provides limited debugging support for
selected thread implementations. The new commands for
debugging threads have been chosen to coexist with the existing
functionality. For example, info threads may already
list the threads for Solaris, Mach and LinuxThreads. The
new command info pthreads lists threads for any application
supporting the TDI/TED facilities and includes extensive
information about the state of each thread, the object
it may be blocked on, priorities etc. Both commands are
available at the same time.
Figure 7: Resetting a Breakpoint (code-segment snapshots showing how the trap instruction and the original instructions A and B are swapped when a breakpoint is hit, reset, and execution is resumed)
info pthreads lists the set of threads that have not
terminated yet including the attributes for threads depicted
in Table 2.
info pmutex lists the set of initialized mutexes with
the attributes of Table 2.
info pcond lists the set of initialized condition variables
with the attributes of Table 2.
break pthread <Thread ID> <location> sets a
thread-specific breakpoint for <Thread ID> at <location>.
ptattach <Thread ID> stops <Thread ID> the
next time it is scheduled at the first possible location
and transfers control to the debugger. Once issued, all
subsequent breakpoint commands (break, next, step)
are thread-specific. This means that these breakpoints
only apply to the attached thread while other active
threads will not stop at these breakpoints.
ptdetach reverses a ptattach and makes breakpoints
applicable to all threads again.
ptstack <Thread ID> prints the call stack of
<Thread ID>.
continue -cs continues the execution until a breakpoint
is hit or a context switch occurs, whichever
comes first. In the latter case, the identifier of the
new thread is printed.
ptnext/ptnexti/ptstep/ptstepi [n] issues n next or
step instructions for the current thread and ignores
other instructions executed by concurrently running
threads.
These facilities go beyond traditional debugging support for
threads in the following sense. The cause of blocking of
threads and the blocking object can be identified. This may
allow the user to identify deadlocks when circular dependencies
between synchronization objects and threads are depicted. 6
It provides the user with the call stack of threads,
i.e., the user can follow the progress of concurrent executions.
Thread-specific breakpoint debugging simplifies the
user's task of tracing the execution of selected threads. Interactions
with other threads can be detected by notification
upon context switches. Finally, scheduling actions forced by
the user allow selected activation and suspension to disable
or force thread interactions, test their impacts and possibly
track down problems between these interactions.
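As a usage illustration, consider the following small (hypothetical) Pthreads program with a classical lock-ordering deadlock. Under the extended debugger, info pthreads would show both threads blocked on a mutex, info pmutex would show who owns a and b, and ptstack would reveal where each thread is stuck, which together expose the circular dependency.

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)          /* locks a, then b */
{
  pthread_mutex_lock(&a);
  sleep(1);
  pthread_mutex_lock(&b);
  return arg;
}

static void *t2(void *arg)          /* locks b, then a */
{
  pthread_mutex_lock(&b);
  sleep(1);
  pthread_mutex_lock(&a);
  return arg;
}

int main(void)
{
  pthread_t x, y;
  pthread_create(&x, NULL, t1, NULL);
  pthread_create(&y, NULL, t2, NULL);
  pthread_join(x, NULL);            /* never returns: the threads deadlock */
  pthread_join(y, NULL);
  return 0;
}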
We also assessed the overhead of the active debugging support
through TDI and TED. On the application side, overhead
may be incurred by the TED. Two Splash-2 benchmarks
[24] were measured (processes emulated by FSU
Pthreads) on a Pentium II 350 MHz under Linux 2.2.14, as
depicted in Table 3. Fft performed calculations for 2^20 data
6 It may seem that automatic deadlock detection could be
easily incorporated. This is true in the sense that the state
of threads may not change in the absence of signals. The
signal semantics of Pthreads does not provide such a stable
state since signals may interrupt a synchronization request
and then skip out of the synchronization call from the signal
handler. For this reason, automatic deadlock detection has
not been implemented.
Table 3: Performance Overhead of Active Debugging (fft and barnes benchmarks)
points. The numbers for Fft exclude initialization. Barnes
used the standard parameters (except for 5 leaves), and
the numbers reported represent the computation time, only.
During the experiments, the number of processes (threads)
was varied between 2 and 128 but this had no eect on the
measurements. The overhead represents the portion of the
second measurements that were due to active debugging.
This overhead depends on the characteristics of the application,
e.g., fft uses less synchronization than barnes, which
explains the lower overhead of the former. On the debugger
side, the overhead of the TDI and of queries is not noticeable
to the user, i.e., the response time of TDI queries equals
that of any other debugger interaction. However, if a large
database is gradually built (thousands of threads, mutexes,
etc.) then the response time of queries may be affected since
all entries may be probed. We did not experience this problem
in practice. Hence, relational queries seem suitable for
active debugging.
9. RELATED WORK
McDowell and Helmbold present an overview of the problems
and solutions for debugging concurrent programs [13].
Caswell and Black [5] describe a debugger for Mach threads
with thread-specific breakpoints and forced scheduling ac-
tions. The approach is limited to kernel threads, uses a
non-standard ptrace interface and is not as portable as the
approach described in this paper. Ponamgi et al. [15] continue
their work with this debugger by adding event handling
to detect deadlocks, livelocks and multiple entry to
critical sections. Similar support could be added to our work
at the level of the TDI but is subject to the constraints described
before, i.e., certain thread standards may not allow
deadlock detection due to signal handling. SmartGDB uses
the non-generic design of Figure 1(a) where for each thread
implementation the debugger has to be modified. Changes
in a thread implementation may, in turn, require changes
in the debugger. GDB 4.18 [19] requires even more modifications
than SmartGDB for each thread implementation.
SmartGDB and GDB 4.18 support only a subset of our functionality
for debugging threads and still use the slow ptrace
call for communication. Solaris utilizes the /proc file system
to efficiently access internal structures of the application,
which is an alternative to the communication used by TDI
in terms of efficiency but neither provides portability for systems
without a /proc file system nor allows a generic
encapsulation as seen in Figure 1(b) with its localization of
implementation-dependent extensions. Solaris also provides
a library for debugging threads with similar functionality
as the TED interface but lacks the flexibility of the TDI,
which makes our approach portable. Wismuller et al. [23]
describe a tool set for debugging parallel programs consisting
of a debugger (Partop) and a monitoring tool. Partop
supports thread-aware debugging and uses the event-action
paradigm that executes a certain action when an event oc-
curs, e.g., when a thread is created. A generalization of the
event-action paradigm is provided by path expressions and
path actions in the context of debugging [3]. In this work,
path expressions were proposed at the user level, which few
debuggers support today. Our work, on one hand, uses
similar concepts under the paradigm of active debugging.
On the other hand, these concepts are utilized for internal
purposes rather than at the user level, i.e., as a means of
communication between debugger components in a portable
fashion. The high performance debugging forum (HPDF)
specified a command interface for parallel debuggers [9] including
thread-aware debugging. Our work does not specify
a user interface but rather an interface to a thread library
that may be utilized by a debugger. We also implement the
thread-aware functionality of the HPDF interface and even
go beyond these requirements, e.g., by supplying additional
functionality to display synchronization data or execute up
to the next breakpoint. Cownie and Gropp [6] propose a
debugger interface to display messages within MPI implementations
and demonstrate this work in TotalView. Independently
from our work, they also conclude that dynamic
linking of shared libraries represents the most flexible way
to provide debugging facilities for multiple runtime libraries.
Panorama [12] is a parallel debugger for MIMD architectures
that relies on text-based debuggers to collect information
and visualize it in a predefined or a user-defined representation.
Panorama's portability is given by its reliance on text-based
debuggers at a lower level. Our work differs in that
we provide such a text-based debugger extension that may
be used by a visualization debugger like Panorama. Kessler
[10] introduced fast breakpoints, which can be regarded as
a variant on active debugging. Conditional breakpoints are
realized by replacing the trap with a call to a debugging-specific
handler in the application that checks the condition
of the breakpoint and only traps if it evaluates to true,
thereby improving performance. We use active debugging
to resolve relational queries. KDB [4] supports two-level
debugging of user and kernel threads with a unique design.
Each kernel thread is controlled separately by a local debugger
using the ptrace interface and the /proc file system. A
main debugger interacts with the user and steers all local
debuggers. Our work differs in that we require a TED interface
for each thread level. Snodgrass [18] utilizes relational
queries for monitoring by extending databases to keep histories
of traces and providing a temporal operator to query
these histories. Our relational queries involve on-line debuggers
accessing computing states without histories rather
than monitoring data. Overall, none of these tools use active
debugging or relational queries for debugging threads nor do
they support as much functionality as the work presented in
this paper combined with portability at the same time.
10. CONCLUSION
This paper proposes an open interface for debugging as an
extension to thread implementations. In addition, extensions
for thread-aware debugging are identified and implemented
within the Gnu Debugger to provide additional features
beyond the scope of existing debuggers. The work is
based on the paradigm of active debugging that includes
a language-independent protocol to communicate between
debugger and application via relational queries to ensure
that the enhancements of the debugger are independent of
actual thread implementations. Partial or complete implementations
of the interface for debugging can be added to
thread implementations to work in unison with the enhanced
debugger without any modifications to the debugger itself.
Sample implementations of the interface for debugging have
shown its adequacy for user-level threads, kernel threads and
mixed thread implementations while providing extended debugging
functionality at improved efficiency and portability
at the same time.
Availability
The modified debugger GDB-TDI (sources and binaries)
and its documentation are available at
http://www.informatik.hu-berlin.de/mueller/TDI under
the Gnu Public License.
11. REFERENCES
--R
Generalized path expressions: A high level debugging mechanism.
KDB: A multi-threaded debugger for multi-threaded applications
Implementing a Mach debugger for multithreaded applications.
A standard interface for debugger access to message queue information in MPI.
Testing large
Beyond multiprocessing
Command interface for parallel debuggers.
Fast breakpoints.
The linuxthreads library.
Retargetability and extensibility in a parallel debugger.
Debugging concurrent programs.
A library implementation of POSIX threads under UNIX.
Debugging multithreaded programs with MPD.
SunOS multi-thread architecture
Mit pthreads.
A relational approach to monitoring complex systems.
GDB manual (the GNU source-level debugger)
Implementing lightweight threads.
Technical Committee on Operating Systems and Application Environments of the IEEE.
MACH threads and the UNIX kernel: The battle for control.
The SPLASH-2 programs: Characterization and methodological considerations
--TR
A relational approach to monitoring complex systems
Debugging concurrent programs
Fast breakpoints: design and implementation
The SPLASH-2 programs
Retargetability and extensibility in a parallel debugger
Interactive debugging and performance analysis of massively parallel applications
KDB
Debugging Multithreaded Programs with MPD
A Standard Interface for Debugger Access to Message Queue Information in MPI
Generalized path expressions
--CTR
Jaydeep Marathe , Frank Mueller , Tushar Mohan , Bronis R. de Supinski , Sally A. McKee , Andy Yoo, METRIC: tracking down inefficiencies in the memory hierarchy via binary rewriting, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California | concurrency;open interface;debugging;active debugging;threads |
349154 | Upper and Lower Bounds on the Learning Curve for Gaussian Processes. | In this paper we introduce and illustrate non-trivial upper and lower bounds on the learning curves for one-dimensional Gaussian Processes. The analysis is carried out emphasising the effects induced on the bounds by the smoothness of the random process described by the Modified Bessel and the Squared Exponential covariance functions. We present an explanation of the early, linearly-decreasing behavior of the learning curves and the bounds as well as a study of the asymptotic behavior of the curves. The effects of the noise level and the lengthscale on the tightness of the bounds are also discussed. | Introduction
A fundamental problem for systems learning from examples is to estimate
the amount of training samples needed to guarantee satisfactory generalisation
capabilities on new data. This is of theoretical interest but also of vital
practical importance; for example, algorithms which learn from data should
not be used in safety-critical systems until a reasonable understanding of
their generalisation capabilities has been obtained. In recent years several
authors have carried out analysis on this issue and the results presented
depend on the theoretical formalisation of the learning problem.
Approaches to the analysis of generalisation include those based on
asymptotic expansions around optimal parameter values (e.g. AIC (Akaike,
1974), NIC (Murata et al., 1994)); the Probably Approximately Correct (PAC) and
uniform convergence approaches (e.g.
Vapnik, 1995); and Bayesian methods.
The PAC and uniform convergence methods are concerned with frequentist-
style confidence intervals derived from randomness introduced with respect
to the distribution of inputs and noise on the target function. A central
concern in these results is to identify the flexibility of the hypothesis class F
to which approximating functions belong, for example, through the Vapnik-Chervonenkis
dimension of F . Note that these bounds are independent
of the input and noise densities, assuming only that the training and test
samples are drawn from the same distribution.
The problem of understanding the generalisation capability of systems
can also be addressed in a Bayesian framework, where the fundamental
assumption concerns the kinds of function our system is required to model.
In other words, from a Bayesian perspective we need to put priors over
target functions. In this context learning curves and their bounds can be
analysed by an average over the probability distribution of the functions. In
this paper we use Gaussian priors over functions which have the advantage
of being more general than simple linear regression priors, but they are
more analytically tractable than priors over functions obtained from neural
networks.
Neal (1996) has shown that for fixed hyperparameters, a large class of
neural network models will converge to Gaussian process priors over functions
in the limit of an infinite number of hidden units. The hyperparameters
of the Bayesian neural network define the parameters of the corresponding
Gaussian Process (GP). Williams (1997) calculated the covariance functions
of GPs corresponding to neural networks with certain weight priors
and transfer functions.
The investigation of GP predictors is motivated by the results of Rasmussen
(1996), who compared the performances obtained by GPs to those obtained
by Bayesian neural networks on a range of tasks. He concluded that GPs
were at least as good as neural networks. Although the present study deals
with regression problems, GPs have also been applied to classification problems
(e.g. Barber and Williams, 1997).
In this paper we are mainly concerned with the analysis of upper and
lower bounds on the learning curve of GPs. A plot of the expected generalisation
error against the number of training samples n is known as a learning
curve. There are many results available concerning learning curves under different
theoretical scenarios. However, many of these are concerned with the
asymptotic behaviour of these curves, which is not usually of great practical
importance as it is unlikely that we will have enough data to reach the
asymptotic regime. Our main goal is to explain some of the early behaviour
of learning curves for Gaussian processes.
The structure of the paper is as follows. GPs for regression problems
are introduced in Section 2. As will be shown, the whole theory of GPs is
based on the choice of the prior covariance function $C_p(x, x')$; in Section
3 we present the covariance functions we have been using in this study.
In Section 4 the learning curve of a GP is introduced. We present some
properties of the learning curve of GPs as well as some problems that may arise
in evaluating it. Upper and lower bounds on the learning curve of a GP in a
non-asymptotic regime are presented in Section 5. These bounds have been
derived from two different approaches: one makes use of main properties of
the generalisation error, whereas the other is derived from an eigenfunction
decomposition of the covariance function. The asymptotic behaviour of the
upper bounds is also discussed.
A set of experiments have been run in order to assess the upper and lower
bounds of the learning curve. In Section 6 we present the results obtained
and investigate the link between tightness of the bounds and the smoothness
of the stochastic process modelled by a GP. A summary of the results and
some open questions are presented in the last Section.
2 Gaussian Processes
A collection of random variables $\{Y(x) \mid x \in \mathcal{X}\}$ indexed by a set $\mathcal{X}$ defines a
stochastic process. In general the domain $\mathcal{X}$ might be $\mathbb{R}^d$ for some dimension
d although it could be even more general. A joint distribution characterising
the statistics of the random variables gives a complete description of the
stochastic process.
A GP is a stochastic process whose joint distribution is Gaussian; it is
fully defined by giving a Gaussian prior distribution for every finite subset
of variables.
In the following we concentrate on the regression problem, assuming that
the value of the target function $t(x)$ is generated from an underlying function
$y(x)$ corrupted by Gaussian noise with mean 0 and variance $\sigma^2_\nu$. Given a
collection of $n$ training data $D_n = \{(x_i, t_i),\, i = 1, \ldots, n\}$ (where each $t_i$ is the
observed output value at the input point $x_i$), we would like to determine
the posterior probability distribution $p(y \mid x, D_n)$.
In order to set up a statistical model of the stochastic process, the set
of $n$ random variables $y = (y_1, \ldots, y_n)^T$, modelling the function values
at $x_1, \ldots, x_n$ respectively, is introduced. Similarly $t$ is the collection of
target values $(t_1, \ldots, t_n)^T$, and $X_n = \{x_1, \ldots, x_n\}$ denotes the set of training inputs.
We also denote with $\tilde{y}$ the vector whose components are
$y$ and the test value $y$ at the point $x$. The distribution $p(\tilde{y} \mid x, D_n)$ can
be inferred using Bayes' theorem. In order to do so, we need to specify a
prior over functions as well as evaluate the likelihood of the model and the
evidence for the data.
A choice for a prior distribution of the stochastic vector $\tilde{y}$ is a Gaussian
prior distribution:
$$p(\tilde{y}) \propto \exp\left(-\tfrac{1}{2}\,\tilde{y}^T \Sigma^{-1} \tilde{y}\right).$$
This is a prior as it describes the distribution of the true underlying values
without any reference to the target values $t$. The covariance matrix $\Sigma$ can
be partitioned as
$$\Sigma = \begin{pmatrix} K_p & k(x) \\ k^T(x) & C_p(x,x) \end{pmatrix}.$$
The element $(K_p)_{ij}$ is the covariance between the $i$-th and the $j$-th training
points, i.e. $(K_p)_{ij} = C_p(x_i, x_j) = E[y_i\, y_j]$. The components
$k_i(x)$ of the vector $k(x)$ are the covariances of the test point with all the
training data, $k_i(x) = C_p(x_i, x)$; $C_p(x,x)$ is the covariance of the test
point with itself.
A GP is fully specified by its mean $E[y(x)] = \mu(x)$ and its covariance function
$C_p(x, x') = E[(y(x) - \mu(x))(y(x') - \mu(x'))]$. Below we set $\mu(x) = 0$;
this is a valid assumption provided that any known offset or trend in the
data has been removed. We can also deal with $\mu(x) \neq 0$, but this introduces
some extra notational complexity. A discussion about the possible choices
of the covariance function C p (x; x 0 ) is given in Section 3. For the moment
we note that the covariance function is assumed to depend upon the input
variables (x; x 0 ). Thus the correlation between function values depends
upon the spatial position of the input vectors; usually this will be chosen so
that the closer the input vectors, the higher the correlation of the function
values.
The likelihood relates the underlying values of the function to the target
data. Assuming Gaussian noise corrupting the data, we can write the
likelihood as
$$p(t \mid y) \propto \exp\left(-\tfrac{1}{2}\,(t - y)^T \Omega^{-1} (t - y)\right),$$
where $\Omega = \sigma^2_\nu I$. The likelihood refers to the stochastic variables representing
the data; so $t, y \in \mathbb{R}^n$ and $\Omega$ is an $n \times n$ matrix.
Given the prior distribution over the values of the function $p(\tilde{y})$,
Bayes' rule specifies the distribution $p(\tilde{y} \mid x, D_n)$ in terms of the likelihood
of the model $p(t \mid y)$ and the evidence of the data $p(D_n)$ as
$$p(\tilde{y} \mid x, D_n) = \frac{p(t \mid y)\, p(\tilde{y})}{p(D_n)}.$$
Given such assumptions, it is a standard result (e.g. Whittle, 1963) to derive
the analytic form of the predictive distribution marginalising over y. The
predictive distribution turns out to be $y(x) \sim \mathcal{N}\left(\hat{y}(x), \sigma^2_{\hat{y}}(x)\right)$, where the
mean and the variance of the Gaussian function are
$$\hat{y}(x) = k^T(x)\, K^{-1} t, \qquad (1)$$
$$\sigma^2_{\hat{y}}(x) = C_p(x, x) - k^T(x)\, K^{-1} k(x). \qquad (2)$$
The most probable value $\hat{y}(x)$ is regarded as the prediction of the GP on the
test point $x$; $K$ is the covariance matrix of the targets $t$: $K = K_p + \sigma^2_\nu I$. The
estimate of the variance $\sigma^2_{\hat{y}}(x)$ of the posterior distribution is considered
as the error bar of $\hat{y}(x)$. In the following, we always omit the subscript $\hat{y}$ in
$\sigma^2_{\hat{y}}(x)$, taking it as understood. Since the estimate 1 is a linear combination
of the training targets, GPs are regarded as linear smoothers (Hastie and
Tibshirani, 1990).
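For concreteness, Equations 1 and 2 amount to a few lines of linear algebra. The sketch below (Python/NumPy) is a minimal illustration rather than the implementation used for the experiments in this paper; the name gp_predict and the generic callable cov standing for the prior covariance $C_p$ are hypothetical, and cov is assumed to broadcast over arrays.

```python
import numpy as np

def gp_predict(x_train, t_train, x_test, cov, noise_var):
    """Zero-mean GP posterior mean (Equation 1) and variance (Equation 2).

    cov(a, b) returns the prior covariance C_p(a, b); noise_var is the noise
    variance added to the diagonal, so that K = K_p + noise_var * I.
    """
    x_train = np.asarray(x_train, dtype=float)
    K = cov(x_train[:, None], x_train[None, :]) + noise_var * np.eye(len(x_train))
    alpha = np.linalg.solve(K, np.asarray(t_train, dtype=float))   # K^{-1} t
    means, variances = [], []
    for x in np.atleast_1d(x_test):
        k = cov(x_train, x)                       # covariances with the training inputs
        means.append(k @ alpha)                   # y_hat(x) = k^T K^{-1} t
        variances.append(cov(x, x) - k @ np.linalg.solve(K, k))
    return np.array(means), np.array(variances)
```

The returned variance is the error bar $\sigma^2(x)$ of Equation 2; adding the noise variance to it gives the Bayesian generalisation error discussed in Section 4.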
3 Covariance functions
The choice of the covariance function is a crucial one. The properties of
two GPs, which differ only in the choice of the covariance function, can be
remarkably diverse. This is due to the role of the covariance function, which
has to incorporate in the statistical model the prior belief about the underlying
function. In other words the covariance function is the analytical
expression of the prior knowledge about the function being modelled. A misspecified
covariance function affects the model inference as it has influence
on the evaluation of Equations 1 and 2.
Formally every function which produces a symmetric, positive semi-definite
covariance matrix K for any set of the input space X can be chosen
as covariance function. From an applicative point of view we are interested
only in functions which contain information about the structure of the underlying
process being modelled.
The choice of the covariance function is linked to the a priori knowledge
about the smoothness of the function y (x) through the connection between
the differentiability of the covariance function and the mean-square differentiability
of the process. The relation between smoothness of a process
and its covariance function is given by the following theorem (see e.g. Adler,
1981): if $\partial^2 C_p(x, x') / \partial x_i\, \partial x'_i$ exists and is finite at $(x, x)$, then the stochastic
process $y(x)$ is mean square differentiable in the $i$-th Cartesian direction at
x. This theorem is relevant as it links the differentiability properties of the
covariance function with the smoothness of the random process and justifies
the choice of a covariance function depending upon the prior belief about
the degree of smoothness of y (x).
In this work we are mainly concerned with stationary covariance functions.
A stationary covariance function is translation invariant (i.e. $C_p(x, x') = C_p(x - x')$) and here
depends only upon the distance between two data points.
In the following, the covariance functions we have been using are presented.
In order to simplify the notation, we consider the case $d = 1$.
The stationary covariance function squared exponential (SE) is defined as
$$C_p(h) = \exp\left(-\frac{h^2}{2\lambda^2}\right),$$
where $\lambda$ is the lengthscale of the process. The parameter $\lambda$ defines the
characteristic length of the process, estimating the distance in the input
space in which the function $y(x)$ is expected to vary significantly. A large
value of $\lambda$ indicates that the function is almost constant over the input space,
whereas a small value of the lengthscale designates a function which varies
rapidly. The graph of this covariance function is shown by the continuous
line in Figure 1. As the SE function has infinitely many derivatives it gives
rise to smooth random processes ($y(x)$ possesses mean-square differentiability
up to order $\infty$).
It is possible to tune the differentiability of a process, introducing the
modified Bessel covariance function of order $k$ (MB$_k$). It is defined, up to a normalising constant,
as $C_p(h) \propto (h/\lambda)^{\nu} K_\nu(h/\lambda)$, which for half-integer $\nu$ reduces to a polynomial in $h/\lambda$ with coefficients $a_i$ multiplied by
$\exp(-h/\lambda)$,
where $K_\nu(\cdot)$ is the modified Bessel function of order $\nu$ (see e.g. Equation
8.468 in Gradshteyn and Ryzhik, 1993), with $\nu = k - \tfrac{1}{2}$.
Below we set the normalising constant such that $C_p(0) = 1$. The factors $a_i$ are
constants depending on the order $\nu$ of the Bessel function. Matérn (1980)
shows that the functions MB$_k$ define a proper covariance. Stein (1989) also
noted that the process with covariance function MB$_k$ is $(k-1)$-times mean-square
differentiable.
In this study we deal with modified Bessel covariance functions of orders $k = 1, 2$ and 3.
We note that MB 1 corresponds to the Ornstein-Uhlenbeck covariance function
which describes a process which is not mean square differentiable.
If $k \to \infty$, the MB$_k$ behaves like the SE covariance function; this can be
easily shown by considering the power spectra of MB$_k$ and SE, which are
$S_{mb_k}(\omega) \propto (1 + \lambda^2\omega^2)^{-k}$ and $S_{se}(\omega) \propto \lambda \exp(-\lambda^2\omega^2/2)$.
Since
$$\lim_{k \to \infty}\left(1 + \frac{x}{k}\right)^{-k} = e^{-x},$$
the MB$_k$ behaves like SE for large $k$, provided that $\lambda$ is rescaled accordingly.
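As an illustration of the two extremes of this family, the Ornstein-Uhlenbeck (MB$_1$) and squared exponential covariances can be coded directly. The parametrisation below (unit variance, lengthscale $\lambda$) is one common convention and may differ from the exact normalisation used in this paper; the sampling helper mirrors the construction behind Figure 2.

```python
import numpy as np

def cov_ou(x, xp, lam=1.0):
    """Ornstein-Uhlenbeck (MB_1): rough, not mean-square differentiable."""
    return np.exp(-np.abs(x - xp) / lam)

def cov_se(x, xp, lam=1.0):
    """Squared exponential: mean-square differentiable to all orders."""
    return np.exp(-((x - xp) ** 2) / (2.0 * lam ** 2))

def sample_paths(cov, lam=1.0, n_grid=500, n_samples=3, seed=0):
    """Discretised zero-mean sample functions on [0, 1] (cf. Figure 2)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_grid)
    K = cov(x[:, None], x[None, :], lam) + 1e-10 * np.eye(n_grid)  # jitter for stability
    return x, np.linalg.cholesky(K) @ rng.standard_normal((n_grid, n_samples))
```

Plotting the sampled paths for the two covariances makes the difference in smoothness described above immediately visible.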
Modified Bessel covariance functions are also interesting because they
describe Markov processes of order k. Ihara (1991) defines Y (x) to be a
strict sense Markov process of order k if it is differentiable
at every x 2 R and if P (Y
states that a Gaussian process is a
1 Note that the definition of a Markov process in discrete and continuous time is rather
different. In discrete time, a Markov process of order k depends only on the previous
k times, but in continuous time the dependence is on the derivatives at the last time.
However, function values at previous times clearly allow approximate computation of
Markov process of order k in the strict sense if and only if it is an autoregressive
model of order k (AR(k)) with a power spectrum (in the Fourier
domain) of the form
Y
As the power spectrum of MB k has the same form of the power spectrum of
an AR(k) model, the stochastic process whose covariance function is MB k
is a strict sense k-ple Markov process. This characteristic of the MB k covariance
functions is important as it ultimately affects the evaluation of the
generalisation error (as we shall see in Section 6).
Figure
2 shows the graphs of four (discretised) random functions generated
using the MB k covariance functions (with and the SE func-
tion. We note how the smoothness of the random function specified is dependent
of the choice of the covariance function. In particular, the roughest
function is generated by the Ornstein-Uhlenbeck covariance function (Figure
whereas the smoothest one is produced by the SE (Figure 2(d)). An intermediate
level of regularity characterises the functions of Figures 2(b) and
2(c), corresponding to MB 2 and MB 3 respectively. Note that the number
of zero-level upcrossings in [0; 1] (denoted N u ) is only weakly dependent on
the order of the process. For MB 2 and MB 3 E[N u
derivatives (e.g. via finite differences) and thus one would expect that in the continuous-time
situation the previous k process values will contain most of the information needed
for prediction at the next time. Note that for the Ornstein-Uhlenbeck process Y
depends only on the previous observation Y (t).
and (
respectively (see Papoulis (1991) eqn 16-7 for details). For
the SE process E[N u As the Ornstein-Uhlenbeck process is
non-differentiable, the formula given for E[N u ] cannot be applied in this
case.
4 Learning curve for Gaussian processes
A learning curve of a model is a function which relates the generalisation
error to the amount of training data; it is independent of the test points as
well as the locations of the training data and depends only upon the amount
of data in the training set. The learning curve for a GP is evaluated from
the estimation of the generalisation error averaged over the distribution of
the training and test data.
For regression problems, a measure of the generalisation capabilities of
a GP is the squared difference $E^g_{D_n}(x, t) = (t(x) - \hat{y}(x))^2$ between the target value on a test
point $x$ and the prediction made by using Equation 1.
The Bayesian generalisation error at a point $x$ is defined as the expectation
of $E^g_{D_n}(x, t)$ over the actual distribution of the stochastic process $t$:
$E^g_{D_n}(x) = E_t\left[E^g_{D_n}(x, t)\right]$. Under the assumption that the data set is actually
generated from a GP, it is possible to read Equation 2 as the Bayesian
generalisation error at $x$ given training data $D_n$. To see this, let us consider
the $(n+1)$-dimensional distribution of the target values at $x_1, \ldots, x_n$ and
$x$. This is a zero-mean multivariate Gaussian. The prediction at the test
point $x$ is $\hat{y}(x) = k^T(x) K^{-1} t$, with $K = K_p + \sigma^2_\nu I$. Hence the expected
generalisation error at $x$ is given by
$$E^g_{D_n}(x) = E_t\left[\left(t(x) - k^T(x) K^{-1} t\right)^2\right] = C_p(x, x) + \sigma^2_\nu - k^T(x) K^{-1} k(x), \qquad (5)$$
where we have used $E\left[t(x)\, t\right] = k(x)$ and $E\left[t\, t^T\right] = K$. Equation 5 is identical
to $\sigma^2(x)$ as given in Equation 2 with the addition of the noise variance $\sigma^2_\nu$
(since we are dealing with noisy data). The variance of $E^g_{D_n}(x, t)$
can also be calculated (Vivarelli, 1998).
The covariance matrix pertinent for these calculations is the true prior;
if a GP predictor with a different (incorrect) covariance function is used, the
expression for the generalisation error becomes
c
where the indices c and i denote the correct and incorrect covariance functions
respectively. It can be shown (Vivarelli, 1998) that this is always larger
than Equation 5.
Another property of the generalisation error can be derived from the
following observation: adding more data points never increases the size of
the error bars on prediction, i.e. $\sigma^2_{n+1}(x) \le \sigma^2_n(x)$. This can be proved using
standard results on the conditioning of a multivariate Gaussian (see
Vivarelli, 1998). It can also be understood by the information theoretic
argument that conditioning on additional variables never increases the entropy
of a random variable. Considering $t(x)$ to be the random variable,
we observe that its distribution is Gaussian, with variance independent of
$t$ (although the mean does depend on $t$). The entropy of a Gaussian is $\tfrac{1}{2}\log(2\pi e \sigma^2)$.
As log is monotonic, the assertion is proved. This argument
is an extension of that in (Qazaz et al., 1997), where the inequality
was derived for generalized linear regression.
Since $E^g_{D_n}(x) = \sigma^2_{D_n}(x) + \sigma^2_\nu$, a similar inequality applies also to the Bayesian
generalisation errors and hence $E^g_{D_{n+1}}(x) \le E^g_{D_n}(x)$.
This remark will be applied in Section 5 for evaluating upper bounds on the
learning curve.
Equation 5 calculates the generalisation error at a point $x$. Averaging
$E^g_{D_n}(x)$ over the density distribution of the test points $p(x)$, the expected
generalisation error $E^g_{D_n}$ is
$$E^g_{D_n} = \int E^g_{D_n}(x)\, p(x)\, dx. \qquad (7)$$
For particular choices of $p(x)$ and $C_p(x)$ the computation of this expression
can be reduced to an $n \times n$ matrix computation, as
$E_x\!\left[k^T(x) K^{-1} k(x)\right] = \mathrm{tr}\!\left(K^{-1} E_x\!\left[k(x)\, k^T(x)\right]\right)$.
We also note that Equation 7 is independent
of the test point x but still depends upon the choice of the training data
D n . In order to obtain a proper learning curve for GP, E g
Dn needs to be
averaged 2 over the possible choices of the training data D n . However, it is
very difficult to obtain the analytical form of E g for a GP as a function of
n. Because of the presence of the k T Equation 5, the
matrix K and vector k (x) depend on the location of the training points:
the calculations of the averages with respect to the data points seems very
hard. This motivates looking for upper and lower bounds on the learning
curve for GP.
5 Bounds on the learning curve
For the noiseless case, a lower bound on the generalisation error after n
observations is due to Michelli and Wahba (1981). Let be the
ordered eigenvalues of the covariance function on some domain of the input
space X . They showed that E g (n) -
a bound on the learning curve for the noisy case; since the bound uses
observations consisting of projections of the random function onto the first
eigenfunctions, it is not expected that it will be tight for observations
which consist of function evaluations.
Other results that we are aware of pertain to asymptotic properties of
(n). Ritter (1996) has shown that for an optimal sampling of the input
space, the asymptotics of the generalisation error is O
Hansen (1993) showed that for linear regression models it is possible to average over
the distribution of the training sets.
a random process which obeys to the Sacks-Ylvisaker 3 conditions of order s
(see Ritter et al., 1995 for more details on Sacks-Ylvisaker conditions). In
general, the Sacks-Ylvisaker order of the MB k covariance function is
1. For example an MB 1 process has hence the generalisation error
shows a n \Gamma1=2 asymptotic decay. In the case that X ae R, the asymptotically
optimal design of the input space is the uniform grid.
Silverman (1985) proved a similar result for random designs. Haussler
and Opper (1997) have developed general (asymptotic) bounds for the expected
log-likelihood of a test point after seeing n training points.
In the following we introduce upper and lower bounds on the learning
curve of a GP in a non-asymptotic regime. An upper bound is particularly
useful in practice as it provides an (over)estimate of the number of
examples needed to give a certain level of performance. A lower bound is
similarly important because it contributes to fix the limit which can not be
outperformed by the model.
The bounds presented are derived from two different approaches. The
first approach makes use of the particular form assumed by the generalisation
error at x (E g
(x)). As the error bar generated by one data point
is greater than that generated by n data points, the former can be considered
as an upper bound of the latter. Since this observation holds for the variance
due to each one of the data points, the envelope of the surfaces generated by
the variances due to each data point is also an upper bound of $\sigma^2_{D_n}(x)$. In
particular, as $E^g_{D_n}(x) = \sigma^2_{D_n}(x) + \sigma^2_\nu$ (cf. Equation 5), the envelope is an upper
bound of the generalisation error of the GP. Following this argument, we
can assert that an upper bound on $E^g_{D_n}(x)$ is the one generated by every
GP trained with a subset of $D_n$. The larger the subset of $D_n$, the tighter the
bound.
3 Loosely speaking, a stochastic process possessing $s$ mean-square derivatives but not $s+1$ is said to satisfy the Sacks-Ylvisaker conditions of order $s$.
The two upper bounds we present differ in the number of training points
considered in the evaluation of the covariance: the derivation of the one-point
upper bound E u
1 (n) and the two-point upper bound E u
2 (n) are presented
in Section 5.1 and Section 5.2 respectively. Section 5.3 reports the
asymptotic expansion of E u
1 (n) in terms of - and oe 2
- .
The second approach is based on the expansion of the stochastic process
in terms of the eigenfunctions of the covariance function. Within this
framework, Opper proposed bounds on the training and generalisation error
(Opper and Vivarelli, 1999) in terms of the eigenvalues of C p (x; x 0 ); the
lower bound E l (n) obtained is presented in Section 5.4.
In order to have tractable analytical expressions, all the bounds have
been derived by introducing three assumptions:
i The input space X is restricted to the interval [0; 1];
ii The probability density distribution of the input points is uniform:
iii The prior covariance function C p (x; x 0 ) is stationary.
5.1 The one-point upper bound E u
For the derivation of the one-point upper bound, let us consider the error
bar generated by one data point x i . Since C
Equation 2 becomes
For x far away from the training point x i , oe 2
the confidence on
the prediction for a test point lying far apart from the data point x i is quite
low as the error bar is large. The closer x to x i , the smaller the error bar on
y (x). When
Irrespective
of the value of C p (0), r varies from 0 to 1. As normally C p (0) AE oe 2
and thus oe 2
- . So far we have not used any hypothesis concerning
the dimension of the variable x, thus this observation holds regardless the
dimension of the input space.
The effect of just one data point helps in introducing the first upper
bound. The interval [0; 1] is split up in n subintervals
\Theta
a
(where a
=2 and b
centred around the i-th
data point x i , with a
Let us consider the i-th training point and the error bar oe 2
by x i . When x 2
\Theta
a
1 this relation is illustrated in Figure
3, where the envelope of the surfaces of the errors due to each datapoint
(denoted by E g
(x)) is an upper bound of the overall generalisation error.
Since we are dealing with positive functions, an upper bound of the expected
generalisation error on the interval
\Theta
a
can be written as
a i
a i
where p (x) is the distribution of the test points. Summing up the contributions
coming from each training datapoint in both sides of Equation 8 and
setting
a i
a i
The interval where the contribution of the variance due to x i contributes to
Equation 8 is also shown in Figure 3.
Under the assumption of the stationarity of the covariance function,
integrals such as those in the right hand side of Equation 9 depend only
upon differences of adjacent training points (i.e. x
The right hand side of Equation 9 can be rewritten as
a i
a i
dx
I
I
where
I
Equation 11 can be derived changing the variables in the two integrals of
Equation Equation 11 is
an upper bound on E g
and still depends upon the choice of the training
data D n through the interval of integration. We note that the arguments
of the integrals I (\Delta) in Equation 11 are the differences between adjacent
training points. Denoting those differences with , we can model
their probability density distribution by using the theory of order statistics
(David, 1970). Given an uniform distribution of n training data over the
interval [0; 1], the density distribution of the differences between adjacent
points is p . Since this is true for all the differences ! i
we can omit the superscript i and thus the expectation of the integrals in
Equation 11 over p (!) is
I
I
I (! n )
. Both the integrals
can be calculated following a similar procedure. Let us consider
where the second line has been obtained integrating by parts. The last line
follows from the fact that [I (!)
We are now able to write an upper bound on the learning curve as
The calculations of the integrals in the above expression are straightforward
though they involve the evaluation of hypergeometric functions (because of
the form of the covariance functions). As the evaluation of such functions is computationally
intensive, we found it preferable to evaluate Equation 14 numerically.
5.2 The two-points upper bound E u(n)
The second bound we introduce is the natural extension of the previous idea,
using two data points rather than one. By construction, we expect that it
will be tighter than the one introduced in Section 5.1.
Let us consider two adjacent data points x i and x i+1 of the interval [0; 1],
with By the same argument presented in the previous section,
the following inequality holds:
2 (x) is the variance on the prediction -
y (x) generated by the data
points x i and x i+1 . Similarly to Equation 9, summing up the contributions
of both sides of Equation 15 we get an upper bound on the generalisation
error:
where we have defined
After some calculations (see Appendix A) we obtain
where
I 1
(!). The calculation of the integrals with respect to !
in E u
2 (n) is complicated by the determinant \Delta (!) in the denominator and by
the distribution n so we preferred to evaluate them numerically
as we did for E u
5.3 Asymptotics of the upper bounds
From Equation 14, an expansion of E u
1 (n) in terms of - and oe 2
- in the limit
of a large amount of training data can be obtained. The expansion depends
upon the covariance function we are dealing with. Expanding the covariance
function around 0, the asymptotic form of E u
1 (n) for MB 1 is
n-
whereas for the functions MB 2 , MB 3 and SE it is
The asymptotic value of E u
depends neither on the lengthscale of
the process nor on the order of the covariance function MB k for k - 1 but
is a function of the ratio r:
lim
As we pointed out in Section 5.1, this is the minimum generalisation error
achievable by a GP when it is trained with just one datapoint. The n !1
scenario corresponds to the situation in which every test point is close to
a datapoints. As mentioned at the beginning of this Section, the asymptotics
of the learning curve for the MB k and SE covariance functions are
O
\Delta and O
respectively. Although the expansions of
decay asymptotically faster than the learning curves, they reach an
asymptotic plateau oe 2
- . We also note that the asymptotic values
get closer to the true noise level when r - 1, i.e. for the unrealistic
case oe 2
The smoothness of the process enters into the asymptotics through a
factor O
This factor affects the rate of approach to the asymptotic value oe 2
of E u
1 (n). We notice that larger lengthscales and noise levels increase the
rate of decay of E u
1 (n) to the asymptotic plateau.
The asymptotic form of E u
2 (n) for the MB 1 , MB 2 , MB 3 and SE covariance
functions is (Vivarelli, 1998)
a
where the value of a depends upon the choice of the covariance function and
(0). Similarly to the expansion of E u
1 (n), the decay rate of
2 (n) is faster than the asymptotic decay of the actual learning curves but
it reaches an asymptotic plateau of
lim
It is straightforward to verify that the asymptotic plateau of E u
2 (n) is lower
than the one of E u
1 (n) and that it corresponds to the error bar estimated
by a GP with two observations located at the test point.
5.4 The lower bound E l (n)
Opper (Opper and Vivarelli, 1999) proposed a bound on the learning curve
and on the training error based on the decomposition of the stochastic process
y (x) in terms of the eigenfunctions of the covariance C p (x; x 0 ).
Denoting with ' k set of functions satisfying
the integral equation
Z
the Bayesian generalisation error E
(where
y (x) is the true underlying stochastic function and - y (x) is the GP predic-
tion) can be written in terms of the eigenvalues of C p (x; x 0 ). In particular,
after an average over the distribution of the input data, E g (D n ) can be
written as E g (D n
, where is the infinite dimension
diagonal matrix of the eigenvalues and V is a matrix depending on
the training data, i.e. V
By using Jensen's inequality, it is possible to show that a lower bound of
the learning curve and an upper bound of the training error is (Opper and
In this paper we mean to compare this lower bound to the actual learning
curve of a GP. As our bounds are on t rather than y, we must add oe 2
- to the
expression obtained in Equation 23 giving an actual lower bound of
6 Results
As we pointed out in Section 4, the analytic calculation of the learning curve
of a GP is infeasible. Since the generalisation error $E^g_{D_n}$
is a complicated function of the training data (which are inside the elements
of $k(x)$ and $K^{-1}$), it is problematic to perform an integration over the
distribution of the training points. For comparing the learning curve of
the GP with the bounds we found, we need to evaluate the expectation
of the integral in Equation 25 over the distribution of the data:
$E^g(n) = E_{D_n}\!\left[E^g_{D_n}\right]$. An estimate of $E^g(n)$ can be obtained using a Monte Carlo
approximation of the expectation. We used 50 generations of training data,
sampling uniformly the input space [0; 1]. For each generation, the expected
generalisation error for a GP has been evaluated using up to 1000 datapoints.
Using the 50 generations of training data, we can obtain an estimate of the
learning curve E g (n) and its 95% confidence interval.
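A bare-bones version of this Monte Carlo procedure is sketched below; it reuses the hypothetical expected_gen_error helper from Section 4 and reports the mean and an approximate 95% interval over the replications. The replication count and maximum data-set size match the values quoted above; everything else is illustrative.

```python
import numpy as np

def learning_curve_mc(cov, noise_var, n_values, n_reps=50, n_max=1000, seed=0):
    """Monte Carlo estimate of E^g(n) over random uniform training sets on [0, 1]."""
    rng = np.random.default_rng(seed)
    errs = np.zeros((n_reps, len(n_values)))
    for r in range(n_reps):
        x_all = rng.uniform(0.0, 1.0, size=n_max)        # one generation of training data
        for j, n in enumerate(n_values):
            errs[r, j] = expected_gen_error(np.sort(x_all[:n]), cov, noise_var)
    mean = errs.mean(axis=0)
    ci95 = 1.96 * errs.std(axis=0, ddof=1) / np.sqrt(n_reps)   # ~95% half-width
    return mean, ci95
```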
Since this study is focused on the behaviour of bounds on learning
curve on GP, we assume the true values of the parameters of the GP are
known. So we chose the value of the constant - for the covariance functions
Equation 4) such that C p
allowed the lengthscale - and the noise level oe 2
- to assume several values
To begin with, we study how the smoothness of a process affects the
behaviour of the learning curve. The empirical learning curves of Figure
4 have been obtained for processes whose covariance functions are MB 1 ,
0:1. We can notice that all the
learning curves exhibit an initial linear decrease. This can be explained
considering that without any training data, the generalisation error is the
maximum allowable by the model (C
- ). The introduction
of a training point x 1 creates a hole on the error surface: the volume of
the hole is proportional to the value of the lengthscale and depends on the
covariance function. The addition of a new data point x 2 will have the effect
of generating a new hole in the surface. With such a few data points it is
likely that the two data lie down far apart one from the other, giving rise
to two distinct holes. Thus the effect that a small dataset exerts to pull
down the error surface is proportional to the amount of training points and
explains the initial linear trend.
Concerning the asymptotic behaviour of the learning curves, we have
verified that they agree with the theoretical analysis carried out by Ritter
(1996). In particular, a log-log plot of the learning curves with a MB k
covariance function shows an asymptotic behaviour as O
. A
similar remark applies to the SE covariance function, with an asymptotic
decay rate of O
(Opper, 1997). We have also noted that the
smoother the process described by the covariance function the smaller the
the amount of training data needed to reach the asymptotic regime.
The behaviour of the learning curves is affected also by the value of the
lengthscale of the process and by the noise level and this is illustrated in
Figure
7. The learning curves shown in Figure 5(a) have been obtained for
the MB 1 covariance function setting the noise level oe 2
0:1 and varying the
values of the parameters Intuitively, Figure 5(a) suggests
that decreasing the lengthscale stretches the early behaviour of the learning
curve and the approach to the asymptotic plateau lasts longer; this is due
to the effect induced by different values of the lengthscale which stretch or
compress the input space. We have verified that rescaling the amount of
data n by the ratio of the two lengthscales, the two curves of Figure 5(a)
lay on top of each other.
The variation of the noise level shifts the learning curves from the prior
value C p (0) by an offset equal to the noise level itself (cf. Equation 5);
in order to see any significant effect of the noise on the learning curve,
Figure
5(b) shows a log-log graph of E
obtained for a stochastic
process with MB 3 covariance function, setting
. We can notice two main effects. The noise variance affects
the actual values of the generalisation error since the learning curve obtained
with high noise level is always above the one obtained with a low noise
level. A second effect concerns the amount of data necessary to reach the
asymptotic regime. The learning curve characterised by an high noise level
needs fewer datapoints to attain to the asymptotic regime.
Stochastic processes with different covariance functions and different values
of lengthscales and noise variance behave in a similar way.
In the following we discuss the results in two main subsections: results
about the bounds E u
2 (n) are presented in Section 6.1, whereas
the lower bound of Section 5.4 is shown in Section 6.2. As the results we
obtained for these experiments show common characteristics, we show the
bounds of the learning curve obtained by setting
6.1 The upper bounds E u(n) and E u(n)
Each graph in Figure 6 shows the empirical learning curve with its confidence
interval and the two upper bounds E u
(n). The curves are shown
for the MB 1 , MB 2 , MB 3 and the SE covariance functions.
For a limited amount of training data it is possible to notice that the upper
error bar associated to EDn [E g (n)] lies above the actual upper bounds.
This effect is due to the variability of the generalisation error for small data
sets and suggests that the bounds are quite tight for small n. The effect
disappears for large n, when the estimate of the generalisation error is less
sensitive to the composition of the training set.
As expected, the two-point upper bound E u
2 (n) is tighter than the one-point
upper bound E u
We note that the tightness of the upper bound depends upon the covariance
function, being tighter for rougher processes (such as MB 1 ) and getting
worse for smoother processes. This can be explained by recalling that covariance
functions such as the MB k correspond to Markov processes of order k
(cf. Section 3). Although the Markov process is actually hidden by the presence
of the noise, E g (n) is still more dependent on training data lying close
to the test point x than on more distant points. Since the bounds E u
calculated by using only local information (namely the
closest datapoint to the test point, or the closest datapoints to the left and
right, respectively), it is natural that the more the variance at x depends on
local data points, the tighter the bounds become.
For instance, let us consider MB 1 , the covariance function of a first order
Markov process. For the noise-free process, knowledge of data-points lying
beyond the the left and right neighbours of x does not reduce the generalisation
error at x 4 . Although in the noisy case more distant data-points
4 This is because the process values at the training points and test point form a Markov
chain, and knowledge of the process values to the left and right of the test point "blocks"
reduce the generalisation error (because of the term oe 2
- in the covariance
matrix K), it is likely that local information is still the most important.
The bounds on the learning curves computed for MB 2 and MB 3 confirm
this remark, as they are looser than for MB 1 . For the SE covariance function,
this effect still holds and is actually enlarged.
In Section 5.3 we have shown that the asymptotic behaviour of the bound
depends on the covariance function, being O
O
plots of the upper bounds confirm the
analysis carried out in Section 5.3, where we showed that E u
approach asymptotic plateaux. In particular, E u
tends to oe 2
O
tends to
The quality of the bounds for processes characterised by different length-
scales and different noise levels are comparable to the ones described so far:
the tightness of E u
still depend on the smoothness of the
process. As explained at the beginning of this section, a variation of the
lengthscale has the same effect of a rescaling in the number of training data.
This can be observed explicitly in the asymptotic analysis of Equations
and 19, where the decay rate depends on the factor n-.
For a fixed covariance function, we note that the bounds are tighter for
lower noise variance; this is due to the fact that the lower the noise level the
better the hidden Markov process manifests itself. For smaller noise levels
the influence of more remote observations.
the learning curve becomes closer to the bounds because the generalisation
error relies on the local behaviour of the processes around the test data; on
the contrary, a larger noise level hides the underlying Markov Process thus
loosening the bounds.
6.2 The bound E l (n)
We have also run experiments computing the lower bound we obtained from
Equation 24 for processes generated by the covariance priors MB 1 , MB 2 ,
MB 3 and SE .
Equation 24 shows that the evaluation of E l (n) involves the computation
of an infinite sum of terms; we truncated the series considering only those
terms which add a significant contribution to the sums, i.e. j k =oe 2
" is the machine precision. Since each contribution in the series is positive,
the quantity computed is still a lower bound of the learning curve.
Figure
7 shows the results of the experiment in which we set
0:1. The graphs of the lower bound lies below the empirical learning
curve, being tighter for large amount of data; in particular for the smoothest
processes with large amount of data, the 95% confidence intervals lay below
the actual lower bound.
For the lower bound tends to the noise level oe 2
- . As with the
empirical learning curve, log-log plots of E l
y (n) show an asymptotic decay to
zero as O(n \Gamma(2k\Gamma1)=2k ) and O
\Delta for the MB k and the SE covariance
functions, respectively.
The graphs of Figure 7 show also that the tightness of the bound depends
on the smoothness of the stochastic process; in particular smooth processes
are characterised by a tight lower bound on the learning curve E g (n). This
can be explained by observing that E l (n) is a lower bound on the learning
curve and an upper bound of the training error. The values of smooth
functions do not have large variation between training points and thus the
model can infer better on test data; this reduces the generalisation error
pulling it closer to the training error. Since the two errors sandwich the
bound of Equation 24, E l (n) becomes tight for smooth processes.
We can also notice that the tightness of the lower bound depends on the
noise level, becoming tight for a high noise level and loose for a small noise
level. This is consistent with a general characteristic of E l (n) which is monotonically
decreasing function of the noise variance (Opper and Vivarelli,
1999).
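For completeness, a commonly quoted closed form of this eigenvalue-based lower bound (obtained via Jensen's inequality, as in Section 5.4) is $\sum_k \lambda_k \sigma^2_\nu/(\sigma^2_\nu + n\lambda_k) + \sigma^2_\nu$. The sketch below computes it with eigenvalues approximated by a Nystrom discretisation of the covariance on $[0,1]$ and drops negligible terms as described above; since all dropped terms are positive, the truncated sum remains a lower bound. The constants in Equations 23-24 may be arranged slightly differently, so treat this as an illustration of the truncation strategy rather than a verbatim implementation.

```python
import numpy as np

def eigenvalue_lower_bound(cov, noise_var, n, n_grid=400, eps=2.2e-16):
    """Eigenvalue-based lower bound on the learning curve at sample size n."""
    x = np.linspace(0.0, 1.0, n_grid)
    K = cov(x[:, None], x[None, :])
    lams = np.linalg.eigvalsh(K) / n_grid      # Nystrom estimates of the eigenvalues
    lams = lams[lams / noise_var > eps]        # keep only significant (positive) terms
    return float(np.sum(lams * noise_var / (noise_var + n * lams)) + noise_var)
```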
In this paper we have presented non-asymptotic upper and lower bounds for
the learning curve of GPs. The theoretical analysis has been carried out for
one-dimensional GPs characterised by several covariance functions and has
been supported by numerical simulations.
Starting from the observation that increasing the amount of training
data never worsens the Bayesian generalisation error, an upper bound on
the learning curve can be estimated as the generalisation error of a GP
trained with a reduced dataset. This means that for a given training set the
envelope of the generalisation errors generated by one and two datapoints
is an upper bound of the actual learning curve of the GP. Since the expectation
of the generalisation error over the distribution of the training data is
not analytically tractable, we introduced the two upper bounds E u
1 (n) and
2 (n) which are amenable to average over the distribution of the test and
training points. In this study we have evaluated the expected value of the
future directions of research should also deal with the evaluation of
the variances.
In order to highlight the behaviour of the bounds with respect to the
smoothness of the stochastic process, we investigated the bounds for the
modified Bessel covariance function of order $k$ (describing stochastic processes
which are $(k-1)$-times mean-square differentiable) and the squared exponential
function (describing processes mean-square differentiable up to order $\infty$).
The experimental results have shown that the learning curves and their
bounds are characterised by an early, linearly decreasing behaviour; this is
due to the effect exerted by each datapoint in pulling down the surface of
the prior generalisation error. We also noticed that the tightness of the
bounds depends on the smoothness of the stochastic processes. This is due
to the facts that the bounds rely on subsets of the training data (i.e. one
or two datapoints) and the modified Bessel covariance functions describe
Markov processes of order k; although in our simulations the Markovian
processes were hidden by noise, the learning curves depend mainly on local
information and our bounds become tighter for rougher processes.
We also investigated the behaviour of the curves with respect to the
variation of the correlation lengthscale of the process and the variance of
the noise corrupting the stochastic process. We noticed that the lengthscale
stretches the behaviour of the curves effectively rescaling the number of
training data. As the noise level has the effect of hiding the underlying
Markov process, the upper bounds become tighter for smaller noise variance.
The expansion of the bounds in the limit of large amount of data highlights
an asymptotic behaviour depending upon the covariance function;
approaches the asymptotic plateau as O
(for the MB 1 covariance
and as O
for smoother processes; the rate of decay
to the plateau of E u
2 (n) is O
. Numerical simulations supported our
analysis.
One limitation of our analysis is the dimension of the input space; the
bounds have been made analytically tractable by using order statistics results
after splitting up the one dimensional input space of the GP. In higher
dimensional spaces the partition of the input space can be replaced by a
Voronoi tessellation that depends on the data D n but averaging over this
distribution appears to be difficult. One can suggest an approximate evaluation
of the upper bounds by an integration over a ball whose radius depends
upon the number of examples and the volume of the input space in which
the bound holds. In any case we expect that the effect due to larger input
dimension is to loosen the upper bounds. We note that recent work by (Sol-
lich, 1999) has derived some good approximations to the learning curve, and
that his methods apply in more than one dimension 5 .
We also ran some experiments by using the lower bound proposed by
Opper, based on the knowledge of the eigenvalues of the covariance function
of the process. Since the bound E l (n) is also an upper bound on the training
error, we observed that the bound is tighter for smooth processes, when the
learning curve becomes closer to the training error. Also the noise can vary
the tightness of E l (n); a low noise level loosens the lower bound. Unlike the
upper bounds, the lower bound can be applied also in multivariate problems,
as it is easily extended to high dimension input space; however it has been
verified (Opper and Vivarelli, 1999) that the bound becomes less tight in
input space of higher dimension.
Appendix
A: The two-points upper bound E u
In this Appendix we derive Equation 17 starting from Equation 16.
We start by calculating oe 2
(x). As the covariance matrix generated by
two data points is a 2 \Theta 2 matrix, it is straightforward to evaluate oe 2
Considering the two training data x i and x i+1 , the covariance matrix of the
5 The reference to Sollich (1999) was added when the manuscript was revised in April
1999.
GP is
From the evaluation of the determinant of K as
As the covariance vector for the test point x is k
the variance assumes the form
Changing variables in the covariances C p
(as
turns out that the upper bound
generated by oe 2
2 (x) in the interval
\Theta
(when i 6= 0; n), is
I 1
where
I 1
(-) d- and I 2
It is noticeable that, similarly to Equation 11, also the integrals I 1 (\Delta), I 2 (\Delta)
and the determinant \Delta
depend upon the length of the interval
of integration We evaluate the contributions to the upper
bound over the intervals
\Theta 0; x 1
and [x n ; 1] by integrating the variance oe 2
generated by x 1 and x n over
\Theta 0; x 1
and [x n ; 1] respectively. Hence the right
hand side of Equation 16 can be rewritten as
I 1
I
where I (\Delta) is defined in Equation 12.
Equation 26 is still dependent on the distribution of the training data
because it is a function of the distances between adjacent training points
. Similarly to Equation 11, we obtain an upper bound independent of
the training data by integrating Equation 13 over the distribution of the
differences
\GammaC
Acknowledgments
This research forms part of the "Validation and Verification of Neural Network
Systems" project funded jointly by EPSRC (GR/K 51792) and British
Aerospace. We thank Dr. Manfred Opper, and Dr. Andy Wright of BAe for
helpful discussions. We also thank the anonymous referees for their comments
which have helped improve this paper. F. V. was supported by a
studentship from British Aerospace.
--R
The Geometry of Random Fields.
A new look at statistical model identification.
Gaussian processes for Bayesian classification via hybrid Monte Carlo.
Order Statistics.
Table of Integrals
Stochastic linear learning: Exact test and training error averages.
Generalized Additive Models.
Mutual information
Information Theory.
Design problems for optimal surface interpolation.
Network information criterion-determining the number of hidden units for artificial neural network models
Bayesian Learning for Neural Networks.
Lecture Notes in Statistics 118.
Regression with gaussian processes: average case per- formance
General bounds on Bayes errors for regression with Gaussian Processes.
An Upper Bound on the Bayesian Error Bars for Generalized Linear Regression.
Evaluation of Gaussian Processes and Other Methods for Non-linear Regression
Almost optimal differentiation using noisy data.
Multivariate integration and approximation for random fields satisfying Sacks- Ylvisaker conditions
Some aspects of the spline smoothing approach to non-parametric regression curve filtering
Learning Curves for Gaussian Processes.
A theory of the learnable.
The Nature of Statistical Learning Theory.
Studies on generalisation in Gaussian processes and Bayesian neural networks.
Prediction and regulation by linear least square methods.
Computing with infinite networks.
--TR
--CTR
Peter Sollich , Anason Halees, Learning curves for Gaussian process regression: approximations and bounds, Neural Computation, v.14 n.6, p.1393-1428, June 2002 | gaussian processes;bounds;generalisation error;bayesian inference |
349155 | Rapid Evaluation of Nonreflecting Boundary Kernels for Time-Domain Wave Propagation. | We present a systematic approach to the computation of exact nonreflecting boundary conditions for the wave equation. In both two and three dimensions, the critical step in our analysis involves convolution with the inverse Laplace transform of the logarithmic derivative of a Hankel function. The main technical result in this paper is that the logarithmic derivative of the Hankel function $H_\nu^{(1)}(z)$ of real order $\nu$ can be approximated in the upper half $z$-plane with relative error $\varepsilon$ by a rational function of degree $d \sim O (\log|\nu|\log\frac{1}{\varepsilon}+ \log^2 |\nu| + | \nu |^{-1} \log^2\frac{1}{\varepsilon} )$ as $|\nu|\rightarrow\infty$, $\varepsilon\rightarrow 0$, with slightly more complicated bounds for $\nu=0$. If N is the number of points used in the discretization of a cylindrical (circular) boundary in two dimensions, then, assuming that $\varepsilon < 1/N$, $O(N \log N\log\frac{1}{\varepsilon})$ work is required at each time step. This is comparable to the work required for the Fourier transform on the boundary. In three dimensions, the cost is proportional to $N^2 \log^2 N + N^2 \log N\log\frac{1}{\varepsilon}$ for a spherical boundary with N2 points, the first term coming from the calculation of a spherical harmonic transform at each time step. In short, nonreflecting boundary conditions can be imposed to any desired accuracy, at a cost dominated by the interior grid work, which scales like N3 in two dimensions and N2 in three dimensions. | argument, z, satisfying Im(z) 0. The number of poles is bounded by O log ||
log 1 +log2 ||+||1 log2 1 . A similar representation for derived which
is valid for Im(z) >0 requiring O log 1 log 1 +log 1 log log 1 +log 1 log log 1
poles.
In section 2, we introduce nonreflecting boundary kernels. In section 3 we collect
background material in a form convenient for the subsequent development. Section 4
contains the analytical and approximate treatment of the logarithmic derivative, while
a procedure for computing these representations is presented in Section 5. The results
of our numerical computations are contained in section 6, and we present our
conclusions in section 7.
2. Nonreflecting boundary kernels. Let us first consider the wave equation
in a two-dimensional annular domain 0 <<1. The general solution can be
expressed as
where Kn and In are modified Bessel functions (see, for example, [17, section 9.6]),
the coefficients a_n and b_n are arbitrary functions analytic in the right half-plane, L
denotes the Laplace transform
denotes the inverse Laplace transform
ds.
Likewise, for the wave equation in a three-dimensional domain r0 <r<r1, the
general solution can be expressed as
rs/c rs/c
If we imagine that is to be used as a nonreflecting boundary,
then we can assume there are no sources in the exterior region and the coefficients
$b_n(s)$ (or $b_{nm}(s)$) are zero. Let us now denote by $u_n(\rho, t)$ the function satisfying
Then
un (, s)=an(s) Kn (s/c)
s Kn (s/c)
c Kn(s/c)
so that
c Kn(s/c)
where denotes Laplace convolution
convolution kernel in (2.9) is a generalized function. Its singular part is easily
removed, however, by subtracting the first two terms of the asymptotic expansion
c Kn(s/c) c 2
From the assumption un(, t)=0fort 0 and standard properties of the Laplace
transform we obtain the boundary condition
where
which we impose at
Remark. The solution to the wave equation in physical space is recovered on the
nonreflecting boundary from un by Fourier transformation:
assuming N points are used in the discretization.
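In a discrete implementation this reconstruction is just an inverse FFT over the angular modes at each time step. A minimal sketch follows; the storage convention for the mode amplitudes and the normalisation are assumptions and depend on how the $u_n$ are defined.

```python
import numpy as np

def modes_to_boundary(u_modes):
    """Recover boundary values u(theta_j, t) at N equispaced points from the N
    angular mode amplitudes u_n(t); O(N log N) work per time step."""
    N = len(u_modes)
    return np.fft.ifft(u_modes) * N   # sum_n u_n exp(2*pi*i*n*j / N), j = 0..N-1
```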
The analogous boundary condition in three dimensions is expressed in terms of
the functions unm(r, t) satisfying
rs/c
After some algebraic manipulation, assuming unm(r, t)=0fort 0, we have
where
(rs/c)which we impose at
Note that the boundary conditions (2.12) and (2.16) are exact but nonlocal, since
they rely on a Fourier (or spherical harmonic) transformation in space and are history
dependent. The form of the history is simple, however, and expressed, for each separate
mode, in terms of a convolution kernel which is the inverse Laplace transform of
a function defined in terms of the logarithmic derivative of a modified Bessel function
$$\frac{d}{dz}\log K_\nu(z) = \frac{K_\nu'(z)}{K_\nu(z)}. \qquad (2.18)$$
Remark. In three dimensions, the required logarithmic derivative of $K_{n+1/2}(z)$ is a ratio of polynomials, so that one can recast the boundary condition in terms of a
differential operator of order n. The resulting expression would be equivalent to those
derived by Sofronov [7] and Grote and Keller [8].
The remainder of this paper is devoted to the approximation of the logarithmic
derivatives (2.18) as a ratio of polynomials of degree $O(\log \nu)$, from which the convolution
kernels appearing in (2.12) and (2.16) can be expressed as a sum of decaying exponentials. This
representation allows for the recursive evaluation of the integral operators in (2.12)
and (2.16), using only O(log n) work per time step (see [18]). We note that, by Parseval's
equality, the $L^2$ error resulting from convolution with an approximate kernel
is sharply bounded by the $L^\infty$ error in the approximation to the kernel's transform.
Precisely, approximating the kernel B(t) by the kernel A(t) we find
$$\|A * u - B * u\|_{L^2} \le \sup_{\operatorname{Re}(s) = 0}\, |\mathcal{L}A(s) - \mathcal{L}B(s)|\; \|u\|_{L^2},$$
where we assume that A, B, and u are all regular for Re(s) > 0. For finite times we
may let s have a positive real part.
We therefore concentrate our theoretical developments on $L^\infty$ approximations. For
ease of computation, however, we compute our rational representations by least
squares methods. These do generally lead to small relative errors in the maximum
norm, as will be shown.
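The recursive evaluation alluded to above is developed in [18]; as a hedged illustration only, the following sketch shows how a convolution with a kernel written as a sum of decaying exponentials Σ_k α_k e^{β_k t} can be updated with O(1) work per pole per time step. The trapezoidal update of each partial integral and the function names are assumptions of this sketch.

import numpy as np

def convolve_exponentials(alphas, betas, u_samples, dt):
    # Evaluate y(t_m) = int_0^{t_m} sum_k alphas[k]*exp(betas[k]*(t_m - s)) u(s) ds recursively,
    # assuming Re(betas[k]) < 0 (decaying exponentials); cost is O(d) per time step for d poles.
    d = len(alphas)
    I = np.zeros(d, dtype=complex)   # I[k] approximates the k-th partial integral
    y = [0.0]
    decay = np.exp(betas * dt)
    for m in range(1, len(u_samples)):
        # shift the accumulated history by dt and add the newest subinterval (trapezoidal rule)
        I = decay * I + 0.5 * dt * (decay * u_samples[m - 1] + u_samples[m])
        y.append(np.sum(alphas * I).real)
    return np.array(y)

# toy usage: one pole, constant input; the exact answer is 1 - exp(-t)
t = np.linspace(0.0, 5.0, 501)
y = convolve_exponentials(np.array([1.0]), np.array([-1.0]), np.ones_like(t), t[1] - t[0])
print(np.max(np.abs(y - (1.0 - np.exp(-t)))))   # small discretization error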
Since Hankel functions are more commonly used in the special function literature, we will write the logarithmic derivatives as
d/dz log K_ν(z) = e^{iπ/2} H_ν^{(1)′}(z e^{iπ/2}) / H_ν^{(1)}(z e^{iπ/2}).
We are, then, interested in approximating the logarithmic derivative of the Hankel function on and above the real axis.
3. Mathematical preliminaries. In this section we collect several well-known
facts concerning the Bessel equation, the logarithmic derivative of the Hankel func-
tion, and pole expansions, in a form that will be useful in the subsequent analytical
development.
3.1. Bessel's equation. Bessel's differential equation
(3.1)   d²w/dz² + (1/z) dw/dz + (1 − ν²/z²) w = 0,
for ν ∈ ℝ, has linearly independent solutions H_ν^{(1)} and H_ν^{(2)}, known as Hankel's functions. These can be expressed by the formulae
(3.2)   H_ν^{(1)}(z) = (J_{−ν}(z) − e^{−iνπ} J_ν(z)) / (i sin νπ),   H_ν^{(2)}(z) = (J_{−ν}(z) − e^{iνπ} J_ν(z)) / (−i sin νπ),
where the Bessel function of the first kind is defined by
J_ν(z) = (z/2)^ν Σ_{k≥0} (−z²/4)^k / (k! Γ(ν + k + 1)).
The expressions in (3.2) are replaced by their limiting values for integer values of ν. (See, for example, [17, section 9.1].) For general ν, the functions H_ν^{(1)} and H_ν^{(2)} have a branch point at z = 0, and it is customary to place the corresponding branch cut on the negative real axis and impose the restriction −π < arg z ≤ π. We shall find it more convenient, however, to place the branch cut on the negative imaginary axis, with the restriction
(3.4)   −π/2 < arg z ≤ 3π/2.
Hankel's functions have especially simple asymptotic properties. In particular (see, for example, [19, section 7.4.1]), H_ν^{(1)}(z) and its derivative admit the asymptotic expansions (3.5) and (3.6) in inverse powers of z as z → ∞, with
(3.7)   A_k(ν) = (4ν² − 1²)(4ν² − 3²) ⋯ (4ν² − (2k − 1)²) / (k! 8^k),
and the branch of the square root is determined by the branch cut convention (3.4).
Finally we note the symmetry
We also make use of the modified Bessel functions K_ν(z) and I_ν(z). These are linearly independent solutions of the equation obtained from (3.1) by the transformation z ↦ iz. Their Wronskian satisfies K_ν(z) I_ν′(z) − K_ν′(z) I_ν(z) = 1/z. Moreover we have, for positive r, the relation (3.10) [20]. Asymptotic expansions of K_ν(r) and I_ν(r) for r small and large are also known [17, sections 9.6 and 9.7]. For real r → 0 and ν ≥ 0 we have
K_ν(r) ~ (Γ(ν)/2)(2/r)^ν,  ν > 0,    K_0(r) ~ −log(r/2) − γ,  ν = 0.
Here γ = 0.5772… is the Euler constant.
Finally, we note the uniform expansions of Bessel functions for ν → ∞ given in [17]. For the Hankel function and its derivative we have the Airy-type approximations (3.16) and (3.17) as ν → ∞, where we restrict z to |arg(z)| ≤ π/2 and define ζ = ζ(z) through (3.18). Here, Ai(t) denotes the Airy function [17, section 10.4]. Uniform large-ν approximations of the modified Bessel functions for real arguments r are given by (3.19), where the auxiliary function appearing there is defined in (3.20).
3.2. Hankel function logarithmic derivative. We denote the logarithmic derivative of H_ν^{(1)} by G_ν,
(3.21)   G_ν(z) = (d/dz) log H_ν^{(1)}(z) = H_ν^{(1)′}(z) / H_ν^{(1)}(z).
The following lemma states a few fundamental facts about G that we will use below.
Lemma 3.1. The function G_ν(z), for ν ∈ ℝ, satisfies the formulae (3.22) and (3.23), where z̄ is the complex conjugate of z. Asymptotic approximations to G_ν are given by (3.24) for small z, where γ is the Euler constant, and by (3.25) for large z,
Fig. 3.1. Curve z(ζ) defined by (3.18), near which the scaled zeros of H_ν^{(1)} lie (see Lemma 3.2). The branch cut of H_ν^{(1)} is chosen (3.4) on the negative imaginary axis.
where A_k(ν) is defined in (3.7), and by (3.26) in the uniform large-ν regime, where ζ is defined in (3.18). Furthermore, the function u_ν defined by
(3.27)   u_ν(z) = z G_ν(z)
satisfies the recurrence (3.28).
Proof. Equations (3.22) and (3.23) and the asymptotic expansion (3.24) follow immediately from the definitions (3.2) through (3.4) of J_ν and H_ν^{(1)}. The asymptotic expansion (3.25) follows from (3.5) and (3.6), while (3.26) is a consequence of (3.16) and (3.17). The recurrence (3.28) follows from standard Bessel recurrences [17, section 9.1.27].
The zeros of H_ν^{(1)}(z) are well characterized [17, 20]; they lie in the lower half z-plane near the curve shown in Figure 3.1, obtained by transformation [21] of Bessel's equation. In terms of the asymptotic approximation (3.16), this curve corresponds to negative, real arguments of the Airy function.
Lemma 3.2. The zeros h_{ν,1}, h_{ν,2}, … of H_ν^{(1)}(z) in the sector −π/2 ≤ arg z ≤ 0 are given by an asymptotic expansion, uniform in n, in which ζ_n is defined in terms of the nth negative zero a_n of the Airy function Ai and z(ζ) is obtained by inverting (3.18). The zeros in the sector π ≤ arg z ≤ 3π/2 are obtained from these by the reflection z ↦ −z̄. In particular, the root closest to the real axis satisfies the estimate (3.31).
3.3. Pole expansions. A set of poles in a finite region defines a function that is smooth away from the region, with the smoothness increasing as the distance increases. This fact leads to the following approximation related to the fast multipole method [22, 23].
Lemma 3.3. Suppose that q_1, …, q_n are complex numbers and z_1, …, z_n are complex numbers with |z_j| ≤ 1 for j = 1, …, n. The function
f(z) = Σ_{j=1}^{n} q_j / (z − z_j)
can be approximated for Re(z) = a > 1 by the m-pole expansion
g(z) = Σ_{j=1}^{m} γ_j / (z − ω^j),
where ω is a root of unity and γ_j is defined by (3.34). The error of the approximation is bounded by (3.35), in which the comparison quantity is Σ_{j=1}^{n} |q_j| / |z − z_j|.
Proof. We use the geometric series summation
1/(z − v) = Σ_{k=0}^{m−1} v^k / z^{k+1} + v^m / (z^m (z − v))
to obtain an expansion of f(z) − g(z) in which, due to the combination of (3.34) and the vanishing of the sums of powers of ω, all m terms of the first summation cancel. Estimating the remaining error terms, using Re(z) = a > 1 and |z_j| ≤ 1, and combining (3.38) through (3.41) with the triangle inequality gives (3.35).
Inequality (3.35) remains valid if we assume instead that |z_j| ≤ b and Re(z) = ab > b, for arbitrary b ∈ ℝ, b > 0; this fact leads to the next two results, whose proofs mimic that of Lemma 3.3 and are omitted.
Lemma 3.4. Suppose n, p are positive integers, q_1, …, q_n are complex numbers, and z_1, …, z_n are complex numbers contained in disks D_1, …, D_p of radii r_1, …, r_p, centered at c_1, …, c_p, respectively. The function
f(z) = Σ_{j=1}^{n} q_j / (z − z_j)
can be approximated, for z satisfying Re(z − c_i) ≥ a r_i > r_i for i = 1, …, p, by the mp-pole expansion
g_m(z) = Σ_{i=1}^{p} Σ_{j=1}^{m} γ_{ij} / (z − c_i − r_i ω^j),
where γ_{ij} is defined by (3.44). The error of the approximation is bounded by (3.45), which decays geometrically in m, with the comparison quantity again Σ_{j=1}^{n} |q_j| / |z − z_j|.
Lemma 3.5. Suppose that the discrete poles of Lemma 3.4 are replaced with a density q defined on a curve C with C ⊂ D_1 ∪ ⋯ ∪ D_p, specifically
f(z) = ∫_C q(ζ) / (z − ζ) dζ,
which is finite for z outside the union of the disks, and that g_m is defined by (3.43) with γ_{ij} defined by the analogue of (3.44) in which the sums are replaced by integrals over C. Then the bound (3.45) holds as before.
Lemma 3.3 enables us to approximate, with exponential convergence, a function defined as a sum of poles. The fundamental assumption is that the region of interest be separated from the pole locations. The notion of separation is effectively relaxed by covering the pole locations with disks of varying size in an adaptive manner. In Lemmas 3.4 and 3.5, we use this approach to derive our principal analytical result.
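As an illustration of the separation principle behind these lemmas (and not the paper's explicit construction), the sketch below approximates a function given by many poles inside the unit disk, on the line Re(z) = a > 1, using a small number of poles placed at roots of unity; the coefficients here are fitted by least squares rather than by the explicit formula (3.34), so the sketch only demonstrates the expected rapid decay of the error in m.

import numpy as np

rng = np.random.default_rng(0)
n = 200
# n "true" poles inside the unit disk with random strengths
zj = np.sqrt(rng.random(n)) * np.exp(2j * np.pi * rng.random(n))
qj = rng.standard_normal(n)

def f(z):
    return np.sum(qj / (z[:, None] - zj[None, :]), axis=1)

a = 2.0
z = a + 1j * np.linspace(-20.0, 20.0, 400)   # sample points on the line Re(z) = a
fz = f(z)

for m in (4, 8, 16):
    mu = np.exp(2j * np.pi * np.arange(m) / m)     # reduced poles: m-th roots of unity
    A = 1.0 / (z[:, None] - mu[None, :])
    c, *_ = np.linalg.lstsq(A, fz, rcond=None)
    err = np.max(np.abs(A @ c - fz)) / np.max(np.abs(fz))
    print(m, err)    # the relative error decreases rapidly (roughly geometrically) with m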
4. Rational approximation of the logarithmic derivative. The Hankel function's logarithmic derivative G_ν(z) defined in (3.21) approaches a constant as z → ∞ and is regular for finite z ∈ ℂ, except at z = 0, which is a branch point, and at the zeros of H_ν^{(1)}(z), all of which are simple. We can therefore develop a representation for G_ν analogous to that of the Mittag-Leffler theorem; the only addition is due to the branch cut on the negative imaginary axis. It will be convenient to work with u_ν(z), defined in (3.27), for which the approximations to be introduced have simple error bounds.
Theorem 4.1. The function u_ν(z) = z G_ν(z), where G_ν is defined for ν ∈ ℝ by (3.21) with the branch cut defined by (3.4), is given by the formula (4.1) for z ∈ ℂ not in {0, h_{ν,1}, h_{ν,2}, …, h_{ν,N_ν}} and not on the negative imaginary axis. Here h_{ν,1}, h_{ν,2}, …, h_{ν,N_ν} denote the zeros of H_ν^{(1)}(z), which number N_ν.
Proof. The case of the spherical Hankel function, where ν = n + 1/2 with n an integer, is simple and we consider it first. Here u_ν(z) is a ratio of polynomials in iz with real coefficients, which is clear from the observation that u_{1/2}(z) = iz − 1/2 in combination with the recurrence (3.28). Hence
u_ν(z) = p(z) + Σ_n ρ_{ν,n} / (z − h_{ν,n}),
where p is a polynomial and ρ_{ν,n} is the residue of u_ν at h_{ν,n}, computed by l'Hôpital's rule.
Fig. 4.1. Integration contour C_m, with inner circle radius 1/m and outer radius m + 1.
We see from (3.25) that p(z) = iz − 1/2. Noting that u_ν(iy) ∈ ℝ for y ∈ ℝ, and combining (4.2), (4.3), and (4.5), we obtain (4.1).
We now consider the case ν ∉ ℤ, for which the origin is a branch point. For m = 1, 2, …, we define C_m to be the simple closed curve, shown in Figure 4.1, which proceeds counterclockwise along the circle C_m^{(1)} of radius m + 1 centered at the origin from arg z = −π/2 to 3π/2, to the vertical segment z = r e^{3iπ/2}, to the circle C_m^{(2)} of radius 1/m centered at the origin from arg z = 3π/2 to −π/2, to the vertical segment z = r e^{−iπ/2}, back to the first circle. Since none of the zeros of H_ν^{(1)} lies on the imaginary axis, C_m encloses them all if m is sufficiently large. For such m, and z ∈ ℂ inside C_m with H_ν^{(1)}(z) ≠ 0, the residue theorem gives (4.6). We now consider the separate pieces of the contour C_m. For the circles C_m^{(1)} and C_m^{(2)}, we use the asymptotic expansion (4.4) about infinity and (3.24) about the origin to estimate their contributions.
Fig. 4.2. Plot of Re u_ν(r e^{−iπ/2}), containing the zero crossing, and Im u_ν(r e^{−iπ/2}).
Now exploiting the symmetry between u_ν(r e^{3iπ/2}) and u_ν(r e^{−iπ/2}) from (3.23) for the vertical segments, we obtain the contribution of the branch cut which, when combined with (4.6), yields (4.1) and the theorem.
The primary aim of this paper is to reduce the summation and integral of (4.1) to a similar summation involving dramatically fewer terms. To do so, we restrict z to the upper half-plane and settle for an approximation. Such a representation is possible, for the poles of u_ν (zeros of H_ν^{(1)}) lie entirely in the lower half-plane and do not cluster near the real axis. We first examine the behavior of u_ν on the negative imaginary axis.
The qualitative behavior of u_ν on the branch cut is illustrated by the case shown in Figure 4.2. The plot changes little with changing ν, except for the sign of Im u_ν(z) and the sharpness of its extremum.
Lemma 4.2. For ν ∈ ℝ, ν ∉ ℤ, the function u_ν(r e^{−iπ/2}) is infinitely differentiable on r ∈ (0, ∞) and has imaginary part satisfying the formulae (4.9)–(4.12), where the auxiliary function appearing there is defined in (3.20).
Proof. Infinite differentiability of u_ν(z) follows from the observation that H_ν^{(1)}(z) does not vanish on the negative imaginary axis. To derive (4.9) we recall (3.11) and then apply (3.10). The remaining formulas follow from the asymptotic forms of K_ν(r) and I_ν(r) for small and large r, and the uniform large-ν expansions given in (3.12) through (3.15) and (3.19). Here we use the symmetry u_{−ν} = u_ν. Note that (4.10) is valid for r/|ν| → 0. The approximation (4.12) is nonuniform for ν near 2k − 1/2, k ∈ ℤ.
Lemma 4.3. Given ν0 > 0 there exist constants c0 and c1 such that for all ν ∈ ℝ, ν ∉ ℤ, and all z satisfying Im(z) ≥ 0, the function f(z) given by the branch-cut contribution in (4.1) (cf. (4.14)) satisfies the bounds (4.15). Moreover, there exists a constant C > 0 such that for all ν ∈ ℝ, |ν| ≥ ν0, and ε with 0 < ε < 1/2, f(z) admits an approximation g(z) that is a sum of d ≤ C (1 + |ν|⁻¹ log(1/ε)) log(1/ε) poles, with the error bound (4.16).
Proof. We assume ν > 0 and begin by changing variables,
so that
From the nonvanishing of z and its asymptotic behavior in w, it is clear that (4.15)
holds for ||(0,1) and any fixed 1 >0. Using (4.12) for || large but bounded
away from 2k 1/2 for integral k, an application of Watsons lemma to (4.14) focuses
on the unique positive zero, w,of defined in (3.20). As the derivative of this
function is positive, we conclude
cos()
where is a function of w, so that (4.15) clearly holds. However, as 2k 1/2,
the denominator on the right-hand side of (4.12) may nearly vanish at w and the
expansion loses its uniformity. Setting cos()= in these cases, we see that the
denominator has a minimum which is bounded below by O(2). Hence in an O(||1)
neighborhood of the minimum which includes w,wehave
which by the change of variables is seen to satisfy the upper bound in
uniformly in . As the rest of the integral is small, the upper bound holds.
We now move on to the approximation. For a positive integer m and a positive
number w0, we define intervals
and
where f0,f1, and f2 are defined by the formulae
We will now choose w0 and m so that f0 and f2 can be ignored and then use Lemma
3.5 to approximate f1. Using (4.10) and (4.12) and taking w0 suciently small we
have, for some constant c2 independent of ,
(4.22) |f0(z)| w2||1dw w0 .
Hence, a choice of
suces to guarantee
|f0(z)| |f(z)|in the closed upper half-plane. Now using (4.11) and (4.12) and assuming m su-
ciently large we have, for some constant c3 independent of ,
From (4.23), choosing
for appropriate m0 and m1 independent of and leads to
|f(z)|.Finally, we apply Lemma 3.5 to the approximation of f1. The error involves the
function but we note that Using p poles for
each j we produce a p m-pole approximation g(z) with an error estimate, again for
Im(z) 0, given by(4.28) |f1(z) g(z)| |f1(z)|.
A choice of
enforces
combining (4.24), (4.27), (4.30), and the triangle inequality, we obtain (4.16) with
the number of poles, satisfying the stated bound.
The case ν = 0 requires special treatment. First, the direct application of the preceding arguments leads to a significantly larger upper bound on the number of poles. Second, we note that u_0(z) → 0 as z → 0, so that relative error bounds near z = 0 require a vanishing absolute error. Finally, the lack of regularity of u_0(z) at z = 0 precludes uniform rational approximation, as discussed in [10]. Therefore, we relax the condition Im(z) ≥ 0 to Im(z) ≥ δ > 0. By (2.20) this will lead to good approximate convolutions for times up to the order of 1/δ.
Lemma 4.4. There exists C > 0 such that for all ε, 0 < ε < 1/2, and δ, 0 < δ < 1/2, the function f(z) = u_0(z) − iz + 1/2 admits an approximation g(z) that is a sum of d ≤ C log(1/ε) log log(1/ε) log(1/δ) poles, with the error bound (4.31).
Proof. Note that since u0(z) has no poles, f(z) is given by (4.14) and satisfies
(4.15). Define intervals
Now
where f1 and f2 are defined by the formulae
We will now choose m so that f2 can be ignored and then use Lemma 3.5 to approximate
f1. Using (4.11) and assuming m suciently large we have, for some constant c,
Hence, choosing
log log(1/)
for appropriate m0 independent of and leads to
Finally, we apply Lemma 3.5 to the approximation of f1. Using p poles for each j we
produce a p m-pole approximation g(z) with an error estimate for Im(z) given
A choice of
enforces
(4.39), and the triangle inequality, (4.31) is achieved with the number of
poles, satisfying the stated bound.
We now consider the contribution of the poles.
Lemma 4.5. There exist constants C0, C1, C > 0 such that for all ν, ε ∈ ℝ with 2 ≤ |ν| and 0 < ε < 1/2, the function
h(z) = Σ_{n=1}^{N_ν} ρ_{ν,n} / (z − h_{ν,n}),
where h_{ν,1}, …, h_{ν,N_ν} are the roots of H_ν^{(1)}, satisfies the inequalities (4.41) and admits an approximation g(z) that is a sum of d ≤ C log|ν| log(1/ε) poles, with the error bound (4.42).
Proof. The curve C defined in Lemma 3.2, near which h,1/||,.,h,N /|| lie,
is contained in disks separated from the real axis. If we denote the disk of radius r
centered at c by D(r, c), then the disks
for example, contain C\{+1, 1}. From (3.31), the root h,1 closest to the real axis
satisfies
hence it is contained in a disk of (4.43) with n log2 24/331/2(a1)1||2/3 , and
all of the roots are contained in O(log ||)of the disks. Now applying Lemma 3.4 we
obtain (4.42) with |h| replaced by To obtain the upper
bound in (4.41) for both h and H we note first that it is trivial except for |z/|1.
A detailed analysis of the roots as described by Lemma 3.2 shows that
Hence, for |z/|1,
z h,j
The lower bound in (4.41) is again obvious except for |z/|1. Then, however, we
note that
Since, from (3.26), by (4.15) the
right-hand side is dominated by iz and
The combination of Theorem 4.1 and Lemmas 4.3 and 4.5 suffices to prove our principal analytical result.
Theorem 4.6. Given ν0 > 0 there exists C > 0 such that for all ν ∈ ℝ, |ν| ≥ ν0, and 0 < ε < 1/2 there exists d satisfying (4.48), and complex numbers α_1, …, α_d and β_1, …, β_d, depending on ν and ε, such that the function
U_{ν,ε}(z) = iz − 1/2 + Σ_{n=1}^{d} α_n / (z − β_n)
approximates u_ν(z) with the bound (4.50), provided that Im(z) ≥ 0. Furthermore (4.51) holds.
Proof. We first note the lower bound
c||
(4.52) u(z) iz +1/2 .
For >0 the function is nonvanishing and has the correct asymptotic behavior, so
we need only consider the case of || large. The result then follows from (3.26). This
proves (4.51) and (4.50) with u replaced by u iz +1/2 on the right-hand side.
From (3.26) we have
so that the final result follows from the scaling ||1/3.
The number of poles in (4.48) required to approximate u_ν(z) to a tolerance ε depends on both ν and ε. The asymptotic dependence on ε is proportional to |ν|⁻¹ log²(1/ε). We will see in the numerical examples, however, that this term is important only for small |ν|; otherwise the dominant term is the first, for an asymptotic dependence of O(log|ν| log(1/ε)). As we generally have |ν| ≥ 1 in practice, the term log²|ν| is of less importance.
Similarly, Lemma 4.4 leads to the following theorem for ν = 0.
Theorem 4.7. There exists C > 0 such that for all ε, 0 < ε < 1/2, and δ, 0 < δ < 1/2, there exists d ≤ C log(1/δ) log(1/ε) log log(1/ε) and complex numbers α_1, …, α_d and β_1, …, β_d, depending on ε and δ, such that the function
U_{0,ε}(z) = iz − 1/2 + Σ_{n=1}^{d} α_n / (z − β_n)
approximates u_0(z) with the bound (4.55), provided that Im(z) ≥ δ. Furthermore (4.56) holds.
Proof. Again we already have (4.55) with u0(z)iz +1/2 on the right-hand side.
By (3.24) we find
log(1/)u0(z).
The theorem follows from the scaling log1(1/).
As we must take δ proportional to 1/T, we see that the number of poles required may grow with factors of log T and log log T in addition to log(1/ε). However, this is only for the n = 0 mode in the two-dimensional case. In short, the T dependence is insignificant in practice.
5. Computation of the rational representations. The analytical error bound estimates developed in the previous sections are based on maximum norm errors as in (2.19) and (2.20). In numerical computation it is often convenient, however, to obtain least squares solutions. Our method of computing a rational function U_{ν,ε} that satisfies (4.50) is to enforce (4.51). An alternative approach would be to use rational Chebyshev approximation as developed by Trefethen and Gutknecht [24, 25, 26].
In the numerical computations, we work with ũ_ν(z) = u_ν(z) − iz + 1/2 and its sum-of-poles approximation Ũ_{ν,ε}(z) = U_{ν,ε}(z) − iz + 1/2. In particular, we have the nonlinear least squares problem (5.2) of minimizing the weighted error between ũ_ν and P/Q over polynomials P, Q with deg(P) < deg(Q) = d. Problem (5.2) is not only nonlinear, but also very poorly conditioned when P, Q are represented in terms of their monomial coefficients. We apply two tactics for coping with these difficulties: linearization and orthogonalization.
We linearize the problem by starting with a good estimate of Q and updating P, Q iteratively. In particular, we solve the linear least squares problem (5.3), where the integral is replaced by a quadrature. The initial values P^{(0)}, Q^{(0)} are obtained by exploiting the asymptotic expansion (3.25) and the recurrence (3.28). We find that two to three iterations of (5.3) generally suffice.
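A hedged sketch of such a linearized iteration is given below; it uses a monomial basis and a monic normalization of Q, which are assumptions of the sketch (the paper instead orthogonalizes the basis, as described next), so it is adequate only for small degree d.

import numpy as np

def linearized_rational_fit(x, U, w, d, iters=3):
    # Fit U(x) ~ P(x)/Q(x), deg P <= d, deg Q = d with Q monic, by repeatedly solving
    # the weighted linear least-squares problem
    #   min  sum_i w_i |P(x_i) - U_i Q(x_i)|^2 / |Q_prev(x_i)|^2 .
    Qprev = np.ones_like(x, dtype=complex)
    V = np.vander(x, d + 1, increasing=True)          # columns 1, x, ..., x^d
    for _ in range(iters):
        s = np.sqrt(w) / np.abs(Qprev)
        # unknowns: d+1 coefficients of P and the d non-leading coefficients of Q
        A = np.hstack([V * s[:, None], -(U * s)[:, None] * V[:, :d]])
        b = (U * s) * x**d                             # monic leading term of Q moved to the rhs
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        p, q = coef[:d + 1], np.append(coef[d + 1:], 1.0)
        Qprev = V @ q
    return p, q

# toy usage: recover U(x) = (1 + 2x)/(3 + x) from samples
x = np.linspace(0.5, 4.0, 200).astype(complex)
U = (1.0 + 2.0 * x) / (3.0 + x)
p, q = linearized_rational_fit(x, U, np.ones(x.size), d=1)
print(np.round(p.real, 3), np.round(q.real, 3))   # approximately [1. 2.] and [3. 1.]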
The quadrature for (5.3) is derived by first changing variables, where θ_1, …, θ_m and w_1, …, w_m denote appropriate quadrature nodes and weights. The transformed integrand is periodic on the interval [−π/2, π/2], so the trapezoidal rule (or midpoint rule) is an obvious candidate. The integrand is infinitely continuously differentiable, except at the point corresponding to the branch point, where its regularity is of order 2|ν|. For |ν| > 8 (say), the trapezoidal rule delivers at least 16th-order convergence and is very effective. For small |ν|, however, a quadrature that adjusts for the complicated singularity there is needed. Here we can successively subdivide the interval near the singularity, applying high-order quadratures on each subinterval (see, for example, [27]).
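As a minimal sketch of this subdivision strategy (with a model integrand and panel counts chosen arbitrarily, not taken from the paper), the following integrates a function with an algebraic endpoint singularity by applying a Gauss-Legendre rule on dyadically refined panels.

import numpy as np

def dyadic_gauss(f, a, b, levels=30, nodes=10):
    # Integrate f on [a, b] when f is smooth except at the endpoint a, by splitting [a, b]
    # into dyadic panels [a + (b-a)/2**(k+1), a + (b-a)/2**k] and applying an
    # `nodes`-point Gauss-Legendre rule on each panel.
    x0, w0 = np.polynomial.legendre.leggauss(nodes)
    total, right = 0.0, b
    for k in range(levels):
        left = a + (b - a) / 2.0**(k + 1)
        x = 0.5 * (right - left) * x0 + 0.5 * (right + left)
        total += 0.5 * (right - left) * np.dot(w0, f(x))
        right = left
    return total

# model integrand with singular derivatives at x = 0; exact value is 1/1.3
val = dyadic_gauss(lambda x: x**0.3, 0.0, 1.0)
print(abs(val - 1.0 / 1.3))    # error of roughly 1e-12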
The quadrature discretization of (5.3) cannot be solved as a least squares problem by standard techniques, due to its extremely poor conditioning. We avoid forming the corresponding matrix; rather, we solve the least squares problem by Gram-Schmidt orthogonalization. The 2d basis functions appearing in (5.3) are orthogonalized under a real, quadrature-weighted inner product to obtain orthogonal functions, with recurrence coefficients c_nj. Now
Table 1. Number d of poles needed to represent the Laplace transforms of the nonreflecting boundary kernels (cylinder and sphere), for ranges of the mode index n and for ε = 10⁻⁶, 10⁻⁸, and 10⁻¹⁵.
so P^{(i+1)} and Q^{(i+1)} are computed from the recurrence coefficients c_nj by splitting into even- and odd-numbered parts.
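A minimal sketch of modified Gram-Schmidt under a discrete, quadrature-weighted real inner product is given below; the inner product, weights, and names are assumptions of this sketch rather than the paper's exact recurrence.

import numpy as np

def mgs_orthogonalize(B, w):
    # Modified Gram-Schmidt of the columns of B under the discrete real inner product
    #   <f, g> = Re( sum_i w_i * f_i * conj(g_i) ).
    # Returns the orthonormalized columns and the recurrence coefficients.
    B = B.astype(complex).copy()
    n = B.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        for i in range(j):
            c = np.real(np.sum(w * B[:, j] * np.conj(B[:, i])))
            C[i, j] = c
            B[:, j] -= c * B[:, i]
        nrm = np.sqrt(np.real(np.sum(w * np.abs(B[:, j])**2)))
        C[j, j] = nrm
        B[:, j] /= nrm
    return B, C

# toy usage: orthogonalize 1, x, x^2 on sample points with uniform weights
x = np.linspace(-1.0, 1.0, 101)
w = np.full_like(x, 2.0 / len(x))
Q, C = mgs_orthogonalize(np.vander(x, 3, increasing=True), w)
G = np.real(Q.conj().T @ (Q * w[:, None]))
print(np.max(np.abs(G - np.eye(3))))    # close to zero: columns are orthonormal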
For some applications, including nonreflecting boundary kernels, it is convenient to represent P/Q as a sum of poles,
P(z)/Q(z) = Σ_{n=1}^{d} ρ_n / (z − μ_n).
We compute μ_1, …, μ_d (the zeros of Q) by Newton iteration with zero suppression (see,
Table 2. Laplace transform of the cylinder kernel defined in (2.13), approximated as a sum of d poles, for n = 1, …, 4 and ε = 10⁻⁶; for each n the pole locations and pole coefficients are listed (real and imaginary parts).
for example, [28]) by the formula
z ← z − Q(z) / ( Q′(z) − Q(z) Σ_{j<n} 1/(z − μ_j) ),
where μ_1, …, μ_{n−1} are the previously computed zeros of Q. Then the coefficients ρ_1, …, ρ_d are computed by the formula ρ_n = P(μ_n)/Q′(μ_n). The derivative Q′(z) is obtained by differentiating the recurrence (5.7).
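The following is a hedged sketch of Newton iteration with zero suppression (often attributed to Maehly); the stopping tolerance and starting guesses are assumptions of the sketch.

import numpy as np

def roots_by_suppressed_newton(Q, dQ, d, guesses, iters=100):
    # Once mu_1, ..., mu_{k-1} are known, iterate
    #   z <- z - Q(z) / ( dQ(z) - Q(z) * sum_j 1/(z - mu_j) ),
    # which is Newton's method applied to Q(z) / prod_j (z - mu_j).
    found = []
    for z in guesses:
        z = complex(z)
        for _ in range(iters):
            s = sum(1.0 / (z - mu) for mu in found)
            step = Q(z) / (dQ(z) - Q(z) * s)
            z -= step
            if abs(step) < 1e-14 * (1.0 + abs(z)):
                break
        found.append(z)
        if len(found) == d:
            break
    return np.array(found)

# toy usage: the cubic (z-1)(z-2)(z-3) from identical crude starting guesses
Q  = lambda z: (z - 1.0) * (z - 2.0) * (z - 3.0)
dQ = lambda z: 3.0 * z**2 - 12.0 * z + 11.0
print(np.sort(roots_by_suppressed_newton(Q, dQ, 3, [0.5 + 0.1j] * 3).real))   # [1. 2. 3.]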
6. Numerical results. We have implemented the algorithm described in section 5 to compute the representations of the cylinder and sphere kernels through their Laplace transforms. Recall that for the cylinder kernels we have ν = n, while for the sphere kernels we have ν = n + 1/2. Table 1 presents the sizes of the representations for ε = 10⁻⁶, 10⁻⁸, and 10⁻¹⁵ in (4.51). For the cylinder kernels, which are affected by the branch cut, the number of poles for small n is higher than for the sphere kernels. This discrepancy, however, rapidly vanishes as n increases and the asymptotic performance ensues. The log(1/ε) dependence of the number of poles is also apparent.
For ε = 10⁻⁸ we have also computed the maximum norm relative errors which appear in (2.19) by sampling on a fine mesh. For the cylinder kernel with n = 0, we expect an O(1) error in a small interval about the origin due to (4.10). However, errors of less than ε are achieved for |s| > 5 × 10⁻⁷. This implies a similar accuracy in the approximation of the convolution for times of order 10⁶. For all other cases the maximum norm relative errors are of order ε.
Finally, Table 2 presents poles and coefficients for the cylinder kernels for n = 1, …, 4 and ε = 10⁻⁶ to allow comparison by a reader interested in repeating our calculations. Note that the pole locations are given in the scaled transform variable. Extensive tables will be made available on the Web at http://math.nist.gov/mcsd/Staff/BAlpert.
Remark. Our approximate representation of the nonreflecting boundary kernel could be used to reduce the cost of the method introduced by Grote and Keller [8]. The differential operators of degree n obtained in their derivation need only be replaced by the corresponding differential operators of degree log n for any specified accuracy. It is interesting to note that in the two-dimensional case, where the approach of [8] does not apply, the analysis described above can be used to derive an integrodifferential formulation in the same spirit.
7. Summary. In this paper we have introduced new representations for the logarithmic derivative of a Hankel function of real order, that scale in size as the logarithm of the order. An algorithm to compute the representations was presented, and our numerical results demonstrate that the new representations are modest in size for orders and accuracies likely to be of practical interest.
The present motivation for this work is the numerical modeling of nonreflecting boundaries for the wave equation, discussed briefly here and in more detail in [18]. Maxwell's equations are also susceptible to similar treatment, as outlined in [29]. The new representations make the application of the exact nonreflecting boundary conditions, which are global in space and time, computationally effective.
8. Appendix: Stability of exact and approximate conditions. In this appendix, we consider the stability of our approach to the design of nonreflecting boundary conditions. Given that we are approximating the exact conditions uniformly, it is natural to expect that our approximations possess similar stability characteristics. This is, indeed, the case. Oddly enough, however, the exact boundary conditions themselves do not satisfy the uniform Kreiss-Lopatinski conditions which are necessary and sufficient for strong well-posedness in the usual sense [30]. This may seem paradoxical since the unbounded domain problem itself is strongly well-posed. The difficulty is that the exact reduction of an unbounded domain problem to a bounded domain problem gives rise to forcings (inhomogeneous boundary terms) which live in a restricted subspace. The Kreiss-Lopatinski conditions, on the other hand, require bounds for arbitrary forcings. In that setting, our best estimates result in the loss of 1/3 of a derivative in terms of Sobolev norms. In practice we doubt that this fact is of any significance, and have certainly encountered no stability problems in our long time numerical simulations.
To fill in some of the details, consider a spherical domain of radius one, within
which the homogeneous wave equation with homogeneous initial data is satisfied. At
the boundary we have
unm skn
r
for the exact condition and is uniformly small when we use our approx-
imations. Here gnm is the spherical harmonic transform of an arbitrary forcing g.
Following Sakamoto, we seek to estimate
where
while ‖·‖_0 denotes the usual L2 norm. On the boundary we will make use of fractional Sobolev norms, most easily defined in terms of the spherical harmonic coefficients:
Strong well-posedness would follow from showing that
g(,t)20,dt.Instead, we can show that
dt.
1/3,To prove this, let .Bounded solutions within the sphere are given by
Precisely, setting
we find
where
(1)
We now estimate norms of the solution. First note that the products in the definition
(1) (1)
of n, J(z)H (z), zJ(z)H (z), are uniformly bounded for Im(z) 0. (See the
limits z → 0, z → ∞, and ν → ∞.) Therefore, as mentioned above, the error term, so long as it is small, has no effect on the estimates we derive, and we simply ignore it. That is, we set the corresponding factor equal to 1.
We concentrate on the boundary terms in H, as they are both the most straight-forward
to compute and the most ill behaved. In transform space we have
(Here and throughout, c will denote a positive constant independent of all variables.)
We first note that as the only singularities of Bessel functions occur at zero and
infinity, we need only consider the limits z 0, z , and . The first two
are straightforward:
For large we use the uniform asymptotic expansions of Bessel functions due to Olver
which yield
z
From Parsevals relation, we conclude that
g(,t)1/3,dt.
The estimation of the spatial integrals is more involved, as for r<1 the solution has
two transition zones, z and rz , and there are a number of cases to consider.
However, the estimates follow along the same lines and lead to the same result.
It is interesting to note that the loss-of-derivative phenomenon is suppressed when
one looks at the error due to the approximation of the boundary condition. In that
case the transform of the exact solution near the boundary is
(1)
hn (rz)
(1)
hn (z)
so that the error, e, satisfies the problem above with gnm given by
(1)
zhn (z)
(1)
hn (z)
Now the best estimate of n takes the form
which, in combination with (8.6), would lead to an estimate of the 1-norms of the
error in terms of the 4/3-norms of the solution. However, using again the large
asymptotics, a direct calculation shows
Thus n is smaller than its maximum by O(1/3) in the transition region where
O(1/3). Hence we find for the error
In other words, the 1-norms of the error are controlled by the 1-norms of the solution.
We have, of course, ignored discretization error, which could conceivably cause difficulties in association with the lack of strong well-posedness. To rule them out would require a more detailed analysis. In practice we have encountered no difficulties, even for very long time simulations. We should also note that strong well-posedness could be artificially recovered by perturbing the approximate conditions for large n, allowing high accuracy to be maintained for smooth solutions. Finally, we note that a similar analysis can be carried out in two dimensions.
--R
On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions
On high-order radiation boundary conditions
Artificial boundary conditions of absolute transparency for two- and three-dimensional external time-dependent scattering problems
Fast discrete polynomial transforms with applications to data analysis for distance transitive graphs
A fast transform for spherical harmonics
Modulus and phase of the reduced logarithmic derivative of the Hankel function
Handbook of Mathematical Functions
Nonreflecting Boundary Conditions for the Time-Dependent Wave Equation
Asymptotics and Special Functions
The asymptotic expansion of bessel functions of large order
The asymptotic solution of linear di
A fast algorithm for particle simulations
An implemenation of the fast multipole method without multipoles
Rational Chebyshev approximation on the unit disk
Real and complex Chebyshev approximation on the unit disk and interval
Rational Carath
On the Numerical Solution of One-Dimensional Integral and Dierential Equa- tions
Introduction to Numerical Analysis
Accurate boundary treatments for Maxwell
Hyperbolic Boundary Value Problems
--TR
--CTR
Laurence Halpern , Olivier Lafitte, Dirichlet to Neumann map for domains with corners and approximate boundary conditions, Journal of Computational and Applied Mathematics, v.204 n.2, p.505-514, July, 2007
Marcus J. Grote , Christoph Kirsch, Nonreflecting boundary condition for time-dependent multiple scattering, Journal of Computational Physics, v.221 n.1, p.41-62, January, 2007
Andreas Atle , Bjorn Engquist, On surface radiation conditions for high-frequency wave scattering, Journal of Computational and Applied Mathematics, v.204 n.2, p.306-316, July, 2007
Bjrn Sjgreen , N. Anders Petersson, Perfectly matched layers for Maxwell's equations in second order formulation, Journal of Computational Physics, v.209 n.1, p.19-46, 10 October 2005
John Visher , Stephen Wandzura , Amanda White, Stable, high-order discretization for evolution of the wave equation in 1 Journal of Computational Physics, v.194 n.2, p.395-408, March 2004
Bradley Alpert , Leslie Greengard , Thomas Hagstrom, Nonreflecting boundary conditions for the time-dependent wave equation, Journal of Computational Physics, v.180 n.1, p.270-296, July 20, 2002
Isaas Alonso-Mallo , Nuria Reguera, Discrete absorbing boundary conditions for Schrdinger-type equations: practical implementation, Mathematics of Computation, v.73 n.245, p.127-142, January 2004
Jing-Rebecca Li , Leslie Greengard, High order marching schemes for the wave equation in complex geometry, Journal of Computational Physics, v.198 n.1, p.295-309, 20 July 2004
S. V. Tsynkov, Artificial boundary conditions for the numerical simulation of unsteady acoustic waves, Journal of Computational Physics, v.189 August
Stephen R. Lau, Rapid evaluation of radiation boundary kernels for time-domain wave propagation on blackholes: theory and numerical methods, Journal of Computational Physics, v.199 n.1, p.376-422, 1 September 2004
S. I. Hariharan , Scott Sawyer , D. Dane Quinn, A Laplace transorm/potential-theoretic method for acoustic propagation in subsonic flows, Journal of Computational Physics, v.185 n.1, p.252-270, 10 February
Dan Givoli , Beny Neta, High-order non-reflecting boundary scheme for time-dependent waves, Journal of Computational Physics, v.186 n.1, p.24-46, 20 March | high-order convergence;wave equation;approximation;radiation boundary condition;nonreflecting boundary condition;absorbing boundary condition;maxwell's equations;bessel function |
349174 | Convergence Rates for Relaxation Schemes Approximating Conservation Laws. | In this paper, we prove a global error estimate for a relaxation scheme approximating scalar conservation laws. To this end, we decompose the error into a relaxation error and a discretization error. Including an initial error $\omega(\epsilon)$ we obtain the rate of convergence of $\sqrt{\epsilon}$ in L1 for the relaxation step. The estimate here is independent of the type of nonlinearity. In the discretization step a convergence rate of $\sqrt{\Delta x}$ in L1 is obtained. These rates are independent of the choice of initial error $\omega(\epsilon)$. Thereby, we obtain the order 1/2 for the total error. | Introduction
Let u = u(x, t) be the unique global entropy solution in the sense of Kružkov [11] to the Cauchy problem for the conservation law
(1.1)   u_t + f(u)_x = 0,   x ∈ IR,  t > 0,
with initial data
(1.2)   u(x, 0) = u_0(x),   u_0 ∈ L^1(IR) ∩ BV(IR).
The solution u satisfies Kružkov's entropy conditions
IAN, Otto-von-Guericke-Universität Magdeburg, PSF 4120, D-39016 Magdeburg, Germany. Supported by an Alexander von Humboldt Fellowship at the Otto-von-Guericke-Universität Magdeburg. Email: hailiang.liu@mathematik.uni-magdeburg.de
† IAN, Otto-von-Guericke-Universität Magdeburg, PSF 4120, D-39016 Magdeburg, Germany. Supported by the Deutsche Forschungsgemeinschaft (DFG), Wa 633 4/2, 7/1. Email: gerald.warnecke@mathematik.uni-magdeburg.de
We are considering here a relaxation scheme, proposed by Jin and Xin [10], to compute the entropy solution of (1.1) using a small relaxation rate ε. Our main purpose is to study the convergence rate of the relaxation scheme to the conservation law as both the relaxation rate ε and the mesh length Δx tend to zero.
The relaxation model takes the form
(1.4)   u^ε_t + v^ε_x = 0,   v^ε_t + a u^ε_x = (f(u^ε) − v^ε)/ε,   x ∈ IR,  t > 0.
The variables u^ε and v^ε are the unknowns, ε > 0 is referred to as the relaxation rate, and a is a positive constant. The system (1.4) was introduced by Jin and Xin [10] as a new way of regularizing hyperbolic systems of the same kind as the scalar equation (1.1). It is also the basis for the construction of relaxation schemes.
In fact, for small ε, using the Chapman-Enskog expansion [4], one may deduce from (1.4) the following convection-diffusion equation
u^ε_t + f(u^ε)_x = ε ( (a − f′(u^ε)²) u^ε_x )_x,
see [10], which gives a viscosity solution to the conservation law (1.1) if the well known subcharacteristic condition (cf. Liu [16]) holds:
−√a ≤ f′(u) ≤ √a.
Natalini [20] proved that the solutions to (1.4) converge strongly to the unique entropy solution of (1.1) as ε → 0. Thus the system (1.4) provides a natural way to regularize the scalar equation (1.1). This is in analogy to the regularization of the Euler equations by the Boltzmann equation, see Cercignani [5].
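As a quick illustration (using the Burgers flux f(u) = u²/2, which is an assumption of this sketch and not taken from the paper), the diffusion coefficient of the above convection-diffusion equation is nonnegative on a state interval exactly when the subcharacteristic condition holds there.

import numpy as np

# Burgers flux f(u) = u**2/2, so f'(u) = u.  The Chapman-Enskog diffusion coefficient
# eps*(a - f'(u)**2) is nonnegative on [-rho, rho] precisely when a >= rho**2.
a, rho = 4.0, 1.5
u = np.linspace(-rho, rho, 1001)
diffusion = a - u**2                          # coefficient divided by eps
print(diffusion.min() >= 0.0, a >= rho**2)    # both True: subcharacteristic condition holds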
Consider the grid sizes Δx, Δt in space and time as well as, for n ∈ IN, j ∈ ZZ, a numerical approximate solution (u^n_j, v^n_j) ≈ (u^ε, v^ε)(jΔx, nΔt). The relaxation scheme associated with the relaxation system (1.4) is given as (1.5)-(1.6), where λ = Δt/Δx is the mesh ratio and the remaining parameters are built from √a λ and Δt/ε. We refer to Aregba-Driollet and Natalini [1] for a class of fractional-step relaxation schemes for the relaxation system (1.4). The first order scheme proposed there can be easily rewritten in the form of (1.6), where the homogeneous (linear) part is treated by some monotone scheme and then the source term can be solved exactly due to its particular structure. Throughout this paper we assume the usual CFL condition
(1.7)   √a Δt/Δx ≤ 1.
Convergence theory for this kind of relaxation scheme can be found in Aregba-Driollet and Natalini [1], Wang and Warnecke [31] and Yong [32]. Based on proper total variation bounds on the approximate solutions, independent of ε, convergence of a subsequence of the approximations to the unique weak solution of (1.1) was established by standard compactness arguments.
Currently, there are only very few computational results for relaxation schemes available
in the literature, see e.g. Jin and Xin [10], who introduced these schemes, as well as Aregba-
Driollet and Natalini [2]. Therefore, it is very hard to tell how useful they may be for
practical computations in the future. The main advantages of these schemes are that they
neither require the use of Riemann solvers nor the computation of nonlinear flux Jacobians.
This seems to become an important advantage when considering fluids with non-standard
equations of state, e.g. in multiphase mixtures. Note also that an extension of our results
to second order schemes seems easily feasible since we are only making use of the TVD
property which can be achieved using flux limiters, see Jin and Xin [10].
The relaxation approximation to conservation laws is in spirit close to the description of
the hydrodynamic equations by the detailed microscopic evolution of gases in kinetic theory.
The rigorous theory of kinetic approximation for solutions with shocks is well developed
when the limit equation is scalar. For works using the continuous velocity kinetic approxi-
mation, see Giga and Miyakawa [9], Lions, Perthame and Tadmor [17] and Perthame and
Tadmor [19], for discrete velocity approximation of entropy solutions to multidimensional
scalar conservation laws see Natalini [21], also Katsoulakis and Tzavaras [13]. Based on a
discrete kinetic approximation for multidimensional systems of conservation laws [21], the
authors in [2] constructed a class of relaxation schemes approximating the scalar conservation
laws.
It was pointed out by Natalini [21] that the relaxation system (1.4) can be rewritten into
the two velocities "kinetic" formulation by just setting defining
the Riemann invariants
a
a
and the Maxwellians
a
a
The relaxation model (1.4) becomes
@ t R ffl
2: (1.8)
The relaxation rate plays the role of the mean free path in kinetic theories. Indeed the
system (1.8) provides more insight into the properties of the relaxation system. In our
investigation of the convergence rates we will use this formulation for the relaxation model
as well as for the corresponding relaxation scheme.
The main goal of this paper is to improve on the previous convergence results, see
Aregba-Driollet and Natalini [1], Wang and Warnecke [31] and Yong [32] for the relaxation
scheme (1.6) by looking at the accuracy of the relaxation scheme for solving the conservation
law (1.1). We do this here by studying the error of approximation
between the exact
solution u and the numerical solution u ffl
measured in the L 1 norm. The parameters ffl and
\Deltax determine the scale of approximation and converge to zero as the scales become finer.
We shall call the order of this error in these parameters the convergence rate of the numerical
solution generated by the relaxation scheme.
To make this point precise, we choose the initial data for (1.4) as
u^ε(x, 0) = u_0(x),   v^ε(x, 0) = f(u_0(x)) + K(x) ω(ε),
with K continuous and ω(ε) → 0 as ε → 0; that is, we allow for an initial error K(x)ω(ε) instead of v^ε(x, 0) = f(u_0(x)), because we want to see the contribution of this initial error to the global error. We mention that it is possible to consider perturbed data in the u-component, but then in the final result an initial error ‖u^ε(·, 0) − u_0‖ would persist in time and may prevent the convergence of u^ε to the entropy solution, as shown in Theorem 3.4. However, the initial error in the v-component persists only for a short time of order ε, and thereby it does not prevent the convergence of u^ε.
We initialize the relaxation scheme (1.6) by cell averaging the initial data (u^ε_0, v^ε_0) in the usual way,
(u^0_j, v^0_j) = (1/Δx) ∫ χ_j(x) (u^ε_0(x), v^ε_0(x)) dx.
Here and elsewhere ∫ without integration limits denotes the integral over the whole of IR, and χ_j(x) denotes the indicator function χ_j(x) := 1_{{|x − jΔx| ≤ Δx/2}}.
Let us now introduce some notations. The L 1 -norm is denoted by k
denotes the total variation, defined on a
subset\Omega ' IR by
dx:
The BV-norm is defined as
For grid functions the total variation is defined by
denotes the discrete l 1 -norm
Taking initial data (1.9), we summarize our main convergence rate result by stating
Theorem 1.1. Take any T > 0 and let, for a suitable N ∈ IN and time step Δt, the relation T = NΔt be satisfied. Further let u be the entropy solution of (1.1)-(1.2) with initial data u_0 in L^1(IR) ∩ BV(IR), and let u_Δ be the piecewise constant representation on IR × [0, T] of the approximate solution (u^n_j, v^n_j) generated by the relaxation scheme with initial data satisfying (H_1) and (1.9). Then, for fixed Δt/Δx satisfying the CFL condition (1.7), there exists a constant C_T, independent of Δx, Δt and ε, such that
‖u(·, T) − u_Δ(·, T)‖_{L^1(IR)} ≤ C_T ( √ε + √Δx ).
Theorem 1.1 suggests that the accumulation of error comes from two sources: the
relaxation error and the discretization error. The theorem will be a consequence of Theorem
2.2, giving a rate of convergence to the unique entropy solution of (1.1) in the relaxation
step of the solutions to the relaxed system (1.4), as well as Theorem 2.3, giving
a discretization error bound for the relaxation scheme (1.6) as an approximation to the
relaxation system (1.4).
It may be helpful, at the outset, to explain the structure of the proof. Consider that the
relaxation scheme was designed through two steps: the relaxation step and the discretization
step. Our basic idea is to investigate the error bound of the two steps separately and
then the total convergence rate by combining the relaxation error and discretization error.
The basic assumptions and the error bounds of the two steps will be given in detail in
Section 2.
We split the error e^ε_Δ = u − u^ε_Δ into a relaxation error e^ε = u − u^ε, of order √ε, and a discretization error e^Δ = u^ε − u^ε_Δ, of order √Δx; i.e., we have the decomposition e^ε_Δ = e^ε + e^Δ with ‖e^ε_Δ‖_{L^1} ≤ ‖e^ε‖_{L^1} + ‖e^Δ‖_{L^1}.
In order to get the desired approximate entropy inequality, we work with the reformulated system (1.8) in place of the original system (1.4).
We would like to mention that an analogous result for a class of relaxation systems was
already obtained by Kurganov and Tadmor [12] by using the Lip 0 -framework initiated by
Nessyahu and Tadmor [22]. But their argument uses the convexity of the flux function.
For the case of a possibly nonconvex flux function f , our work uses Kuznetzov-type error
estimates, see [15] and [3]. Recently, Teng [29] proved the first order convergence rate for
piecewise smooth solutions with finitely many discontinuities with the assumption of convex
fluxes f(u). Based on Teng's result, Tadmor and Tang [28] provided the optimal pointwise
convergence rate for the relaxation approximation to convex scalar conservation laws with
piecewise solutions. They use an innovative idea that they introduced in their paper [27]
which enables them to convert a global L 1 error estimate into a local error estimate.
In the discretization step the same error bound was obtained by Schroll, Tveito and
Winther [25] for a model that arises in chromatography, their argument is in the spirit of
Kuznetsov [15] and Lucier [18]. The results in [25] rely on the assumption that the initial
data are close to an equilibrium state of order ffl, i.e. Our result shows that the
uniform estimate does not depend on !(ffl), which is more natural since in the discretization
step ffl is kept a constant. Taking discretization step, we immediately recover
the optimal convergence rate of order 1=2 for monotone schemes, see Tang and Teng [26],
Sabac [23]. We thank a referee for pointing out the possibility of an extension of the present
arguments to multi-speeds kinetic schemes introduced in [2], even in the multidimensional
case [21].
The paper is organized as follows. In Section 2 we state the assumptions on the system
and recall properties of the solutions to (1.4) and the scheme (1.5)-(1.6). The main
results on the relaxation error and discretization error are given. Their proofs are presented
in Section 3 and Section 4, respectively. The authors thank a referee for bringing the paper
[14] to their attention. In that paper, a discrete version of the theorem by Bouchut and
Perthame [3], see Theorem 3.3 in Section 3, is established and convergence rates for some
relaxation schemes based on the relaxation approximation proposed in [13] were considered.
Preliminaries and Main Theorems
In this section we review some assumptions and the analytic results concerning the relaxation
model and relaxation schemes. After further preparing the initial data, we state our
main theorems.
Let us first recall some results obtained by Natalini [20] concerning the analytical properties
of the problem (1.4) with the specific initial data (u ffl
0 ), which will be of use in our
error analysis. Let us make the following assumptions:
the flux function f is a C 1 function with
data satisfy
there exist constants
not depending on ffl such that
sup
ffl?0
ffl?0
and for the flux function f as well as K given in (H 1 )
x6=y
us define, for any ae ? 0, the notations
F (ae) := sup
and
Theorem 2.1. (Natalini [20]) Assume
a
then the system (1.4) admits a unique, global solution
loc
isfying
for all ffl ? 0 and for almost every (x; t) 2 IR\Theta]0; 1[.
We refer to Natalini [20] for detailed discussions on the existence, uniqueness and convergence
of solutions to the relaxation model (1.4).
Equipped with assumptions in (H 1 )-(H 3 ), it has been proved that, as
solution sequence to (1.4) converges strongly to the unique entropy solution of (1.1)-(1.2),
see Natalini [20]. We will study the convergence rate in Section 3, our main result on the
limit ffl # 0 is summarized in the following theorem.
Theorem 2.2. Consider the system (1.4), subject to L 1
data satisfying (H 1 )-(H 3 ). Then the global solution converges to (u; f(u)) as ffl # 0
and the following error estimates hold,
Thus, (2.5) reflects two sources of error: the initial contribution of size !(ffl) and the
relaxation error of order ffl. It should be mentioned that, however, the effect of initial
contribution persists only for a short time of order ffl, beyond this time the nonequilibrium
solution approaches a state close to equilibrium at an exponential rate. Note that the proof
of estimate (2.5) is included in the proof of Lemma 3.2 below. The proof of (2.4) will follow
from Theorem 3.4.
Now we turn to the formulation of the relaxation schemes.
In order to approximate the solutions of the initial value problem (1.4), we first discretize
the initial data (u ffl
To make this more precise, we
denote a family of approximate solutions given as piecewise constant functions, dropping
the superscript ffl for notational convenience, by
represents the number of time steps performed. For the initial conditions
we take the orthogonal projection of the initial data (u ffl
the space of
piecewise constant functions on the given grid
\Deltax
Z
\Deltax
Z
Thus it follows from (H 1 that the discrete initial data satisfy
(a)
(b)
(c)
The grid parameters \Deltax and \Deltat are assumed to satisfy \Deltat
\Deltax
Const. We note that, since
assumed constant, \Deltax ! 0 implies \Deltat ! 0 as well.
It is well known that the projection error is of order \Deltax. More precisely, we have
As was shown by Aregba-Driollet and Natalini [1] as well as Wang and Warnecke [31], for
a large enough constant a a uniform bound for the numerical approximations given by the
scheme (1.6) can be found. Precisely, there exists a positive constant M(ae 0 ) such that if
a
then the numerical solution satisfies
ae
a
oe
where B(ae 0 ) is a constant depending only on ae 0 .
Starting with the discrete initial data satisfying (2.9)-(2.11), by using the Riemann
invariants, the TVD bound of the approximate solutions of (1.6) was proved previously
by Aregba-Driollet and Natalini [1], Wang and Warnecke [31] and Yong [32]. By Helly's
compactness theorem, the piecewise constant approximate solution (u
converges strongly to the unique limit solution (u; v)(x; t n ) as we refine
the grid taking \Deltax # 0. This, together with equi-continuity in time and the Lax-Wendroff
theorem, yields a weak solution of the relaxation system (1.4). We note that the
initial bounds (2.9)-(2.11) still hold for the piecewise constant numerical data (u 0
which
We refer to [1], [31] and [32] for the convergence of approximate solutions (u
the unique weak solution of (1.4). Our goal here is to improve the previous convergence
theory by establishing the following L 1 -error bound of the relaxation scheme. This theorem
will be proved in Section 4.
Theorem 2.3. Let be the weak solution of (1.4) with initial data (u ffl
be a piecewise constant representation of the data (u N
generated by (1.6)
starting with (u 0
fixed there is a finite constant C T
independent of \Deltax, \Deltat and ffl such that
\Deltax:
Remark 1. Our uniform error bound is independent of the relaxation parameter ffl and
initial error !(ffl). This is more general than the result obtained by [25] assuming the initial
error !(ffl) to be ffl. 2
As remarked earlier, the error is split into a relaxation error e ffl and a discretization
error e \Delta , combining the two errors we arrive at the desired total error for the scheme (1.6)
as stated in Theorem 1.1.
3 Relaxation Error
In this section we establish Kuznetzov-type error estimates for the approximation of the
entropy solution of the scalar conservation law
by solutions of the relaxation system
For the above purpose we have to show that u ffl satisfies an approximate entropy con-
dition. Let us begin with rewriting (3.2) in terms of the Riemann invariants (R ffl
Maxwellians defined in the introduction and recall some basic facts that
will be used in our analysis. It is easy to see that
a
Then the system is rewritten as (1.8), i.e.,
@ t R ffl
\Theta
where the functions M i (u) have the following propertiesX
We note that the L 1 estimate obtained by Natalini [20] implies that
ae
a
oe
Further, we set
I ae 0
g:
The M i (u) are monotone (non-decreasing) functions of
because due to the subcharacteristic
condition
d
du
a
0:
Thus we have for any u ffl and ~
in I ae 0X
Starting with the initial distribution
a
a
the model evolves according to the system (1.8), which is well-posed. In fact, we rewrite
(1.8) in the form
@ t R ffl
2: (3.8)
From (3.8) one obtains the L 1 -contraction property using a Gronwall inequality, see [21],
which isX
As shown in [20], the above nice property for general data of bounded variation yields the
following
Lemma 3.1. For any ae 0 ? 0 and ffl ? 0, if
a
then the global solution (R ffl
loc of the problem (1.8), (3.7) for any
satisfies the entropy-type inequality
- 0; in D
Proof. See Natalini [21, Proposition 3.8].
Before establishing the desired convergence rate in Theorem 2.2, we first need a Lemma
giving a bound on the distance of (R ffl
2 ) from equilibrium. This bound is actually equivalent
to the estimate (2.5). The following lemma is a generalization of Natalini [20,
Proposition 4.7].
Lemma 3.2. Suppose that the assumptions (H 1 be a solution of
with initial data (u ffl
holds that
Proof. In view of the relations between (R ffl
which serves as a measure of the distance from equilibrium. Let - then the
function - ffl satisfies
x
for data
Then multiplying by sgn(- ffl )e t
ffl and integrating over IR \Theta [0; t] one gets
Z
Z
Z tZ
e
\Gamma(t\Gammas)
x jdxds:
Note that by (3.10) we have for sufficiently regular initial data
For example one could use C 1 data in combination with an approximation theorem in
BV (IR), see the Theorem 1.17 in Giusti [8]. Combining the above facts, for general data
of bounded variation, gives
Together with (H 3 ) and (3.12), this estimate implies the result as asserted.
Equipped with Lemmas 3.1-3.2, now we turn to the proof of our main result.
Our further analysis uses a result by Bouchut and Perthame [3, Theorem 2.1]. We first
state in a less general form their central result.
Theorem 3.3. (Bouchut and Perthame) Let u; v 2 L 1
loc
loc
(IR)) be right continuous
with values in L 1
loc (IR). Assume that u satisfies the entropy condition (1.3) and v
satisfies, for all k 2 IR,
are locally finite Radon measures such that for some nonnegative k-independent
Radon measures ff J and ff H 2 L 1
loc ([0; 1[; L 1
loc (IR)), satisfy in the sense of measures
Then for any T ? 0; x and the balls
we have
Z
Z
where C is a uniform constant and
sup
Z
ff J (x; t)dx;
Z TZ
Based on this general result we will prove
Theorem 3.4. Under the assumptions of Lemma 3.2. Let -
u be the entropy solution of
(1.1) with initial data - u 0 (x). Then, for any fixed T ? 0 and all t - T ,
ffl:
Here, C is a positive constant depending on the flux function f and the L 1 -norm of the
data
Proof. Since M i (u) is monotone for we have by (3.6)
where
The term J ffl is bounded from above by
due to the inverse triangle
inequality. Let
ff J =X
then we have jJ ffl j - ff J and by Lemma 3.2
loc ([0; 1[; L 1
loc (IR)) with kff J
Setting
we similarly get using (3.4)
Due to the monotonicity property of M i , we have
Using this fact and the inverse triangle inequality one obtains
Therefore, we also have
loc ([0; 1); L 1
loc
Combining the previous expressions (3.15) and (3.17) yields
\Theta
However by Lemma 3.1 the Riemann invariants (R ffl
the entropy-type inequalities
consequently one obtains
\Theta
The functions J are bounded by the L 1 -functions ff J , ff H respectively as required in
(3.14). Since the solutions
are bounded and the flux function f is assumed to be
Lipschitz, we may apply Lemma 3.3. Letting ae !1, Lemma 3.3 gives
Z
Z TZ
a
Z
to minimize the right hand side of (3.19), then
we have by choosing a suitably large constant C T , including k-u 0 k BV , that
r
sup
Note that the estimate in Lemma 3.2 yields
If !(ffl) - ffl, the above in combination with (3.20) yields Theorem 3.4.
Next we treat the case !(ffl) ? ffl. To obtain the desired estimate we again apply Lemma
3.3 on the interval [-; T ] with - ? 0 to be determined, thus we have as above
r
sup
Using the fact that both u ffl and -
u lie in a bounded subset in Lip(IR
and Tzavaras [13] and Smoller [24], we have
It follows from the estimate in Lemma 3.2 that
Taking applying (3.22) and (3.23) into (3.21) one gets
choosing easily recover the order 1=2 estimate from (3.24) and
ffl:
This completes the proof.
4 Discretization Error
The purpose of this section is to derive the error estimate given in Theorem 2.3. Let us
define the computational cells as
I
We will prove that the error bound for the relaxation scheme (1.6) approximating the
relaxation system (1.4) is of order
\Deltax in L 1 , without requiring that the initial data be
close to equilibrium.
To this end, let us rewrite the relaxation scheme as a splitting or fractional step method
in terms of the Riemann invariants (R ffl
Dropping the superscript ffl and noting that
ffl , the scheme takes the form
R n+1
for the source terms while the intermediate states (R n+1=2
2;j are generated by the
following consistent monotone scheme in conservative form for the convections, namely the
upwind scheme,
R n+1=2
R n+1=2
The discrete values (R n
computed from (4.1) are considered as approximations of
i in the whole cell I j at time t n\Deltat, which can also be obtained through (u n
generated by (1.6). The discrete variables are related to each other by
a
a
and
a
a
Conversely
can also be expressed as
For the initial conditions R 0
i;j we take the orthogonal projection of the initial data R ffl
onto the space of piecewise constant functions on the given grid. It is given by the averages
\Deltax
Z
for each integer j.
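For concreteness, the following is a minimal sketch of this splitting: upwind transport of the Riemann invariants followed by the implicit, explicitly solvable relaxation step. The Maxwellians M_{1,2}(u) = (√a u ∓ f(u))/(2√a), the assignment of the speeds ∓√a to R_1 and R_2, the equilibrium initial data, the periodic boundary conditions, and the Burgers flux used in the demonstration are all assumptions of this sketch.

import numpy as np

def jin_xin_relaxation(u0, f, a, dx, dt, eps, steps):
    # Relaxation scheme in Riemann-invariant form:
    #   transport:  R1 moves with speed -sqrt(a), R2 with speed +sqrt(a) (first-order upwind);
    #   relaxation: implicit source step, explicitly solvable because M1 + M2 = u is conserved.
    sa = np.sqrt(a)
    lam = dt / dx
    M1 = lambda u: (sa * u - f(u)) / (2.0 * sa)
    M2 = lambda u: (sa * u + f(u)) / (2.0 * sa)
    R1, R2 = M1(u0), M2(u0)                       # equilibrium initial data: v0 = f(u0)
    for _ in range(steps):
        # upwind transport step
        R1 = R1 + sa * lam * (np.roll(R1, -1) - R1)   # speed -sqrt(a): use the right neighbor
        R2 = R2 - sa * lam * (R2 - np.roll(R2, 1))    # speed +sqrt(a): use the left neighbor
        # implicit relaxation step; u = R1 + R2 is unchanged by it
        u = R1 + R2
        R1 = (R1 + (dt / eps) * M1(u)) / (1.0 + dt / eps)
        R2 = (R2 + (dt / eps) * M2(u)) / (1.0 + dt / eps)
    return R1 + R2

# demonstration: Burgers flux on a periodic grid, with sqrt(a)*dt/dx <= 1 (CFL)
N, a = 200, 2.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = jin_xin_relaxation(np.sin(2 * np.pi * x), lambda u: 0.5 * u**2,
                       a, 1.0 / N, 0.4 / N, 1e-6, steps=100)
print(u.min(), u.max())    # approximate bounds of the computed solution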
In the previous studies concerning relaxation schemes, some important properties for
the numerical scheme were obtained through investigating the reformulated scheme using
the Riemann invariants. These properties include: the L 1 boundedness, the TVD property
and the L 1 continuity in time. Here we also rewrite the relaxation scheme in terms of the
Riemann invariants because they provide more insight into the convergence behavior of the
scheme. The above properties for our scheme are summarized as follows and will be used
in the later error analysis.
Lemma 4.1. Suppose that the initial conditions (u ffl
are bounded and of bounded vari-
ation, both uniformly with respect to ffl. Further, assume that the initial data also satisfy
i.e. as a consequence
then there exists a constant C 0 , independent of ffl and \Deltat such that for
ffl and K ae 0
as
defined by (3.5)
(a) (R n
; for all (j; n) 2 ZZ \Theta IN;
(b)
(R n
(c) kM i
a \Deltat
(d)
Proof. The proof of (a) can be found in Wang and Warnecke [31]. The proof of (b) and
(d) is straightforward, see Aregba-Driollet and Natalini [1] and Yong [32]. Since here we
choose initial data satisfying (H 1 ) instead of v 0
(c) as follows.
It follows from (4.3) and (4.4) that Z n
Summation of the scheme
and noting that u n
i;j , we obtain
Again let us consider (4.1) for
R n+1
Adding \GammaM 1 (u n+1
j ) on both sides gives
\GammaZ n+1
by which we obtain
Using the first equation of scheme (4.2) and the definition of Z n
Z n+1=2
Then by the Mean Value Theorem there exists a value ~
between u n+1=2
and u n
such that for 0 -
due to the monotonicity property of M 1 (u). Substituting (4.9) into (4.8), then using the
relation (4.5) and the scheme (4.2) gives
Z n+1=2
By summation over j and multiplying by \Deltax one obtains from (4.7) and using (b)
a-\DeltaxX
(R n
a\Deltat: (4.10)
From (4.10) it is easy to verify by iteration and the geometric sum that
a
\Deltat
\Gamman ]:
This completes the proof of Lemma 4.1.
In the error analysis, we have to consider the finite difference solution as a function
defined in the whole upper-half plane, and t - 0, not only at the mesh points. There are
very simple ways of constructing a step function corresponding the grid function. For the
analysis we want to construct a different kind of approximate function that automatically
satisfies a convenient form of an entropy inequality. For this purpose we use the well known
interpretation of the upwind scheme (4.2) as the Godunov scheme. To accomplish this, we
construct a family of functions iteratively. Using piecewise constant data for the averages
R i this construction is actually equivalent to taking a characteristic scheme since we have
linear transport equations, see Childs and Morton [6], followed by an implicit step like (4.1)
for the source term. This is like a splitting method using the Godunov operator splitting.
We follow the approach used by Schroll et al. [25]. For notational clarity, we will use the
variables (y; -) instead of (x; t) in the context below.
The iteration is initialized by
Z
defined as the characteristic function for the interval I j .
(i) We use the notation t \Sigma
n to denote limits from above or below. In the interval (t
R i is the solution of the linear equation
with initial data R i
(ii) At t n+1 we project back onto the mesh by taking cell averages
2: (4.12)
(iii) The initial data for the next iteration are defined by the implicit formula treating the
source term
(4.13)From the definition of M i (u) it follows by (4.5) using (4.13) that
Thus the R i
n+1 ) can actually be obtained in the explicit form
Assuming R i
we conclude from the integral form of (4.11)
on the rectangle I j \Theta [t
Thus our step functions R_i representing the grid functions possess the following
property, for i = 1, 2: (4.16)
The discrete estimates in Lemma 4.1 and the relations (4.16) yield the following properties
of R i .
Lemma 4.2. For all t, -
(a) (R 1
(b)
(c)
(d)
\Deltat).
Proof. The relations (4.16) imply directly that (c)-(d) hold. For the proof of (e), one has
to use the L^1 Lipschitz continuity in time of R_i; we refer to [25] for a similar analysis and
omit the details.
The solution to the linear transport equations is unique; we do not need an entropy
inequality in order to impose uniqueness. Still, we may use a discrete version of the entropy
inequalities in order to study the convergence rate of the approximate step functions R_i to the
exact Riemann invariants R_i as \Delta x tends to 0.
Since R_i is a weak solution (also an entropy solution, due to its uniqueness) of the
system (4.11) in (t_n, t_{n+1}), the Kruzkov-type entropy formulation is valid. This means that
for all values (q_1, q_2)
and all nonnegative C^1-functions phi with compact support in
On the other hand, from the step (iii) above we observe that for any (q 1
(R
\Theta M i
sgn
Inserting (4.18) into (4.17), summing over n from using R i
the following entropy inequality for the discrete solution
\Theta M i
Inspired by the papers [30] and [25] of Tveito and others, we use the following Kruzkov-type
inequality for our problem, since the exact solution R_i(x, t) is the unique weak solution
of (1.8),
\Theta M i (u(x;
for any constants (q
and all
As in Kruzkov [11], any weak
solution to (1.8) satisfies (4.20).
Equipped with the above variational inequalities, we return to estimate the error bound of Theorem 2.3.
Proof of Theorem 2.3. Our proof in this section is inspired by the one given by
Schroll et al. [25]. They show an L^1-error bound for a model arising in chromatography.
Their argument is in the spirit of the work by Kruzkov [11], Kuznetsov [15] and Lucier [18].
To obtain the desired error bound we need to combine the inequality (4.19) with (4.20).
To this end we define a mollifier function omega built from a kernel eta, where eta is any
nonnegative smooth function with support in [-1, 1], even, i.e. eta(-s) = eta(s), and of unit
mass. This mollifier therefore satisfies the corresponding normalization identities in each variable.
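A minimal concrete choice consistent with these requirements, assuming the same parameter delta is used in both variables, is

    \omega_\delta(x,t) = \eta_\delta(x)\,\eta_\delta(t), \qquad
    \eta_\delta(s) = \frac{1}{\delta}\,\eta\!\left(\frac{s}{\delta}\right), \qquad
    \int_{\mathbb{R}} \eta_\delta(s)\,ds = 1,

so that omega_delta concentrates on a space-time square of side 2 delta as delta tends to 0.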
We proceed by selecting the constants (q_1, q_2)
and the test functions phi, psi. First, choosing the constants q_i appropriately
and integrating in x and t, we obtain, using R_i,
Z TZ Z TZ 2
dyd-dxdt
Z
\Theta
\Deltat
Z TZ N
sgn
In the inequality (4.20) we set the constants q_i analogously and sum over n from 1 to N:
dxdtdyd-
sgn
\Delta\Theta
Adding this inequality to (4.21) and suitably grouping the terms we obtain an inequality
that we write in the short hand form
The individual expressions are
Z TZ N
\Theta
dyd-dxdt;
Z TZ N
Z
For these expressions we have the following bounds, whose proof we postpone for the moment.
Lemma 4.3. For any T > 0, there exists a positive constant C, independent of the step sizes,
the relaxation time eps, and delta, such that the following bounds hold.
Equipped with the above estimates we continue the proof of Theorem 2.3. Using
Lemma 4.3 we find, for a suitably large constant C > 0,
Using an obvious bound for the initial error like (2.11), we have that the initial error is
bounded by C \Delta x. (4.24)
Inserting this into (4.23) and picking delta in terms of \Delta t to minimize the right-hand side
of the relation (4.23), and then using the CFL condition \Delta t <= \Delta x / sqrt(a), we obtain
the bound (4.25).
Returning to the original variables (u, v) and using the relation to the Riemann invariants,
we obtain the asserted error bound.
This completes the proof of Theorem 2.3.
Finally, we turn to the proof of Lemma 4.3 in order to conclude this paper.
Proof of Lemma 4.3. The proof of (i)-(iii) and (v) can be carried out by an analysis
analogous to the proof for the chromatography model given by Schroll et al. [25]. We next
estimate the term E_3(delta). Using notations analogous to those above, E_3(delta)
can be rewritten as
Z TZ N
Z TZ N
The first term in E 3 (ffi) is nonpositive due to the relations (3.3) and (3.6)X
uj \GammaX
Therefore
Z TN
Due to Lemma 3.2, the sum decays exponentially on the scale t/eps; the
integral of this sum over [0, T] is therefore clearly bounded by C eps. Noting that
Z t\Gamma-
we find the estimates
\Deltat
\Deltat
which completes the proof. 2
Remark 2. The above proof uses the exponential decay rate in t/eps without resorting to the
restriction on the initial error. To see this, note that the required bound follows easily
from the preceding estimates.
Here, we do not use any specific choice of omega(eps). We mention that the authors in [25] obtained
the same error bound in the discretization step under an additional assumption on the initial data.
--R
Convergence of relaxation schemes for conservation laws
Discrete kinetic schemes for multidimensional conservation laws
The Mathematical Theory of Nonuniform Gases
The Boltzmann equation and its applications
Characteristic Galerkin methods for scalar conservation laws in one dimension
Hyperbolic conservation laws with stiff relaxation terms and entropy
Minimal surfaces and functions of bounded variation
A kinetic construction of global solutions of first order quasi-linear equations
The relaxation schemes for systems of conservation laws in arbitrary space dimensions
Stiff systems of hyperbolic conservation laws.
Contractive relaxation systems and scalar mul- tidimentional conservation laws
Accuracy of some approximate methods for computing the weak solutions of a first order quasilinear equation
Hyperbolic conservation laws with relaxation
A kinetic formulation of multidimensional scalar conservation laws and related equations
Error bounds for the methods of Glimm, Godunov and LeVeque
A kinetic equation with kinetic entropy functions for scalar conservation laws
Convergence to equilibrium for the relaxation approximations of conservation laws
A discrete kinetic approximation of entropy solutions to multidimensional scalar conservation laws
The convergence rate of approximate solutions for nonlinear scalar conservation laws
The optimal convergence rate of monotone finite difference methods for hyperbolic conservation laws
New York
The sharpness of Kuznetsov's O( p
convergence rate for scalar conservation laws with piecewise smooth solutions
Pointwise error estimates for relaxation approximations to conservation laws
On the rate of convergence to equilibrium for a system of conservation laws with a relaxation term
Convergence of relaxing schemes for conservation laws
Numerical analysis of relaxation schemes for scalar conservation laws
--TR
--CTR
H. Joachim Schroll, High Resolution Relaxed Upwind Schemes in Gas Dynamics, Journal of Scientific Computing, v.17 n.1-4, p.599-607, December 2002
Tao Tang , Jinghua Wang, Convergence of MUSCL Relaxing Schemes to the Relaxed Schemes for Conservation Laws with Stiff Source Terms, Journal of Scientific Computing, v.15 n.2, p.173-195, June 2000
A. Chalabi , Y. Qiu, Relaxation Schemes for Hyperbolic Conservation Laws with Stiff Source Terms: Application to Reacting Euler Equations, Journal of Scientific Computing, v.15 n.4, p.395-416, December 2000
Mapundi Kondwani Banda, Variants of relaxed schemes and two-dimensional gas dynamics, Journal of Computational and Applied Mathematics, v.175 n.1, p.41-62, 1 March 2005 | convergence rate;relaxation model;relaxation scheme |
349319 | Improved spill code generation for software pipelined loops. | Software pipelining is a loop scheduling technique that extracts parallelism out of loops by overlapping the execution of several consecutive iterations. Due to the overlapping of iterations, schedules impose high register requirements during their execution. A schedule is valid if it requires at most the number of registers available in the target architecture. If not, its register requirements have to be reduced either by decreasing the iteration overlapping or by spilling registers to memory. In this paper we describe a set of heuristics to increase the quality of register-constrained modulo schedules. The heuristics decide between the two previous alternatives and define criteria for effectively selecting spilling candidates. The heuristics proposed for reducing the register pressure can be applied to any software pipelining technique. The proposals are evaluated using a register-conscious software pipeliner on a workbench composed of a large set of loops from the Perfect Club benchmark and a set of processor configurations. Proposals in this paper are compared against a previous proposal already described in the literature. For one of these processor configurations and the set of loops that do not fit in the available registers (32), a speed-up of 1.68 and a reduction of the memory traffic by a factor of 0.57 are achieved with an affordable increase in compilation time. For all the loops, this represents a speed-up of 1.38 and a reduction of the memory traffic by a factor of 0.7. | Introduction
Software pipelining [9] is an instruction scheduling technique that exploits instruction level
parallelism (ILP ) out of a loop by overlapping operations from various successive loop
iterations. Different approaches have been proposed in the literature [2] for the generation
of software pipelined schedules. Some of them mainly focus on achieving high throughput
[1, 11, 16, 22, 23, 25]. The main drawback of these aggressive scheduling techniques is their
high register requirements [19, 21]. Using more registers than available requires some actions
which reduce the register pressure but may also degrade the performance (either due to the
additional cycles in the schedule or due to additional memory traffic). For this reason, other
proposals have also focused their attention on the minimization of the register requirements
[12, 15, 20, 27].
Register allocation consists of finding the final assignment of registers to loop variables
(variants and invariants) and temporaries. It has been extensively studied in the framework
of acyclic schedules [3, 5, 6, 7] based on the original graph coloring proposal [8]. However,
software pipelining imposes some constraints that inhibit the use of these techniques for
register allocation. Although there have been proposals to handle them [13, 14, 24], none of
them deals with the addition of spill code (and its scheduling) that is needed to reduce the
register pressure in software pipelined loops.
Any software pipeliner fails if it generates a schedule that requires more registers than
those available in the target machine. In this case, some additional actions have to be
performed in order to alleviate the high register demand [24]. One of the options is to
reschedule the loop with a reduced execution rate (i.e. with less iteration overlapping);
this reduces the number of overlapped operations and variables. Unfortunately, the register
reduction may be at the expense of a reduction in performance. Another option is to spill
some variables to memory, so that they do not occupy registers for a certain number of clock
cycles. This requires the insertion of store and load instructions that free the use of these
registers. The evaluation performed in [18] shows that reducing the execution rate tends to
generate worse schedules than spilling variables; however, the authors show that in a few
cases the opposite situation may happen.
Several aspects contribute to the quality of the spill code generated by the compiler. The
first one is deciding if the spill code applies to all the uses of a variable or just to a subset.
The second aspect relates to the selection of spilling candidates, which implies deciding the
number of variables (or uses) selected for spilling and the priority function used to select
among them. Both decisions need accurate estimates of the benefits that the selection of a
spilling candidate will produce in terms of register pressure reduction.
In order to motivate this work, Table 1 shows, for two different spill algorithms, the average
execution rate (cycles between the initiation of two consecutive iterations) and average
memory traffic (number of memory accesses per iteration) for all the loops in our workbench
(Section 2.4) whose schedule does not fit in 32 registers and for one of the processor
configurations used along this paper. The table also includes the ideal case (i.e. when infinite
registers are available and no spill is needed). Notice that the gap between the two
implementations (one commercial, as described in [26] and the other experimental [18]) and
the ideal case is large. These results motivated the proposal of new heuristics to improve
the whole register pressure reduction process; the last column in the same table shows the
results after applying the heuristics proposed in this paper, which represent more than 40%
reduction in the execution rate and memory traffic with respect to previous proposals.
In this paper we use a register-conscious pipeliner, named HRMS [20], to schedule the
loops. Once the loops are scheduled, register allocation is performed using the wands-
only strategy with end-fit and adjacency ordering [24]. Then, the register requirements are
decreased if required. The paper contributes a set of heuristics to: 1) decide between the
two possibilities mentioned above (adding spill code or directly decreasing the execution
rate); and 2) make a better selection of spilling candidates (both in terms of assigning priorities
to them and selecting the appropriate number). The paper also contributes an analysis
of the results when spill of variables or uses is performed. The different proposals are
compared against the ideal case (which is an upper bound for performance) and against the
  Metric                 Ideal    [18]     [26]     This paper
  avg. execution rate    12.01    28.32    29.43    20.66
  avg. memory traffic    15.38    50.88    52.13    35.71
Table 1: Motivating example for improving the spill process.
proposals presented in [18]. The workbench is composed of all the loops from the Perfect
Club [4] that are suitable for software pipelining.
The paper is organized as follows. Section 2 makes a brief overview of modulo scheduling,
register allocation and spill code for modulo scheduling. Section 3 focuses on the different
steps and proposals for spilling variables to memory. Then, Section 4 presents different alternatives
to select the spilling candidates in a more effective way, and analyzes the trade-off
between reducing the execution rate and adding spill code. In Section 5 the different alternatives
and heuristics are evaluated in terms of dynamic performance, taking into account the
relative importance of each loop in the total execution time of the benchmark set. Finally,
Section 6 states our conclusions.
2 Basic concepts
2.1 Modulo scheduling
In a software pipelined loop, the schedule of an iteration is divided into stages so that the
execution of consecutive iterations, which are in distinct stages, is overlapped. The number
of stages in one iteration is termed the Stage Count (SC).
initiation of successive iterations in a software pipelined loop determines its execution rate
and is termed the Initiation Interval (II).
The execution of a loop can be divided into three phases: a ramp up phase that fills the
software pipeline, a steady state phase where the maximum overlap of iterations is achieved,
and a ramp down phase that drains the software pipeline. During the steady state phase,
the same pattern of operations is executed in each stage. This is achieved by iterating on a
piece of code, named the kernel, that corresponds to one stage of the steady state phase.
The II is bounded by recurrence circuits in the dependence graph of the loop (RecMII)
or by resource constraints of the target architecture (ResMII). The lower bound on the II is
termed the Minimum Initiation Interval (MII = max(RecMII, ResMII)). The reader is
referred to [11, 25] for an extensive dissertation on how to calculate RecMII and ResMII.
In order to perform software pipelining, the Hypernode Reduction Modulo Scheduling
(HRMS) heuristic [20] is used. HRMS is a software pipeliner that achieves the MII for a
large percentage of the workbench considered in this paper (97.4 % of loops). In addition,
it generates schedules with very low register requirements. A register-sensitive software
pipelining technique has been used in order to not overestimate the necessity of spill code.
The scheduling is performed in two steps: a first step that computes the priority of operations
to be scheduled and a second step that performs the actual placement of operations in the
modulo reservation table.
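As a rough illustration of the placement step (this is a simplified sketch, not the actual HRMS algorithm; earliest_start and free_unit are hypothetical helpers standing in for the dependence and resource checks):

    def place_operations(ops_by_priority, II, num_units):
        mrt = [[None] * num_units for _ in range(II)]   # modulo reservation table
        placement = {}                                  # op -> absolute start cycle
        for op in ops_by_priority:
            start = earliest_start(op, placement)       # assumed: from scheduled predecessors
            placed = False
            for cycle in range(start, start + II):      # at most II distinct MRT rows to try
                row = cycle % II
                unit = free_unit(mrt[row], op)          # assumed: resource availability check
                if unit is not None:
                    mrt[row][unit] = op
                    placement[op] = cycle
                    placed = True
                    break
            if not placed:
                return None                             # no slot: caller increases II and retries
        return placement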
2.2 Register allocation
Once a loop is scheduled, the allocation of values to registers is performed. Values used in a
loop correspond either to loop-variant or loop-invariant variables. Invariants are repeatedly
used but never defined during the execution of the loop. Each invariant has only one value
for all iterations of the loop, and therefore requires a single register during the execution of
the loop (regardless of the scheduling and the machine configuration).
For loop variants, a new value is generated in each iteration of the loop and, therefore,
there is a different lifetime (LT) [15]. Because of the nature of software pipelining, the LT
of values defined in an iteration can overlap with the LT of values defined in subsequent
iterations. The LT of loop variants can be measured in different ways depending on the
execution model of the machine. We assume that a variable is alive from the beginning of
the producer operation until the start of the last consumer operation.
By overlapping the LTs of different iterations, a pattern of length II cycles is obtained that
repeats indefinitely. This pattern indicates the number of values that are live at any
given cycle. The maximum number of simultaneously live values (MaxLive) is an accurate
approximation of the number of registers required for the schedule [24].
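A minimal sketch of this computation, assuming each lifetime is given as a half-open [start, end) interval of cycles relative to its own iteration:

    # Fold every lifetime interval onto the II-cycle pattern and take the maximum
    # number of simultaneously live values (MaxLive).
    def max_live(lifetimes, II):
        live = [0] * II                      # live[c] = values live at kernel cycle c
        for start, end in lifetimes:         # [start, end) in cycles; end may exceed II
            for c in range(start, end):
                live[c % II] += 1
        return max(live) if live else 0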
Variants may have LT values greater than II ; this poses an additional difficulty since
new values are generated before previous ones are used. One approach to fix this problem
is to provide some form of register renaming so that successive definitions of the same value
use distinct registers. Renaming can be performed at compile time using modulo variable
expansion (MVE) [17] (i.e. unroll the kernel and rename the multiple definitions of each
variable that exist in the unrolled kernel). Rotating register files provide a hardware solution
to solve the same problem without replicating code [10] (i.e. the renaming of the different
instantiations of a loop-variant is done at execution time).
In our study and implementation, we assume the existence of a rotating register file and
use the wands-only strategy using end-fit with adjacency ordering [24]. This strategy usually
achieves a register allocation that uses MaxLive registers and almost never requires more
than MaxLive registers. However, the heuristics proposed in this paper are applicable
regardless of the hardware model and the register allocation strategy used.
2.3 Decreasing the register requirements
The register allocation techniques for software pipelined loops [24] assume an infinite number
of registers. From now on we name Used Registers (UR) the number of registers required
to execute a given schedule and Available Registers (AR) the number of registers available
in the target architecture.
If UR is greater than AR, then the obtained schedule is not valid for the target processor.
In this case, the register pressure must be decreased so that the loop can be executed (e.g.
we must obtain a schedule so that UR - AR). Different alternatives to decrease the register
requirements have been outlined in [24]: 1) to reschedule the loop with a larger II; 2) to spill
some variables to memory; or 3) to split the loop into several smaller loops. To the best
of our knowledge, loop splitting has not yet been evaluated for the purpose of decreasing
the register pressure. The other two alternatives have been evaluated and compared in [18]
and are used by production compilers (e.g. the Cydra5 compiler increases the II [11], and
the MIPS compiler, as described in [26], adds spill code). Next we summarize the main
conclusions from the comparison:
- Rescheduling the loop with a bigger II usually leads to schedules with less iteration
overlapping, and therefore with lower register requirements. Unfortunately, the UR
decrease comes directly at the expense of a reduction in performance (less parallelism is
exploited). In addition, for some loops it is not possible to find a valid schedule with
UR <= AR by simply increasing the II.
- Spilling variables to memory makes their associated registers available for other values.
This spill requires the use of several load and store operations and may saturate the
memory units, turning the loop into a memory-bounded loop; in this case, the addition
of spill code leads to an increase of the II and to a degradation of the final performance.
Increasing the II produces, in general, worse schedules than adding spill code. However, for
some loops the first option is better. This suggests that a hybrid method that in some cases
adds spill code and in others increases the II can produce better results. For instance, [27]
spills as many uses as possible without increasing the II (i.e. it tries to saturate the memory
buses). If the schedule obtained does not fit in AR, then the II is increased. Although
this heuristic always ends up with a valid schedule, it does not care about minimizing the
memory traffic (in fact it may increase memory traffic for loops that do not require spill).
In this paper we present a heuristic that, in some cases, bypasses the step that adds spill code
and simply increases the II. The paper also proposes several new heuristics for
adding spill code. These heuristics allow for a better tuning of the final schedule so that the
performance degradation is reduced as well as the memory traffic overhead.
2.4 Experimental framework
The different proposals of the paper are evaluated on a set of architectures PiMjLk defined
as follows: i is the number of functional units used to perform each kind of computations
(adders, multipliers and div/sqr units); j is the number of load/store units; and k is the
latency of the adders and the multipliers. In all configurations, the latency of load and store
accesses is two and one cycles, respectively. Divisions take 17 cycles and square roots take
cycles. All functional units are fully pipelined, except for the div/sqr functional units. In
particular, four different configurations are used: P2M2L4, P2M2L6, P4M2L4 and P4M4L4,
each evaluated with 32 and 64 registers.
In order to evaluate the heuristics proposed, a total of 1258 loops that represent about
80% of the total execution time of the Perfect Club [4] (measured on a HP-PA 7100) have
been scheduled. First of all we evaluate the effectiveness of our proposals (Section 4); for
this evaluation we use only those loops for which UR > AR. The number of loops that
fulfill this condition, for the different processor configurations aforementioned, is shown in
Table
2. Notice that when 64 registers are available, the number of loops that do not fit in
the AR is very small (and therefore subject to the variance of the heuristics themselves). As
a consequence, the main conclusions of our experimental evaluation will be drawn for the
configurations with 32 registers; however, results for 64 registers will be used to confirm the
trend. Then we evaluate the real impact on performance taking into account all the loops
in the workbench (Section 5).
  AR    P2M2L4    P2M2L6    P4M2L4    P4M4L4
Table 2: Number of loops that require more registers than available for a set of processor
configurations PiMjLk.
The metrics used to evaluate the performance are the following:
- \SigmaII, which measures the sum of the individual II over all the loops considered.
- \Sigmatrf, which measures the sum over all loops of the number of memory operations used in the schedule.
- SchedTime, which measures the time needed to schedule the loops.
3 Adding spill code
The initial algorithm that we use for generating register constrained modulo schedules is
the iterative algorithm shown in Figure 1. After scheduling and register allocation, if a
loop requires more registers than those available, a set of spilling candidates is obtained and
ordered. The algorithm then decides how many candidates are finally selected for spilling
and introduces the necessary memory accesses in the original dependence graph. The loop
is rescheduled again because modulo schedules tend to be very compact -the goal is to
saturate the most used resource- and it is very difficult to find empty slots to allocate the
new memory operations in the modulo reservation table. The process is repeated until a
schedule requiring no more registers than available is found. To the best of our knowledge,
all previous spilling approaches are based on a similar iterative algorithm [18, 26, 27].
In the following subsections we describe in more detail each one of these aspects and
present the solutions proposed by previous researchers.
3.1 Variables and uses
The lifetime of a variable spans from its definition to its last use. The lifetime of a variable
can be divided into several sections (uses) whose lifetime spans from the previous use to
Figure 1: Flow diagram for the original spill algorithm.
the current one. For example, Figure 2.a shows a producer operation followed by four
independent consumer operations. In this case, the lifetime of the variable ranges from the
beginning of Prod to the beginning of Cons4; four different uses can be defined (U1 ... U4),
as shown in the right part of the same figure.
The disadvantage of spill of variables is that, if one variable has several successors, the
number of associated spill memory operations is suddenly increased; this may produce an
increase of the II and thus reduce performance. In addition, some of the loads added might
not actually contribute to a decrease of the register requirements. Spill of uses allows a
more fine-grained control of the spill process. Both alternatives have been used in previous
proposals: spill of variables is used in [18, 26] and spill of uses in [27]. In this paper we
evaluate the performance of the two alternatives and their combination with the heuristics
proposed in Section 4.
3.2 Sorting spilling candidates
Some criteria are needed to decide the most suitable spilling candidates (i.e., those that
decrease the register requirements the most at the smallest cost). This is achieved by assigning a
priority to each spilling candidate; this priority is usually computed according to the LT of
the candidate [18] or to some ratio between its LT and the memory traffic introduced when
spilled [18, 26, 27]. As expected, the second heuristic always produces better results. In this
paper we propose a new criterion that takes into account the criticality of the cycles spanned
by the lifetime of each spilling candidate.
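For concreteness, the lifetime-over-traffic ordering used by previous proposals can be sketched as follows (spill_traffic is assumed to count the store and load operations that spilling the candidate would add; a higher value means the candidate is spilled first):

    def sort_candidates(cands):
        def priority(c):
            # c.lifetime: cycles the candidate occupies a register
            # c.spill_traffic: stores + loads its spill would add
            return c.lifetime / c.spill_traffic
        return sorted(cands, key=priority, reverse=True)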
3.3 Quantity selection
After giving priorities to each spilling candidate, the algorithm decides how many candidates
are actually spilled to memory. The objective is to decrease the register requirements so that
UR <= AR with the minimum number of spill operations. This requires an estimation of the
benefits that each candidate will produce in the final schedule. However, the new memory
operations may saturate the memory units and lead to an increase of the II; this increase in
the II reduces by itself the register pressure and may lead to a situation where an unnecessary
number of candidates have been selected.
This selection process can be done in different ways. For instance, [18] proposes to
spill one candidate at a time and reschedule again. This heuristic avoids overspilling at
the expense of an unacceptable scheduling time. To avoid it, [26] performs several tries by
spilling a power-of-two number of candidates; the process finishes when a new schedule that
fits in AR is found. To reduce the number of reschedulings in a more effective way, [18]
selects as much candidates as necessary to directly reduce the UR to AR; in order to avoid
overspilling, each time a candidate is selected, its lifetime is subtracted from the current
UR to compute an estimated number of registers needed after spilling. Another alternative,
which is used in [27] to generate schedules with minimum register requirements, consists of
selecting as many candidates as necessary to saturate the memory units with the current II.
This paper proposes a new heuristic that tries to better foresee the overestimation that
is produced by some of the previous heuristics.
3.4 Adding memory accesses
Once the set of candidates has been selected, the dependence graph is modified, in order
to introduce the necessary load/store instructions, and rescheduled. In order to guarantee
that the spill effectively decreases UR, the spilled operations have to be scheduled as close
as possible to their producers/consumers. This is accomplished by scheduling each spill
operation and its associated producer/consumer as a single complex operation [18].
For the spill of variables, a store operation has to be inserted after the producer and
a load operation inserted before each consumer. Figure 2.b shows the modification of the
dependence graph when spilling the variable in Figure 2.a. For the spill of uses, a store
Figure 2: a) Original graph. b) Graph after spilling a variable. c) Graph after spilling a single use
of the same variable.
operation has to be added after the producer and a single load operation added before the
consumer that ends the corresponding use. Figure 2.c shows the modification of the graph
when use U3 is selected (the one that has the largest lifetime and therefore releases more
registers).
4 New heuristics for spill code
In this section we describe the issues and gauges that are used to control the generation
of valid schedules and spill code. The first control decides the priority of candidates to be
selected for spill. The idea behind the proposal is to give priority to those that contribute
to a reduction of the register pressure in the most effective way. The second control decides
how many candidates should be spilled before rescheduling the loop. Finally, the third control
decides when it is worthwhile to directly increase the II with no additional spill. In this
analysis, we also consider spilling candidates to be either variables or uses.
4.1 Spill of variables and spill of uses
The algorithms described in Section 3 decide the candidates to spill based on their lifetime
or some ratio between their lifetime and the memory traffic that their spill would generate.
                 32 registers          64 registers
                 \SigmaII   \Sigmatrf     \SigmaII   \Sigmatrf
  Use            1932    4454       250     875
  UseCC
  UseQF          1626    3791       229     787
  UseTF          1408    3139       132     593
  UseCCQFTF      1232    2895       126     606
Table 3: Improving performance metrics by applying different heuristics (configuration P4M2L4;
values relative to the ideal case).
Some of them make a difference when considering either spill of variables or uses of variables.
For instance, Table 3 shows the \SigmaII and \Sigmatrf for two register file sizes (32 and 64 registers)
and processor configuration P4M2L4, when either spill of variables (row labeled Var) or spill
of uses (row labeled Use) is applied (using the LT/trf criterion to order spilling candidates).
The table reports figures relative to the ideal case, i.e. the \SigmaII and \Sigmatrf for the ideal case
have been subtracted from the values for the specific configuration. Notice that in general,
doing spill of uses achieves schedules with lower II and memory traffic.
From these initial results, the reader may conclude that doing spill of uses is more effective
than doing spill of variables. However, we will see along the paper that our proposals improve
the metrics and tend to reduce the gap between these two alternatives. In addition, their
behavior also depends on the architecture being evaluated, as will be shown in Section 5.
4.2 Critical cycle
First of all we propose a new criterion to select candidates for spill. Sometimes the selection
based on LT/trf may select candidates that do not effectively reduce the register pressure.
The rationale is that their spill reduces the number of simultaneously live values,
but not in the scheduling cycle where this number is maximum (the cycle that determines the number
of registers needed).
The Critical Cycle (CC) is defined as the scheduling cycle for which the number of used
registers UR is maximum. The new selection criterion gives more priority to those candidates
that cross the CC. This criterion for candidate selection may improve the efficiency of the
spill process, as shown in Table 3 for the processor configuration selected. Rows labeled
VarCC and UseCC show the two performance metrics for our workload.
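A sketch of the critical-cycle criterion, reusing the MaxLive-style pattern above; a candidate "crosses" the critical cycle if it is live at that cycle of the II-cycle pattern (the candidate fields start, end, lifetime and spill_traffic are assumptions of this sketch):

    def critical_cycle(lifetimes, II):
        live = [0] * II
        for start, end in lifetimes:
            for c in range(start, end):
                live[c % II] += 1
        return max(range(II), key=lambda c: live[c])   # cycle with maximum pressure

    def crosses(cc, cand, II):
        return any(cyc % II == cc for cyc in range(cand.start, cand.end))

    def sort_candidates_cc(cands, lifetimes, II):
        cc = critical_cycle(lifetimes, II)
        return sorted(cands,
                      key=lambda c: (crosses(cc, c, II), c.lifetime / c.spill_traffic),
                      reverse=True)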
4.3 Number of variables to spill
The computation of the number of candidates may not be accurate because the new spill
code might increase the II of the schedule (as a result of the saturation of the memory unit);
this increase of the II could reduce the overall register pressure and therefore it would not
be necessary to use as much spill as initially expected. The proposal in this section tries to
foresee this overestimation.
The algorithm assumes that the register file has more available registers (AR') than it
actually has. It adds to the actual number of available registers a number proportional to
the gap between UR and AR, as follows: AR' = AR + (UR - AR) x QF, QF being the
Quantity Factor. Notice that QF = 1 corresponds to the spill of one candidate at a time
and QF = 0 to the spill of all the necessary candidates to reduce the UR to AR.
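A sketch of quantity selection with the Quantity Factor; the per-candidate saving is only an estimate, since rescheduling may change the register pressure, and at least one candidate is always taken:

    def select_candidates_qf(sorted_cands, UR, AR, QF):
        target = AR + (UR - AR) * QF          # inflated target AR' to avoid overspilling
        selected, estimated = [], UR
        for cand in sorted_cands:
            if selected and estimated <= target:
                break
            selected.append(cand)
            estimated -= cand.estimated_saving   # assumed per-candidate register saving
        return selected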
QF is a parameter whose optimal value depends on the architecture and the characteristics
of the loop itself. In this paper we conduct an experimental evaluation of this parameter
in order to determine a range of useful values and to analyze its effects on performance and
on the scheduling time SchedTime. Figure 3 plots the behavior for \SigmaII , \Sigmatrf and SchedTime
for QF values in the range between 1 and 0. The lowest values of QF lead to the worst results
in terms of II and trf, but with the lowest SchedTime. Large values of QF lead to better
performance at the expense of an increase in compilation time. In particular, for values of
QF larger than 0.6, the increase in SchedTime is not compensated by the increase in performance.
In general, medium values of QF generate good schedules with a negligible increase
in compilation time.
Table 3 (rows labeled VarQF and UseQF) shows the results for QF = 0.5. Notice that
Figure 3: Behavior of a) \SigmaII, b) \Sigmatrf and c) total SchedTime for values of QF between 0 and 1
(32 registers (.1) and 64 registers (.2)).
this value does not increase the compilation time too much and considerably reduces both
the II and trf.
4.4 Traffic control
The previous techniques try to improve the performance of the spill process by increasing
the effectiveness of the selection of candidates. There are situations in which it is better to
increase II instead of applying spill. For example, suppose that UR exceeds AR and that adding
spill code would lead to a saturation of the memory unit; in this case, both the II and the memory
traffic would be increased (in order to fit the new memory operations). However, if we only increase the
II (without adding spill), the memory traffic will not increase and we might also reduce UR.
Figure 4 shows the algorithm proposed with a control point that decides when it is better
to increase II or to insert spill code.
In order to foresee the previous situation, the algorithm performs an estimation of the
memory traffic (number of loads and stores) that would be introduced if spill is done
(NewTrf). If the maximum traffic MaxTrf that can be supported with the current value
of II is not enough to absorb NewTrf, then the algorithm directly increases the II (without
inserting spill) and the process is repeated. In particular, the new II value might
produce less spill code or it may not be required at all.
Figure 4: Flow diagram for the proposed algorithm that combines spill code and traffic control.
The maximum traffic the architecture can support is multiplied by TF (Traffic Factor)
to control the saturation of the memory unit (the condition that accepts the addition of
spill code is NewTrf <= MaxTrf x TF). This is done because there is a trade-off between
applying the spilling mechanism and increasing the II. When the TF parameter is included
we obtain a better trade-off between both mechanisms. Moreover, if we take TF = 0
we are always increasing the II, and if we take TF -> infinity then we are always inserting spill.
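The control point of Figure 4 can be sketched as follows, where MaxTrf is taken to be the number of memory operations the load/store units can issue in II cycles and NewTrf adds the estimated spill traffic to the loop's own memory operations (both definitions are assumptions of this sketch):

    def absorb_or_increase_II(selected, loop_mem_ops, num_mem_units, II, TF):
        new_trf = loop_mem_ops + sum(c.spill_traffic for c in selected)   # NewTrf
        max_trf = num_mem_units * II                                      # MaxTrf
        if new_trf <= max_trf * TF:
            return "add_spill"       # the extra traffic can be absorbed at the current II
        return "increase_II"         # otherwise bump II and retry, possibly without spill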
The TF can take any positive value. But after some experimentation, we have observed
that the best results are obtained when TF ranges between 0.7 and 1.4. Figure 5 plots the
behavior of \SigmaII , \Sigmatrf and SchedTime for values of TF within this range. In general, it can
be observed that the best value of II is obtained with a TF value close to 0.95, but if we
want to reduce the traffic we have to use smaller values of TF. Notice that the time to
obtain the schedules varies only slightly.
Table 3 (rows labeled VarTF and UseTF) shows the performance in terms of \SigmaII and
\Sigmatrf when the TF is set to 0.95. Notice that for registers, spill of variables performs
better than spill of uses.
Both parameters, QF and TF, tend to reduce the spill code and may interact in a positive
way. Figure 6 plots the combined effect of both parameters. Notice that a tuning of these
parameters might lead to better values of \SigmaII . Another observation is that if the TF is not
used, then higher values of QF are needed (which results in higher scheduling times). In
particular, for 32 registers the best results are obtained when QF is set to zero while for 64
registers the best results are obtained when QF is set between 0.3 and 0.5.
In order to summarize all the previous effects, rows labeled VarCCQFTF and UseCCQFTF
in Table 3 show the performance when CC, QF and TF are used (QF and TF are
set to the values that produce the best performance results). When spill of variables is used,
the \SigmaII is reduced by 47% and 52% and the \Sigmatrf by 42% and 35% (with
respect to the initial Var for 32 and 64 registers, respectively). Similarly, when spill of uses
is applied, \SigmaII is reduced by 36% and 50% and \Sigmatrf by 35% and 30% (with
respect to the initial Use for 32 and 64 registers, respectively).
Figure 5: Behavior of a) \SigmaII, b) \Sigmatrf and c) SchedTime when TF ranges between 0.7 and 1.4
(32 registers (.1) and 64 registers (.2)). The first point (*) corresponds to TF -> infinity.
Figure 6: Behavior of a) \SigmaII, b) \Sigmatrf and c) SchedTime for several values of QF when TF
ranges between 0.7 and 1.4 (32 registers (.1) and 64 registers (.2)). The first point (*) corresponds to TF -> infinity.
This increase in performance is at the expense of an affordable increase in scheduling time:
for 32 registers, the scheduler requires 1.8 times the original time; for 64 registers, the increase
is negligible.
5 Performance Evaluation
The effectiveness of the proposed mechanisms has been evaluated using static information:
II and trf . This evaluation has demonstrated that the new heuristics are very effective in
obtaining better schedules. However, a static evaluation does not show how useful they are
in terms of execution time and dynamic memory traffic.
The execution time of a loop is estimated as II x N x E, N being the total number
of iterations and E the number of times the loop is executed. The dynamic memory traffic
is estimated as M x N x E, M being the number of memory operations in the
kernel code of the software pipelined loop.
The results, obtained for the P4M2L4 configuration, are shown in Figure 7. The bar
graphs at the upper part (a.) show the execution time degradation relative to the ideal case
(i.e. assuming an infinite number of registers). The closer the results are to 1, the better is
the performance. Notice that 1 is the upper bound for performance. The lower part of the
same figure (b.) shows the memory traffic Mem relative to the ideal case Ideal Mem. Again,
the closer the traffic to 1 the better the schedules. However in this case 1 is a lower bound
for memory traffic. The plots at the left side (.1) correspond to all loops in the benchmark
set while the plots at the right side (.2) refer only to the loops that require spill code.
For a configuration with 32 registers, Figure 7.a.1 shows a speed-up of 1.38 with respect
to the original proposal when spill of variables is used, and 1.27 when spill of uses is applied.
For the same configuration, Figure 7.b.1 shows a reduction of memory traffic by a factor close to
0.7 in both cases. For 64 registers, the speed-up reported is less important (close to 1.06)
and the memory traffic is reduced by a factor close to 0.9.
Figures 7.a.2 and 7.b.2 show the performance for the subset of loops that require spill.
For a configuration with 32 registers, performance for these loops increases by a factor of
1.70 when spill of variables is applied and 1.52 when spill of uses is applied. For 64 registers,
performance increases by a factor close to 1.26 in both cases. Notice that the memory traffic
is extraordinarily decreased.
Figure 7: Dynamic results for different spill heuristics (Use, Use+CC, Use+CC+QF, Use+CC+QF+TF).
Configuration P4M2L4, QF=0.3 and TF=0.95.
When 32 registers are available, the memory traffic is reduced
by factors of 0.57 and 0.62 with respect to the original proposals with spill of variables and
uses, respectively. When 64 registers are available, memory traffic is reduced by factors of
0.72 and 0.77.
For this architecture notice that spill of uses performs better than spill of variables for
any combination of heuristics and for both 32 and 64 registers. When the critical cycle is
considered, spill of uses improves more than spill of variables. However, when the quantity
factor and the traffic factor are used, performance tends to level between spill of variables
and spill of uses (spill of uses still performs slightly better).
The results that are obtained for the other processor configurations are shown in Figure
8. First of all, notice that for all configurations the heuristics proposed in this paper perform
better. However there are some aspects that need further discussion. For example, in some
cases (e.g. configuration P2M2L4), spill of uses performs worse than spill of variables; other
configurations (e.g., P4M4L4) perform better when spill of uses is applied and 64 registers are
available, while spill of variables performs better with 32 registers. Also, contrary to what
happens for all other configurations, P2M2L4 with 64 registers and with spill of uses has a
big performance degradation when the critical cycle is considered.
Finally, the parameters QF and TF have been set to different values for each configuration.
These parameters give flexibility to the algorithm, and allow it to adapt to the configuration.
Figure 8: Dynamic results for different spill heuristics. Configurations: (.1) P2M2L4, (.2) P2M2L6
and (.3) P4M4L4.
However, these parameters should be tuned for each configuration in order to
obtain good results. For instance, the best settings we found for one architecture with spill of
uses differ from the settings that give the best performance for the same architecture with spill
of variables. We have performed extensive evaluations to empirically obtain a useful range for
these values so that reasonably good results are obtained. In particular, QF should range from
0.0 to 0.3 and TF should range from 0.9 to 1.0.
In addition, these values can be tuned for specific applications or even for specific loops
if final performance is much more important than compilation time, as in embedded applications.
6 Conclusions
In this paper we have presented a set of heuristics that improve the efficiency of the process
that reduces the register pressure of software pipelined loops. The paper proposes some
new criteria to decide between two different alternatives that contribute to this reduction:
decrease the execution rate of the loop (increase its II) or temporarily store registers into
memory (through spill code). For the second alternative, the paper also contributes with new
criteria to select the spilling candidates (both how many and which ones). The proposals have
been evaluated using a register-conscious software pipeliner; however they are orthogonal to
it and could be applied to any algorithm.
The experimental evaluation has been done over a large collection of loops from the
Perfect Club benchmark. The impact of the different heuristics is evaluated in terms of
effectiveness and efficiency. In terms of effectiveness, the heuristics proposed reduce in most
of the cases the execution rate and memory traffic with respect to the original proposals.
In terms of efficiency, these reduction contributes to a real increase in performance. In
particular, the dynamic performance for the loops that do not fill in the available registers
increases by a factor that ranges between 1.25 and 1.68. The memory traffic is also reduced
by a factor that ranges between of 0.77 and 0.57. This reduction in the execution time
and memory traffic is achieved at the expenses of a reasonable increase in the compilation
time. In the worst case, the scheduler requires 1.8 times the original time. For the whole
workbench, the dynamic performance increases by a factor that ranges between 1.07 and
1.38 while the memory traffic is reduced by a factor that ranges between 0.9 and 0.7. The
scheduler manages to compile all these loops in less than one minute (for a configuration
with 64 registers) and less than 3.5 minutes (for a configuration with
Although the heuristics proposed contribute to better register-constrained schedules,
some additional work is needed to tune several parameters (like the traffic and quantity
factors) and to analyze their real effect for different architectural configurations. We have
also shown that, depending on the configuration, spilling candidates are either variables or
uses. This suggests that a more dynamic process, in which the scheduler decides on-the-fly
the specific values for some of these parameters and takes into account both variables and
uses, may lead to better schedules.
--R
A realistic resource-constrained software pipelining algorithm
Software pipelining.
Spill code minimization techniques for optimizing compilers.
The Perfect Club benchmarks: Effective performance evaluation of supercomputers.
Coloring heuristics for register allocation.
Improvements to graph coloring register allocation.
Register allocation via hierarchical graph coloring.
Register allocation and spilling via graph coloring.
An approach to scientific array processing: The architectural design of the AP120B/FPS-164 family
Overlapped loop support in the Cydra 5.
Compiling for the Cydra 5.
Stage scheduling: A technique to reduce the register requirements of a modulo schedule.
The meeting graph: a new model for loop cyclic register allocation.
Register allocation using cyclic interval graphs: A new approach to an old problem.
Circular scheduling: A new technique to perform software pipelining.
A Systolic Array Optimizing Compiler.
Heuristics for register-constrained software pipelining
Quantitative evaluation of register pressure on software pipelined loops.
Hypernode reduction modulo scheduling.
Register requirements of pipelined processors
Software pipelining in PA-RISC compilers
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing.
Register allocation for software pipelined loops.
Iterative modulo scheduling: An algorithm for software pipelining loops.
Software pipelining showdown: Optimal vs. heuristic methods in a production compiler.
Software pipelining with register allocation and spilling.
--TR
Overlapped loop support in the Cydra 5
Spill code minimization techniques for optimizing compliers
Coloring heuristics for register allocation
Register allocation via hierarchical graph coloring
Circular scheduling
Register allocation for software pipelined loops
Register requirements of pipelined processors
Lifetime-sensitive modulo scheduling
Compiling for the Cydra 5
Improvements to graph coloring register allocation
Iterative modulo scheduling
Software pipelining with register allocation and spilling
Software pipelining
Stage scheduling
Hypernode reduction modulo scheduling
Software pipelining showdown
Heuristics for register-constrained software pipelining
Quantitative Evaluation of Register Pressure on Software Pipelined Loops
A Systolic Array Optimizing Compiler
Conversion of control dependence to data dependence
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing
Register allocation & spilling via graph coloring
--CTR
Javier Zalamea , Josep Llosa , Eduard Ayguad , Mateo Valero, Software and hardware techniques to optimize register file utilization in VLIW architectures, International Journal of Parallel Programming, v.32 n.6, p.447-474, December 2004
Alex Alet , Josep M. Codina , Antonio Gonzlez , David Kaeli, Demystifying on-the-fly spill code, ACM SIGPLAN Notices, v.40 n.6, June 2005
Xiaotong Zhuang , Santosh Pande, Differential register allocation, ACM SIGPLAN Notices, v.40 n.6, June 2005
Xiaotong Zhuang , Santosh Pande, Allocating architected registers through differential encoding, ACM Transactions on Programming Languages and Systems (TOPLAS), v.29 n.2, p.9-es, April 2007
Javier Zalamea , Josep Llosa , Eduard Ayguad , Mateo Valero, Two-level hierarchical register file organization for VLIW processors, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.137-146, December 2000, Monterey, California, United States
Josep M. Codina , Josep Llosa , Antonio Gonzlez, A comparative study of modulo scheduling techniques, Proceedings of the 16th international conference on Supercomputing, June 22-26, 2002, New York, New York, USA
Javier Zalamea , Josep Llosa , Eduard Ayguad , Mateo Valero, Modulo scheduling with integrated register spilling for clustered VLIW architectures, Proceedings of the 34th annual ACM/IEEE international symposium on Microarchitecture, December 01-05, 2001, Austin, Texas
Bruno Dufour , Karel Driesen , Laurie Hendren , Clark Verbrugge, Dynamic metrics for java, ACM SIGPLAN Notices, v.38 n.11, November
Javier Zalamea , Josep Llosa , Eduard Ayguad , Mateo Valero, Register Constrained Modulo Scheduling, IEEE Transactions on Parallel and Distributed Systems, v.15 n.5, p.417-430, May 2004 | software pipelining;spill code;instruction-level parallelism;register allocation |
349336 | A generational on-the-fly garbage collector for Java. | An on-the-fly garbage collector does not stop the program threads to perform the collection. Instead, the collector executes in a separate thread (or process) in parallel to the program. On-the-fly collectors are useful for multi-threaded applications running on multiprocessor servers, where it is important to fully utilize all processors and provide even response time, especially for systems for which stopping the threads is a costly operation. In this work, we report on the incorporation of generations into an on-the-fly garbage collector. The incorporation is non-trivial since an on-the-fly collector avoids explicit synchronization with the program threads. To the best of our knowledge, such an incorporation has not been tried before. We have implemented the collector for a prototype Java Virtual Machine on AIX, and measured its performance on a 4-way multiprocessor. As for other generational collectors, an on-the-fly generational collector has the potential for reducing the overall running time and working set of an application by concentrating collection efforts on the young objects. However, in contrast to other generational collectors, on-the-fly collectors do not move the objects; thus, there is no segregation between the old and the young objects. Furthermore, on-the-fly collectors do not stop the threads, so there is no extra benefit for the short pauses obtained by generational collection. Nevertheless, comparing our on-the-fly collector with and without generations, it turns out that the generational collector performs better for most applications. The best reduction in overall running time for the benchmarks we measured was 25%. However, there were some benchmarks for which it had no effect and one for which the overall running time increased by 4%. | Introduction
Garbage collectors free the space held by unreachable (dead) objects so that this space can be reused in
future allocations. On multiprocessor platforms, it is not desirable to stop the program and perform the
collection in a single thread on one processor, as this leads both to long pause times and poor processor
utilization. Several ways to deal with this problem exist, the two most obvious ways are:
1. Concurrent collectors: Running the collector concurrently with the mutators. The collector runs in one
thread on one processor while the program threads keep running concurrently on the other processors.
The program threads may be stopped for a short time to initiate and/or nish the collection.
2. Parallel collectors: Stopping all program threads completely, and then running the collector in parallel
in several collector threads. This way, all processors can be utilized by the collector threads.
IBM Haifa Research Lab. E-mail: tamar@il.ibm.com.
† IBM Haifa Research Lab. E-mail: kolodner@il.ibm.com.
‡ Computer Science Dept., Technion - Israel Institute of Technology. This work was done while the author was at the IBM
Haifa Research Lab. E-mail: erez@cs.technion.ac.il.
In this paper we discuss a concurrent collector; in particular, an on-the-fly collector that does not stop the
program threads at all.
The study of on-the-fly garbage collectors was initiated by Steele and Dijkstra, et al. [27, 28, 8] and
continued in a series of papers [9, 14, 3, 4, 20, 21] culminating in the Doligez-Leroy-Gonthier (DLG) collector
[11, 10]. The advantage of an on-the-fly collector over a parallel collector and other types of concurrent
collectors [1, 13, 24] is that it avoids the operation of stopping all the program threads. Such an operation
can be costly. Usually, program threads cannot be stopped at any point; thus, there is a non-negligible wait
until the last (of many) threads reaches a safe point where it may stop. The drawback of on-the-fly collectors
is that they require a write barrier and some handshakes between the collector and mutator threads
during the collection. Also, they typically employ fine-grained synchronization, thus leading to error-prone
algorithms.
Generational garbage collection was introduced by Lieberman and Hewitt [23], and the first published implementation was by Ungar [29]. Generational garbage collectors rely on the assumption that many objects die young. The heap is partitioned into two parts: the young generation and the old generation. New objects are allocated in the young generation, which is collected frequently. Young objects that survive several collections are "promoted" to the older generation. If the generational assumption (i.e., that most objects die young) is indeed correct, we get several advantages:
1. Pauses for the collection of the young generation are short.
2. Collections are more efficient since they concentrate on the young part of the heap where we expect to find a high percentage of garbage.
3. The working set size is smaller both for the program, because it repeatedly reuses the young area, and the collector, because it traces over a smaller portion of the heap.
1.1 This work
In this paper we present a design for incorporating generations into an on-the-fly garbage collector. Two issues immediately arise. First, shortening the pause times is not relevant for an on-the-fly collector since it does not stop the program threads. Second, traditional generational collectors partition the heap into the generations in a physical sense. Namely, to promote an object from the young generation to the old generation, the object is moved from the young part of the heap to the old part of the heap. On-the-fly garbage collectors do not move objects; the cost of moving objects while running concurrently with the program threads is too high. Thus, we have to do without it.
Demers, et al. [6] presented a generational collector that does not move objects. Their motivation was to adapt generations for conservative garbage collection. Here, we build on their work to design a generational collector for the DLG on-the-fly garbage collector [11, 10].
We have implemented this generational collector for our JDK 1.1.6 prototype on AIX, and compared its performance with our implementation of the DLG on-the-fly collector. Our results show that the generational on-the-fly collector performs well for most applications, but not for all. For the benchmarks we ran on a multiprocessor, the best reduction in overall program runtime was 25%. However, there was one benchmark for which generational collection increased the overall running time by 4%.
Several properties of the application dictate whether generational collection may be beneficial for overall performance. First, the generational hypothesis must hold, i.e., that many objects indeed die young. Second, it is important that the application does not modify too many pointers in the old generation. Otherwise, the cost of handling inter-generational pointers is too high. And last, the lifetime distribution of the objects should not fool the partitioning into generations. If most tenured objects in the old generation are actually dead, no matter what the promotion policy is, then we will not get increased efficiency during partial collections. If collecting the old generation frees the same fraction of the objects as collecting the young generation, then we may as well collect the whole heap since we do not care about pause times. Furthermore, the overhead paid for maintaining inter-generational pointers will cause an increase in the overall running time of the application.
We used benchmarks from the SPECjvm benchmarks [25] plus two other benchmarks as described in Section 8.2. Benchmarks for which overall application performance improves with generational collection are Anagram (25% improvement), 213 javac (15% improvement) and 227 mtrt (10% improvement). The improvement for Multithreaded RayTracer ranges between 1%-16%, depending on the number of application threads running concurrently. The application that does not do well is 202 jess, for which there is a 4% increase in the overall running time. The two reasons for this deterioration are that lots of objects in the old generation have to be scanned for inter-generational pointers and that most of the objects that get tenured die (become unreachable) in the following full collection.
1.2 Card marking
Hosking, Moss and Stefanovic [16] provide a study of write barriers for generational collection. Among other parameters, they investigate the influence of the card size in a card marking barrier on the overall efficiency. For most of the applications they measured, the best sizes for the cards were 256 or 512 bytes, and the worst sizes were the extremes, 16 or 4096 bytes.
Note that the advantage of small cards is that the indication of where pointers have been modified is more exact, and the collector does not need to scan a big area to find the inter-generational pointers that it needs on the card. However, small cards require more space for the dirty marks, which reduces locality.
In the process of choosing the parameters for our collector, we have run similar measurements with various card sizes. As it turns out, the behavior of an on-the-fly generational collector is different. The best choice for the card sizes is at one of the extremes, depending on the benchmark. We chose to set the card size to the minimum possible. This was the best for most benchmarks and not far from best for the rest. We suspect that the primary reason that our results differ from those of Hosking, et al. [16] is that our collector does not move objects. We provide the details in Section 8.5.3.
1.3 Techniques used and organization
We start with the state of the art DLG on-the-fly collector [11, 10], which we briefly review in Section 2. We then construct our generational collector similar to the work of Demers, et al. [6], presenting it in Section 3. We augment DLG to work better with generations, both by utilizing an additional "color" in Section 4 and also by using a color-toggle trick to reduce synchronization in Section 5. A similar trick was previously used in [21, 17, 7, 22, 19]. Our first promotion policy is trivial: promote after an object survives a single collection. We also study options to promote objects after several collections in Section 6 below. In Section 7 we provide the code of the collector and lower level details appropriate for an implementer. In Section 8 we report the experimental results we measured and justify our choice of parameters. We conclude in Section 9.
2 The collector
We build on the DLG collector [11, 10]. This is an on-the-fly collector that does not stop the program to do the collection. There are two important properties of this collector that make it efficient. First, it employs fine-grained atomicity. Namely, each instruction can be carried out without extra synchronization. Second, it does not require a write-barrier on operations using a stack or registers. The write barrier is required only on modifications of references inside objects in the heap.
The original papers also suggest using thread local heaps, but the design assumes an abundant use of immutable objects as in ML. We did not use thread local heaps.
We start with a short overview of the DLG collector. For a more thorough description and a correctness proof the reader is referred to the original papers [11, 10]. The collector is a mark and sweep collector that employs the standard three color marking method. All objects are white at the beginning of the trace, the root objects are then marked gray, and the trace then continues by choosing one gray object, marking it black, and marking all its white sons gray. This process continues until there are no more gray objects in the heap. The meaning of the colors is: a black object is an object that has been traced, and whose immediate descendants have been traced as well. A gray object is an object that has been traced, but whose sons have not yet been checked. A white object is an object that has not yet been traced. Objects that remain white at the end of the trace are not reachable by the program and are reclaimed by the sweep procedure. Shaded (gray or black) objects are recolored white by sweep. A fourth color, blue, is used to identify free (non-allocated) chunks.
To deal with the fact that the collector is on-the-fly, i.e., it traces the graph of live objects while objects are being modified by the program, some adjustments to the standard mark and sweep algorithm are required. The collector starts the collection with three handshakes with the mutator threads. On a handshake, the collector changes its status, and each mutator thread cooperates (i.e., indicates that it has seen the change) independently. After responding to the first handshake, the write barrier becomes active and the mutators begin graying objects during pointer updates. The second handshake is required for correctness; the behavior of the mutators does not change as a result. While responding to the third handshake, each mutator marks its roots gray, i.e., the objects referenced from its stack. The mutators check whether they need to respond to handshakes regularly during their normal operation. They never respond to a handshake in the middle of an update or the creation of an object. The collector considers a handshake complete after all mutators have responded. After completing the three handshakes, the collector completes the trace of the heap and then sweeps it.
The mutators gray objects when modifying an object slot containing a pointer until the collector completes its trace of the live objects. The amount of graying depends on the part of the collection cycle. Suppose a reference to an object A is modified to point to another object B. Between the first and the third handshake, the mutator marks both A and B gray. After the third handshake and until the end of the sweep, the mutator marks only A as gray.
The mutators also cooperate with the collector when creating an object. During the trace, objects are created black, whereas they are created white if the collector is idle. During sweep, objects are created black if the sweep pointer has not seen them yet (so that they will not be reclaimed). If the sweep pointer has passed them, they are created white so as to be ready for the next collection. If the sweep pointer is directly on the creation spot, the object is created gray. Some extra care must be taken here for possible races between the create and the sweep. However, a simple method of color-toggle allows avoiding all these considerations. We discuss it in Section 5 below.
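To make the coloring and write-barrier behavior just described concrete, here is a minimal, illustrative sketch in Java. It is not the authors' code: the class and method names (TriColor, shadeGray, update) are ours, the collector phases are collapsed into a single enum, and the fine-grained concurrency issues that the DLG collector actually solves are glossed over.

    // Illustrative sketch of three-color marking with a DLG-style write barrier.
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    enum Color { WHITE, GRAY, BLACK, BLUE }                  // blue marks free chunks
    enum Phase { IDLE, SYNC1, SYNC2, ASYNC_TRACING, ASYNC_SWEEPING }

    final class Node {
        volatile Color color = Color.WHITE;
        final Node[] fields;                                 // the pointer slots of the object
        Node(int slots) { fields = new Node[slots]; }
    }

    final class TriColor {
        volatile Phase phase = Phase.IDLE;
        private final Deque<Node> grayObjects = new ArrayDeque<>();

        void shadeGray(Node n) {
            if (n != null && n.color == Color.WHITE) {
                n.color = Color.GRAY;
                grayObjects.push(n);
            }
        }

        // Write barrier: store y into slot i of x, shading as described in the text
        // (both values in sync1/sync2, only the old value while the collector is
        // still tracing, nothing afterwards).
        void update(Node x, int i, Node y) {
            if (phase == Phase.SYNC1 || phase == Phase.SYNC2) {
                shadeGray(x.fields[i]);
                shadeGray(y);
            } else if (phase == Phase.ASYNC_TRACING) {
                shadeGray(x.fields[i]);
            }
            x.fields[i] = y;
        }

        // Trace: blacken gray objects until none remain.
        void trace(List<Node> roots) {
            phase = Phase.ASYNC_TRACING;
            for (Node r : roots) shadeGray(r);
            while (!grayObjects.isEmpty()) {
                Node n = grayObjects.pop();
                for (Node child : n.fields) shadeGray(child);
                n.color = Color.BLACK;
            }
        }
    }

A real implementation would, of course, keep per-thread data structures and use the handshakes to move mutators between phases rather than a single shared field.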
3 Generational collection without moving objects
We describe an approach to generational collection that does not relocate objects. We call a collection of the young generation a partial collection and a collection of the entire heap a full collection.
Our design is similar to the Demers, et al. [6] design for a stop-the-world conservative collector. However, we incorporate features necessary to support on-the-fly collection: clearing the card marks without stopping the threads, an additional "color" for objects created during a collection, and a color toggle to avoid
synchronization between object allocation and sweep.
Instead of partitioning the heap physically and keeping the young generation in a separate place, we
partition the heap logically. For each object, we keep an indication of whether it is old or young. This may
be a one bit indication or several bits giving more information about its age.
The simplest version is the one that promotes objects after surviving one collection. We begin by
describing this simpler algorithm. We discuss an aging mechanism in Section 6 below. Demers [6] notes
that if an object becomes old after surviving one collection, then the black color may be used to indicate
that an object is old. Clearly, before the sweep, all objects that survived the last collection are black. If we
do not turn these objects white during the sweep, then we can interpret black objects as being in the old
generation.
During the time between one collection and the next, all objects are created white and therefore considered
young. At the next partial collection (i.e., collection of the young generation) everything falls quite nicely
into place. During the trace, we do not want to trace the old generation, and indeed, we do not trace black
objects. During the sweep, we do not want to reclaim old objects, and indeed, we do not reclaim black
objects. All live objects become black, thus, also becoming old for the next collection.
Before a full collection (a collection of the old and young generation), we turn the color of all objects
white. Other than that, full collections are similar to partial collections.
3.1 Inter-generational pointers
It remains to discuss inter-generational pointers, pointers in old objects that point to young objects. Since we do not want to trace the old generation during the collection of the young generation, we must assume that the old objects are alive and treat the inter-generational pointers as roots.
How do we maintain a list of inter-generational pointers? Similarly to other generational collectors, we may choose between card marking [26] and remembered sets [23, 29]. (See [18] for an overview of generational collection and the two methods for maintaining inter-generational pointers.) In our implementation, we only used card marking. The reason is that in Java we expect many pointer updates, and the cost of an update must be minimal. Also, we did not have an extra bit available in the object headers required for an efficient implementation of remembered sets.
In a card marking scheme, the heap is partitioned into cards. Initially, the cards are marked "not dirty". A program thread (mutator) marks a card dirty whenever it modifies a card slot containing a pointer. The collector scans the objects on the dirty cards for pointers into the young generation; it may turn off the card mark if it does not find any such pointers on the card. Card marking maintains the invariant that inter-generational pointers may exist only on dirty cards.
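As an illustration of this scheme, the following is a minimal card-table sketch in Java. The names (CardTable, markDirty) and the 512-byte card size are our own choices for the example; the experiments below use card sizes between 16 and 4096 bytes.

    // Illustrative card table: one dirty flag per fixed-size card of the heap.
    final class CardTable {
        static final int CARD_SHIFT = 9;            // 512-byte cards in this sketch
        private final boolean[] dirty;

        CardTable(long heapBytes) {
            dirty = new boolean[(int) (heapBytes >>> CARD_SHIFT) + 1];
        }

        // Mutator side: called from the write barrier on every heap pointer store.
        void markDirty(long address) { dirty[(int) (address >>> CARD_SHIFT)] = true; }

        // Collector side: scan dirty cards for inter-generational pointers,
        // clearing a mark once no such pointer remains on the card.
        boolean isDirty(int card) { return dirty[card]; }
        void clear(int card)      { dirty[card] = false; }
        int cardCount()           { return dirty.length; }
    }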
The size of the cards determines a tradeoff between space and time usage. Bigger cards imply less space required to keep all dirty marks, but more time required by the collector to scan each dirty card to find the inter-generational pointers. We tried all powers of 2 between 16 and 4096 and found that the two extremes provided the best performance (see Section 8.5.3).
3.2 The collector
A partial collection begins by marking gray all young objects referenced by inter-generational pointers; in
particular, the collector marks gray all white objects referenced by pointers on dirty cards. At the same
time, all card marks are cleared. Clearing the marks is okay since all surviving objects are promoted to the
old generation at the completion of the collection, so that all existing inter-generational pointers become
intra-generational pointers. For a more advanced aging mechanism (as in Section 6) we would have to check
to determine whether a card mark could be cleared.
After handling inter-generational pointers, all mutators are "told" to mark their roots using the handshake mechanism. This is followed by trace, which remains unchanged from the non-generational collector, and then sweep. Sweep is modified so that it does not change the color of black objects back to white.
A full collection begins by clearing card marks, without tracing from the dirty cards. The collector also recolors all black objects to white, allowing any unreachable object to be reclaimed in a full collection. After that, the mutators are "told" to mark their roots and the collector continues with trace and sweep as above.
3.3 Triggering
We use a simple triggering mechanism to trigger a partial collection. A parameter representing the size of
the young generation is determined for each run, and a partial collection is triggered after allocating objects
with accumulating size exceeding the predetermined size (see footnote 1). To trigger a full collection, we use the standard method of starting the concurrent collection when the heap is "almost" full.
Footnote 1: Because of our heap manager, we cannot trigger exactly at this time. Thus, the predetermined bound serves as a lower bound on the trigger time.
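A minimal version of such a trigger might look as follows; the class name and the byte counter are our own illustration, not the prototype JVM's actual allocation path.

    // Illustrative allocation-volume trigger for partial collections.
    final class PartialGcTrigger {
        private final long youngGenerationBytes;   // e.g., 4 MB in the configuration chosen in Section 8.3
        private long allocatedSinceLastGc = 0;

        PartialGcTrigger(long youngGenerationBytes) {
            this.youngGenerationBytes = youngGenerationBytes;
        }

        // Called on each allocation; returns true when a partial collection should start.
        boolean onAllocation(long bytes) {
            allocatedSinceLastGc += bytes;
            if (allocatedSinceLastGc < youngGenerationBytes) return false;
            allocatedSinceLastGc = 0;
            return true;
        }
    }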
4 Dealing with premature promotion
When promoting all objects that survive a collection, there are infant objects created just before the start of
the collection, which are immediately made old. These objects may die young, but they have already been
promoted to the old generation, and we will not collect them until the next full collection. In an on-the-fly collection, objects are also created during the collection cycle, thus compounding this promotion problem.
We have added a simple mechanism to avoid promoting objects created during the collection to the old
generation. A more advanced mechanism that keeps an age for each object is described in Section 6 below.
This is done by introducing a new color for objects created during a collection cycle. Instead of creating
objects white or black depending on the stage of the collection as in the DLG algorithm, we create objects
yellow during the collection. Yellow objects are not traced by the collector, and the sweep turns yellow
objects back to white (without reclaiming them). Thus, the collector does not promote them to the old
generation. One subtle point, which we discuss in the more technical section (see Section 7 below), forces an
exception to the rule. In particular, between the first and the third handshakes of the collector, the mutators
also mark yellow objects gray.
5 Using a color-toggle
Recall that during the collection, mutators allocate all objects yellow. Trace changes the color of all reachable
white objects to black. In the design described so far, sweep reclaims white objects and colors them blue
(the color of non-allocated chunks), and changes the color of yellow objects to white. Thus, at the end of
the sweep, there are no remaining white objects.
Instead of recoloring the yellow objects, sweep can employ a color toggle mechanism similar to previous
work [21, 17, 7, 2, 22, 19]. The color toggle mechanism exchanges the meaning of white and yellow, without
actually changing the color indicators associated with the objects. Thus, live objects remain either black
or yellow, and mutators go on coloring new objects yellow, so that yellow plays the role of white from the
previous collection cycle. When a new collection begins, the mutators begin coloring new objects white, so
that white begins playing the role of the yellow color from the previous cycle.
To implement the color toggle, we use two color names: the allocation color and the clear color. Initially,
the allocation color is white, and the clear color is yellow. At all times, objects are allocated using the
allocation color. At the beginning of the collection cycle, the values of the allocation color and the clear
color are exchanged. In the first cycle this means that the allocation color becomes yellow and the clear
color becomes white. During the trace, all reachable objects that have clear color are turned gray. Objects
that have the allocation color are not traced and their color does not change. During the sweep, all objects
with clear color are reclaimed.
Using this toggle we do not need to turn yellow objects into white during the sweep, but more important,
we avoid the race between the create and the sweep. We do not need to know where the sweep pointer is
in order to determine the color of a new object. A newly allocated object is always assigned the current
allocation color.
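The toggle itself is tiny; the following Java sketch (with our own names) only records the two roles and swaps them once per cycle:

    // Illustrative allocation/clear color toggle.
    enum GcColor { WHITE, YELLOW, GRAY, BLACK, BLUE }

    final class ColorToggle {
        private GcColor allocationColor = GcColor.WHITE;   // color given to newly created objects
        private GcColor clearColor      = GcColor.YELLOW;  // color that sweep reclaims

        GcColor allocationColor() { return allocationColor; }
        GcColor clearColor()      { return clearColor; }

        // Called once at the beginning of each collection cycle; the roles of the
        // two colors are exchanged without touching any object headers.
        void toggle() {
            GcColor tmp = clearColor;
            clearColor = allocationColor;
            allocationColor = tmp;
        }
    }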
Remark 5.1 Our discussion here is adequate for the generational collector, but one may easily modify
the original collector to run with the same improvement by toggling the black and the white colors. In
the comparison between a collector with and without generations, we feel that it is not fair to let only the
generational collector enjoy this improvement. Therefore, we have also added this modification to the collector
that does not use generations. Thus, the comparison we make has to do with generations only.
6 An aging mechanism
In the algorithm described so far, the age indication is combined with the colors and we promote all objects
that survive one collection. This promotion policy is extremely primitive, and the question is whether a
parameterized promotion policy may help. To do that, we keep an age for each object, i.e., the number
of collections that it has survived. This age is initialized to 0 at creation and is incremented at sweep
time. We also fix a predetermined parameter determining the threshold for promotion to the old generation. After an object reaches the threshold, the sweep procedure stops incrementing its age. We chose to fix a predetermined threshold, but dynamic policies could easily be implemented.
Using the aging mechanism, old objects continue to be colored black. However, the trace colors reachable
objects black, whether they are young or old. Thus, a modification to sweep is required: sweep recolors
reachable objects, which are young (age less than the threshold), to the allocation color, and continues to
leave old objects black, and reclaim objects with the clear color. The pseudo-code for the sweep procedure
appears in Figure 5.
Several changes to the card marking mechanism are also required to support aging. Simple clearing of
the card marks at the beginning of each collection no longer works, since inter-generational pointers in the
current collection cycle may remain inter-generational pointers in the next cycle. Furthermore, we must also
ensure that inter-generational pointers are recorded correctly during the collection cycle. A race may occur
between setting and resetting the card marks. We elaborate on the race in the more technical section, see
Section 7 below.
At the beginning of a partial collection, the collector scans the card table and colors gray all young
objects referenced by pointers on dirty cards. If no young object is referenced from a given card, then the
collector clears the card's mark. Then the collector toggles the allocation and clear colors and continues with
the handshakes, trace and sweep.
For a full collection, the collector does not trace inter-generational pointers. Instead, it recolors all black
objects with the allocation color. Then it toggles the allocation and clear colors and continues with the
handshakes, trace and sweep. In the initialization done before a full collection (see InitFullCollection in
Figure
we do not clear the dirty bits. The reason is that they indicate dirty cards with inter-generational
pointers that may still be relevant in the following partial collections.
An implementation question is where to keep the age. One option is with the object, and the other is
in a separate table. We chose to keep it in a separate table. We did not have room in the object headers.
More importantly, note that sweep (for both partial and full collections) goes through the ages of all objects
to increase them. Thus, for reasons of locality, it is better to go through a separate table than to touch
all the objects in the heap. We keep a byte per age (although two or three bits are usually enough). We
could locate the age in the same byte with the card mark or with the color. However, that would require
synchronization while writing the byte, e.g., via a compare and swap instruction. Empirical checks show that
such synchronization is too costly for a typical Java application. Note that such a synchronized instruction
would be required for a good fraction of all pointer modifications.
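One possible shape for such a side table is sketched below in Java; the names, the byte-per-object layout, and the fixed threshold are our illustration of the design just described, not the actual data structure of the prototype JVM.

    // Illustrative per-object age table with a fixed tenuring threshold.
    final class AgeTable {
        private final byte[] age;            // one byte per object slot, initially 0
        private final int threshold;         // promotion threshold (Section 8.5.2 tries 4, 6, 8 and 10)

        AgeTable(int maxObjects, int threshold) {
            this.age = new byte[maxObjects];
            this.threshold = threshold;
        }

        void resetOnAllocation(int obj) { age[obj] = 0; }
        boolean isOld(int obj)          { return age[obj] >= threshold; }

        // Called by sweep for every surviving object; counting stops once the
        // object is considered old, exactly as described for the aging sweep.
        void survivedCollection(int obj) {
            if (age[obj] < threshold) age[obj]++;
        }
    }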
7 Some technical details
In this section we provide pseudo-code and some additional technical details. This paper is written so that
the reader may skip this section and still get a broad view of the collector.
Our purpose in presenting the code is to show how the generational mechanism fits into the DLG collector.
Thus, our presentation of the code concentrates on the details related to generations. We do not present
details of the mechanism for keeping track of the objects remaining to be traced, nor do we present details
of a thread-local allocation mechanism necessary to avoid synchronization between threads during object
allocation. See the DLG papers [11, 10] for the details of these mechanisms. One other difference with DLG
is that we separate the handshake into two parts, postHandshake and waitHandshake, instead of using a
second collector thread.
Figure 1 shows the mutator routines, which are influenced by the collector: the write barrier (update routine), object allocation (create routine), and the cooperate routine, which the mutator must call regularly (e.g., at backward branches and invocations). In the code the notation heap[x, i] denotes slot i of the object
at address x. Figure 2 shows the overall collection cycle and in Figure 3 we present routines called by the
collector. We refer to the code below.
We assume that the reader is familiar with the DLG collector [11, 10], and we use the following terminology
taken from their paper. The period between the first handshake and the second is denoted sync1, the period
between the second handshake and the third is denoted sync2, and the rest of the time, i.e., after the third
handshake and up until the beginning of the next collection cycle is denoted async. Each mutator has its
own perception of these periods, depending on the times that it has cooperated with the handshake.
The most delicate issue for the generational collector is the proper handling of the card mark: how to set
and reset it, properly avoiding races and maintaining correctness. We partition the discussion into the simple algorithm and the aging algorithm. We assume a table with a designated byte for each card holding the
card mark. The byte does not have any other use.
7.1 The simple algorithm
First, we consider the handling of the card marks for the simplest algorithm, without the yellow color or
the color toggle, in particular the algorithm of Section 3. Using this algorithm, the collector marks all live
objects black and promotes them. Thus, an inter-generational pointer can be created only after the trace is complete, and card marks can be cleared at the beginning of the cycle without fear of losing a mark due
to a race condition with a mutator.
Now we add the yellow color (Section 4). The collector does not trace objects that are created yellow during the cycle. Thus, it must keep a record of any pointer referencing a yellow object from any other object. (Actually, we are only interested in pointers from black objects, but we do not perform this filtering
in our collector.) To solve the problem of keeping correct card marks for parents of yellow objects, it is
enough to make sure that the order of operations at the beginning of a collection cycle is as follows: scan the
card table and clear the dirty marks and only after that start creating yellow objects. Notice that ClearCards
(code in Figure 3) precedes SwitchAllocationClearColors (code in Figure 3) in the collection cycle (code in Figure 2).
Next we add the color toggle (Section 5). There is a window of time between the check of an object A for
inter-generational pointers during the scan of the card table and the color toggle. If after the collector checks
A, a mutator creates a new inter-generational pointer in A referencing a yellow object B, the collector will
miss this pointer during the current collection. Furthermore, after the color toggle, the object B becomes
white (i.e., having the clear color) and it might be collected in the current (partial) collection.
To solve this, we make an exception to the treatment of yellow objects by the DLG write barrier and
treat them the same as white objects during sync1 and sync2 (between the first and third handshakes). This
means that in this (usually short) period of time, whenever the DLG write barrier would shade a white
object gray, it will also shade a yellow object gray. See MarkGray in Figure 1.
An additional point that needs to be verified is that the tracing always terminates. Without the yellow color modification, all (live) objects turn from white to gray and from gray to black. Since the number of live objects is finite, all of them turn black in the end, and the tracing always terminates. This is still the
case here. A yellow object either stays yellow till the end of the trace, or it may turn gray and later black.
After performing these necessary modications, we note that there is no need for card marking during
sync1 and sync2. Thus, we get a small gain in efficiency: card marking is required only during the async
stage. Notice that MarkCard is called only during async in the write barrier code in Figure 1.
To summarize, card marking occurs only during async. The clearing and checking of the card marks by the collector is done after the first handshake, and before the second handshake. After clearing the card marks, the collector toggles the (clear and allocation) colors; thus, mutators create new objects with the "yellow" color. Yellow objects may be shaded gray by the write barrier in sync1 and sync2.
7.2 The aging algorithm
Next, we discuss the aging algorithm. Here, the collector must keep careful track of inter-generational
pointers during all collector stages. We have two concerns. First, the choice of which card marks to clear
Update(x, i, y):
  If (statusm ≠ async) then
    MarkGray(heap[x, i])
    MarkGray(y)
    heap[x, i] := y
  else if (the collector is tracing) then
    MarkGray(heap[x, i])
    heap[x, i] := y
    MarkCard(x)
  else
    heap[x, i] := y
    MarkCard(x)

Create:
  Pick x ∈ free
  color(x) := allocationColor
  Return x

Cooperate:
  If (statusm ≠ statusc) then
    For each x ∈ roots:
      MarkGray(x)
    statusm := statusc

MarkGray(x):
  If ((color(x) = clearColor) or (color(x) = allocationColor and statusm ≠ async)) then
    color(x) := gray

Figure 1: The mutator routines
clear:
  If (full collection) then
    InitFullCollection
  Handshake(sync1)

mark:
  postHandshake(sync2)
  If (partial collection) then
    ClearCards
  SwitchAllocationClearColors
  waitHandshake
  postHandshake(async)
  mark global roots
  waitHandshake

trace:
  While there is a gray object:
    Pick a gray object x
    MarkBlack(x)

sweep:
  For each object x in the heap:
    If (color(x) = clearColor) then
      color(x) := blue

Figure 2: The collection cycle
ClearCards:
  For each card c:
    If (dirty(c)) then
      dirty(c) := false
      For each object x on c:
        For each pointer i ∈ x do:
          If (color(i) = clearColor) then
            color(i) := gray

SwitchAllocationClearColors:
  temp := clearColor
  clearColor := allocationColor
  allocationColor := temp

InitFullCollection:
  For each object x in the heap:
    If (color(x) = black) then
      color(x) := allocationColor
  For each card c:
    dirty(c) := false

MarkBlack(x):
  If (color(x) ≠ black) then
    For each pointer i ∈ x do:
      MarkGray(i)
    color(x) := black

Handshake(s):
  postHandshake(s)
  waitHandshake

postHandshake(s):
  statusc := s

waitHandshake:
  For each m ∈ mutators:
    wait for statusm = statusc

Figure 3: The collector routines
must be done with care. Not all are reset. Second, at the same time that the collector clears a card mark, a
mutator may set it. In this case, we must make sure that the card mark remains set if there is a pointer in
an object associated with this card to a young object.
To solve the first problem, the mutators set the card mark throughout the collection, also during sync1 and sync2 (see Figure 4). In order to clear the card mark, the collector checks first that no pointer to a
young object exists on the card, and then clears the mark. However, there could still be a race between the
clearing by the collector and the setting by the mutator.
In particular, the following interleaving of mutator and collector actions is problematic (say the dirty
mark in question is associated with card A):
1. The collector thread scans card A, finds out that there is no inter-generational pointer and determines
that the card's mark can be cleared.
2. Before the collector actually clears the mark, the program thread writes an inter-generational pointer
into A and sets the card mark.
3. The collector clears the card mark since its check from Step (1) allows this.
The outcome of this course of events is that an inter-generational pointer is now located on an unmarked
card. In the next (partial) collection, the referenced object may be skipped by the trace and reclaimed
although it is live. To solve this race we let the collector and mutator act as follows. The collector acts in
three steps instead of the naive two steps. In Step 1, the collector resets the card mark. In Step 2, it checks
whether the card mark can be cleared, i.e., whether there are no young objects referenced from A. Finally,
in Step 3, if the answer of Step 2 is \no", the collector sets the card mark back on. (This idea is encoded in
the ClearCards routine in Figure 6.) The update of the mutator involves two steps. In Step 1 it performs
the actual update, and in Step 2 it sets the card mark. The order of steps is important in both cases. (This
can be seen in the Update routine in Figure 4.)
We claim that the race is no longer destructive. Suppose a mutator is updating a slot on card A,
storing an inter-generational pointer. We assume that before the update the object did not contain other
inter-generational pointers; thus, it is crucial to get the new update noticed with respect to recording an
inter-generational pointer. At the same time, the collector is checking whether the dirty bit of A can be
erased and erases it if necessary. We assume that all processors see the stores of a particular processor in
the same order. There are two possible cases:
Case 1: The mutator sets the card mark before the collector clears it. Since the mutator
sets the mark after doing the actual update, the mutator must have performed the update before the
collector cleared the card mark. Since the collector checks for inter-generational pointers after clearing
the card mark, we get that the update was performed before the collector checked for inter-generational
pointers. Thus, the collector's check will nd the inter-generational pointer and the collector will set
the card mark.
Case 2: The mutator sets the card mark after the collector clears it. In this case, the card
mark will remain set as required.
In summary, if a new inter-generational pointer is created, then the card mark will be properly set and this
pointer will be noticed during subsequent collections.
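The ordering argument can be made concrete with a small Java sketch. The names are ours, a single pointer slot stands in for a whole card, and the memory-ordering guarantees of a real machine are approximated with volatile and AtomicBoolean; it only illustrates the clear-then-rescan versus store-then-mark discipline described above.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Illustrative sketch of the collector's three-step clear against the
    // mutator's two-step update (one pointer slot stands in for a card).
    final class CardRaceSketch {
        static final class YoungObject { volatile boolean gray; }

        private final AtomicBoolean dirty = new AtomicBoolean(false);
        private volatile YoungObject slot;            // a pointer slot located on this card

        // Mutator: step 1 performs the store, step 2 sets the card mark.
        void update(YoungObject y) {
            slot = y;                                 // the actual pointer update
            dirty.set(true);                          // mark the card only afterwards
        }

        // Collector: step 1 clears the mark, step 2 rescans the card, and
        // step 3 restores the mark if a young object is still referenced.
        void clearCard() {
            if (!dirty.get()) return;
            dirty.set(false);                         // clear before checking
            YoungObject y = slot;                     // rescan after the clear
            if (y != null) {
                y.gray = true;                        // treat it as a root of the partial trace
                dirty.set(true);                      // the card stays dirty
            }
        }
    }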
8 Experimental results
Our goal is to compare the on-the-fly collector with and without generations, and to compare the effects of choices for the parameters governing the generational version, e.g., size of cards, size of young generation, use of aging, etc. We implemented both the original on-the-fly collector (see footnote 2) and the generational on-the-fly
Footnote 2: For a fair comparison, we also introduced a black-white color toggle in the original on-the-fly collector.
Update(x, i, y):
  If (statusm ≠ async) then
    MarkGray(heap[x, i])
    MarkGray(y)
  else if (the collector is tracing) then
    MarkGray(heap[x, i])
  heap[x, i] := y
  MarkCard(x)

MarkGray(x):
  If (color(x) = clearColor) then
    color(x) := gray

Figure 4: Aging version: modified mutator routines
clear:
  If (full collection) then
    InitFullCollection
  Handshake(sync1)

mark:
  postHandshake(sync2)
  If (partial collection) then
    ClearCards
  SwitchAllocationClearColors
  waitHandshake
  postHandshake(async)
  mark global roots
  waitHandshake

trace:
  While there is a gray object:
    Pick a gray object x
    MarkBlack(x)

sweep:
  For each object x in the heap:
    If (color(x) = clearColor) then
      color(x) := blue
    else if (color(x) = black) then
      If (age(x) < threshold) then
        age(x) := age(x) + 1
        If (age(x) < threshold) then
          color(x) := allocationColor

Figure 5: Aging version: The collection cycle
ClearCards:
  For each card c:
    If (dirty(c)) then
      dirty(c) := false
      For each object x on c:
        If (x references a young object) then
          For each pointer i ∈ x do:
            MarkGray(i)
          MarkCard(c)

InitFullCollection:
  For each object x in the heap:
    If (color(x) = black) then
      color(x) := allocationColor

Figure 6: Aging version: modified collector routines
collector in a prototype AIX JDK 1.1.6 JVM. Measurements were done on a 4-way 332MHz IBM PowerPC
604e , with 512 MB main memory, running AIX 4.2.1. Additional measurements on a uniprocessor were run
on a PowerPC with 192 MB main memory, running AIX 4.2.
All runs were executed on a dedicated machine. Thus, although elapsed times are measured, the variance
between repeated runs is small. All runs were done with initial heap size of 1 MB and maximum heap size
of MB. The calculation of the trigger for a full collection was the same with and without generations. We
verified that the working set for all runs fit in main memory, so that there were no effects due to paging.
8.1 Measuring elapsed time for an on-the-fly collector
A delicate point with an on-the-fly collector is how to measure its performance. If we run a single-threaded
application on a multiprocessor, then the garbage collector runs on a separate processor from the application.
If we measure the elapsed time for the application, we do not know how much time the collector has consumed
on the second processor.
In the real world, the server handles many processes and the second processor does not come for free. In
order to get a reasonable measure of how much CPU time the application plus the garbage collector actually
consume, we ran four simultaneous copies of the application on our 4-way multiprocessor. This ensured that
all the processors would be busy all the time, and the more efficient garbage collector would win. Each
parallel run was repeated 8 times, and the average elapsed time was computed.
In addition, we measured the improvement of generational collection on a uniprocessor. This is not a
typical environment for an on-the-fly collector, but it was interesting to check whether generations help in
this case as well (and they usually do).
8.2 The benchmarks
Most of our benchmarks are taken from the SPECjvm benchmarks [25]. Descriptions of the benchmarks can
be found on the Spec web site [25]. We ran all the SPECjvm benchmarks from the command line and not
through the harness. For all tests we used the "-s100" parameter.
We also used two additional benchmarks. The first is an IBM internal benchmark called Anagram [15].
This program implements an anagram generator using a simple, recursive routine to generate all permutations
No. of threads      2       4       6       8       10
Improvement         1.3%    2.6%    10.6%   16.0%   11.7%
Figure 7: Percentage improvement (elapsed time) for multithreaded Ray Tracer on a 4-way multiprocessor
Benchmark      Multiprocessor Improvement      Uniprocessor Improvement
Anagram        25.0%                           32.7%
Figure 8: Percentage improvement for Anagram
of the characters in the input string. If all resulting words in a permuted string are found in the dictionary,
the permuted string is displayed. This program is collection-intensive, creating and freeing many strings.
The second is a code modification of the 227 mtrt [5] from the SPECjvm benchmarks [25] in order to
make it more interesting on a multiprocessor machine. The program 227 mtrt is a variant of a Ray tracer,
where two threads each render the scene in an input file, which is 340 KB in size [5]. 227 mtrt runs on matrices of 200×200 and uses 2 concurrent threads. We modified it to run on a bigger matrix of dimensions 300×300 and we also parametrized the number of rendering threads. We call this modification multithreaded Ray Tracer. The modified code is available on request for SPECjvm licensees.
8.3 The choice of parameters
For each application, a different choice of the parameters governing the generational collection seems to yield
best performance. On the average, the best choice of parameters turns out to be object marking (i.e., card
marking with 16 bytes per card) without the advanced aging mechanism and the best size of the young
generation turns out to be 4 megabytes (we also tried 1, 2 and 8 megabytes for the young generation). In
the next section (Section 8.4), we present results for this set of parameters. In Section 8.5 below, we justify
our choice by comparing the performance of the algorithm with aging and for various settings of the other
parameters.
8.4 The results
In Figure 7 we present the percentage improvement for the multithreaded Ray Tracer benchmark, described
in Section 8.2 above. The number of application threads varies from 2 to 10. Generations perform very well
for it.
Next, in Figure 8, we present the improvement generational collection yields for the Anagram benchmark. Here, generational collection is also very beneficial. In Figure 9 we examine the applications of the SPECjvm benchmark. As one may see, for most applications generations do well. We omit the results for the benchmarks 200 check and 222 mpegaudio, since they do not perform many garbage collections and their performance is indifferent to the collection method.
The performance of the benchmarks either gains a boost from generational collection or remains virtually unchanged, except for two benchmarks, 202 jess and 228 jack, which suffer a performance decrease.
To account for the differences between the applications, we measured several runtime properties of these applications. As expected, an application performs well with generational collection if many objects die young and if pointers in the old generation do not get frequently modified. The decrease in performance for 202 jess and 228 jack originates from several reasons, some of which are shown in our measurements: First, the lifetime of objects was not typical for generational collection - they die soon after being promoted, unless
one makes a huge young generation. Second, for 202 jess 36.2% of the objects that are scanned during
partial collection are scanned because they are dirty objects in the old generation. This is a high cost for
Benchmark         Multiprocessor Improvement    Uniprocessor Improvement
201 compress       0.0%                          2.0%
202 jess          -3.7%                         -2.5%
228 jack          -2.12%                        -7.7%
Figure 9: Percentage improvement for SPECjvm benchmarks
Benchmark       Percent time GC active   No. partial GC   No. full GC   Percent time GC w/o generations   No. of GC w/o generations
201 compress    1.7%                     5                15            1.2%                              17
202 jess        13.3%                    70               2             14.8%                             51
228 jack        7.7%
Anagram         62.8%                    152              8             78.9%                             56
Figure 10: Use of garbage collection in the applications.
manipulating inter-generational pointers. However, note that the success or failure of the generational collector is influenced also by factors that we did not measure. For example, the increased locality of the heap caused by frequent collections is hard to measure.
In the remainder of this section, we present measurements of the applications' runtime properties. These measures were taken on the multiprocessor while running a single copy of the application. We start in Figure 10 with the amount of time spent on garbage collection. These numbers indicate how much a change in the garbage collection mechanism may affect the overall
running time of the application. For example, the program that spends the most time garbage collecting
during the run is Anagram, whereas programs that spend a small part of their time in garbage collection
are 201 compress and 209 db. We also include the number of collection cycles executed in each of the
applications.
Next, in Figures 11 and 12 we measure the "generational behavior" of the benchmarks involved. In particular, we measure how many objects are scanned during the collection, how many of them are scanned
due to inter-generational pointers and what percentage of the objects are freed. For partial collection, we
report what percent of the objects of the young generation are collected. For the full collection, we report what percentage of the allocated objects in the whole heap are reclaimed (allocated objects are
counted as the sum of the objects freed and the objects that survive the collection). For example, in the
benchmark 201 compress, objects do not tend to die young. However, for most of the other applications
almost all objects die young. Next, we consider the maintenance of inter-generational pointers. We see,
for example, that for 202 jess 36.2% of the objects scanned during partial collection are dirty objects in
the old generation. This high cost for manipulating inter-generational pointers is one of the reasons for
the deterioration in performance. Finally, we look at how many objects are reclaimed in partial and in full
collections. For the applications 228 jack and 202 jess, objects that got tenured in the old generation did
not survive long. We can see that almost all objects were collected during the full collections. This non-
generational behavior is another reason why generations did not perform well for 202 jess and 228 jack. If
non-generational collections can free a similar percentage of objects as partial collections, then we do not gain
efficiency with the partial collections, whereas we do pay the overhead cost for maintaining inter-generational
Benchmark       Avg. no. of old objects scanned    Avg. no. of objects scanned,    Avg. no. of objects scanned,    Avg. no. of objects scanned
                for inter-gen pointers             partial collections             full collections                in collection w/o generations
201 compress    3                                  168                             4789                            4778
202 jess        1373                               3797                            25411                           25446
228 jack        151                                4890                            14972                           11241
Figure 11: Generational characterization of the applications - Part 1.
Benchmark       Percentage of bytes freed    Percentage of objects freed    Percentage of objects freed    Percentage of objects freed
                in partial collections       in partial collections         in full collections            in collections w/o generations
201 compress    19.29%                       40.43%                         2.6%                           2.3%
209 db          97.66%                       99.77%                         22.2%                          43.1%
202 jess        98.02%                       97.88%                         87.2%                          86.3%
213 javac       71.25%                       68.67%                         44.7%                          26.8%
228 jack        91.63%                       96.58%                         90.8%                          94.7%
Anagram         86.22%                       93.43%                         14.2%                          13.2%
Figure 12: Generational characterization of the applications - Part 2.
pointers.
Next, in Figure 13 and Figure 14, we look at the cost and performance of partial and full collections for
the various benchmarks. The cost is the time required to run the collection, and the performance is the
number of objects collected (or their accumulated size). Note that for a mark and sweep algorithm, the cost
of sweep is similar for the partial and the full collections. It is only the tracing times that get shorter. Thus,
the partial collections take less time but not drastically less.
Figure 10 shows the number and types of collection cycles for the benchmarks. For all benchmarks the
number of full collections when using the generational collector is less than the number of full collections
when using the non-generational collector.
Finally, we examine the number of pages touched by the collector during the various collections; see Figure 15. We measure the pages touched during trace and sweep, including all the tables the collector uses (such as the card table). Naturally, the number of pages touched during the partial collections is smaller than the number of pages touched during full collections. The smallest ratio is for the Anagram benchmark,
than the number of pages touched during full collections. The smallest ratio is for the Anagram benchmark,
where the number of pages touched during partial collections is about 20% of the number touched during
full collections. The largest ratio is for the 213 javac benchmark. There, the number of pages touched in
partial collections is about 70% of the number of pages touched during full collections. These positive results
match similar measurements in Demers, et al. [6].
8.5 Tuning parameters
In this section we explain the choice of parameters. We compare the various card sizes, the method of aging
versus the simple promotion method, and we evaluate various sizes for the young generation. For the aging
Benchmark       Avg. time active       Avg. time active    Avg. time active GC (ms)
                partial GC (ms)        full GC (ms)        w/o generations
201 compress    17                     35                  31
202 jess        61                     116                 87
228 jack
Anagram         52                     429                 346
Figure 13: Elapsed time of collection cycles
Benchmark       Avg. no. of objects    Avg. no. of objects    Avg. no. of objects freed      Avg. space freed      Avg. space freed    Avg. space freed in
                freed in partial       freed in full          in collection w/o generations  in partial            in full             collection w/o generations
201 compress    112                    112                    111                            1057472               6922551             67953331
202 jess        106185                 166720                 160458                         3934524               6759448             5982237
228 jack        133671                 186370                 202109                         3677861               6905298             5841292
Anagram         12251                  30088                  41370                          3515684               13279332            12590566
Figure 14: Average gain from collections
Benchmark       Pages touched by       Pages touched by    Pages touched
                partial collections    full collections    w/o generations
201 compress    76                     124                 109
202 jess        1304                   2227                2048
228 jack        1199                   2052                1767
Anagram         1082                   4938                5054
Figure 15: Average no. of pages touched by a GC
Number of threads                               2       4       6       8       10
Block marking with 1m young generation         -3.9    -8.8     5.0     9.0     8.2
Block marking with 2m young generation          0.8    -7.1     6.0     9.8     8.7
Block marking with 4m young generation          1.1    -2.5     6.6     9.8     7.4
Block marking with 8m young generation         -0.9     4.7     7.7    10.9     8.8
Object marking with 1m young generation        -4.7    -2.6     4.3    14.0    13.0
Object marking with 2m young generation         1.4    -4.4     5.9    11.3     8.6
Object marking with 4m young generation         1.3     2.6    10.6    16.0    11.7
Object marking with 8m young generation         1.9     8.0    13.2    18.8    15.4
Figure 16: Tuning the size of the young generation: percentage of improvement of generations for multi-threaded Ray Tracer.
                  Block marking                     Object marking
Benchmark         1m      2m      4m      8m       1m      2m      4m      8m
201 compress     -0.41    0.19   -0.05    0.46    -0.04    0.11    0.02    0.29
202 jess        -22.44  -12.97   -5.05   -1.55   -13.77   -8.72   -3.7    -5.66
228 jack        -12.14   -6.27   -2.83  -14.84    -6.85   -3.45   -2.12   -2.23
Anagram          14.43   30.03   37.17   38.73    -8.67   12.06   24.67   26.42
Figure 17: Tuning the size of the young generation: percentage of improvement of generations for the SPECjvm benchmarks
method, we compare performance for various tenuring thresholds. The results are summarized in several
tables, as described below.
8.5.1 Size of the young generation
We begin by evaluating various sizes of the young generation. We compare the sizes 1, 2, 4, and 8 megabytes
as possible alternatives for the size of the young generation. We present measurements for the two extreme
cases of card sizes: block marking, where the card size is 4096 bytes, and object marking, where the card size is 16 bytes. We will see in Subsection 8.5.3 below that these card sizes are the best for most applications. The results for multi-threaded Ray Tracer can be found in Figure 16 and for the SPECjvm benchmarks [25] in Figure 17. The results do not point to a single best size for all benchmarks, but on the average, the best performance is obtained for a size of 4 megabytes for the young generation. In the sequel we fix the young generation to 4 megabytes, except when evaluating the aging mechanism.
8.5.2 The aging mechanism
The results for aging are disappointing, as can be seen from the results in Figure 18 and Figure 19. We vary
the size of the young generation (1, 2, 4, and 8 megabytes) and the age threshold for promotion to the old
generation (4, 6, 8, and 10). Recall that an object is allocated with age 1, and its age gets increased for each
collection it survives. We chose the card size to be the smallest possible, which is justified by the analysis of
card sizes in Section 8.5.3 below.
Note that if we use the simple promotion mechanism, each object gets old at the age of 2. Thus, it is possible to compare the overhead of the aging method itself by comparing the simple promotion mechanism with aging where the old age is 2. It turns out that our aging method does have a big overhead. See Figure 20. It shows the percentage of improvement (actually deterioration) when using aging with 2 ages
                 Age 4 is old                     Age 6 is old
Benchmark        1m      2m      4m      8m      1m      2m      4m      8m
202 jess       -17.7   -15.8   -10.1    -7.8   -12.6   -13.7   -10.3    -9.2
209 db          -2.4    -0.7    -1.4    -0.4    -3.1    -1.3    -1.1    -0.1
228 jack       -11.4    -6.7    -1.8    -1.5   -12.6    -6.4    -2.5    -0.9
Anagram        -10.8     1.9    20.0    29.6   -11.2     0.8    18.3    26.7
Figure 18: Percentage of improvement for the aging mechanism over a non-generational collector for the SPECjvm benchmarks (part 1)
Object Mark With Aging
                 Age 8 is old                     Age 10 is old
Benchmark        1m      2m      4m      8m      1m      2m      4m      8m
201 compress
202 jess       -14.6   -17.3    -5.1    -3.8   -17.6    -9.4    -4.9    -3.6
213 javac      -27.0   -13.1     3.6    17.4   -33.5   -16.2     3.2    15.5
228 jack       -11.6    -3.5    -2.0    -0.4   -14.4    -4.2    -2.6    -1.2
Anagram        -11.8    -0.4    16.1    23.9   -11.7    -1.6    14.9    23.4
Figure 19: Percentage of improvement for the aging mechanism over a non-generational collector for the SPECjvm benchmarks (part 2)
Benchmark        1m      2m      4m      8m
201 compress     0.09   -0.18   -0.97   -0.16
202 jess        -3.21   -3.43   -3.54   -1.24
228 jack        -3.01   -2.88   -1.48    0.40
Anagram         -2.11   -9.10   -3.63    3.34
Figure 20: The percentage of improvement (or the cost) of the aging mechanism with 2 ages over the simple promotion method.
instead of the standard method. (As before, we use object marking, i.e., the smallest card size.) It may be
possible to improve the performance of the aging algorithm by changing the algorithm or data structures.
This is something that we have not attempted in this work. Perhaps a simple modification, such as locating
the value of the age inside the object instead of keeping a table with the ages, may help by improving the
locality of reference. In light of the results, we have chosen not to use aging.
8.5.3 Choosing the size of the cards
Finally, we ran some measurements to find out what the best card size is. We varied the size from 16 to 4096, including all powers of 2. The best card size depends on the behavior of the application. Note that since we do not move objects in the heap, the objects of the young and old generations are not segregated.
There is an interesting phenomenon about the scanning of the cards. If the dirty objects are concentrated in the heap in a specific location (and it can be big or small), then smaller cards do not shorten the scan. For example, if the first 1/4 of the heap contains dirty objects, then if we take cards whose size is a quarter of the heap or cards whose size is 16 bytes, then we'll have the same objects to actually scan on dirty cards. However, if the dirty objects are spread randomly in the heap then refining the card sizes is useful. The finer the cards are, the fewer objects we scan. Thus, the nature of the application determines how useful small
cards are.
But there are more considerations. For example, smaller cards imply a bigger card table. The card table is accessed on each pointer modification and may influence the locality of reference. A big table that is accessed frequently in a random manner decreases locality. Here, it seems that the consideration is the opposite of the previous one. If the heap access of the application is randomly distributed, then a big table is bad, so bigger cards are required. If the heap accesses are concentrated, then the access of the card table will be concentrated even for a big table, so smaller cards are fine. The big question of which consideration
is dominant is the frequency of accesses. Note that a card gets dirty even if touched only once, and that is
the only relevant issue for the consideration of the previous paragraph. However, for locality of reference it
matters how frequently the cards are touched. The frequency may determine which of these considerations
wins and what card size is the best for the application.
The actual results are given in the following tables. In Table 21 we specify the improvement of generational collection versus non-generational collection for all benchmarks and the various card sizes. We used a young generation of 4 megabytes and object marking. To get some impression of what influences the results we also present in Table 22 the percentage of cards that were dirty in the collection, and in Table 23 the area that got scanned due to dirty cards.
In most cases, the size of the card did not make a significant impact on the running time. The biggest impact can be seen with the benchmarks Anagram, 213 javac, and 202 jess. The impact of card sizes on these benchmarks was not the same. For Anagram, the bigger the card size, the better. For 213 javac the smaller the better, and for 202 jess the two extremes (16 and 4096 bytes) are best. We chose to use the smallest
card size (denoted object marking) for the rest of the tests.
Object Mark with 4m young generation
Benchmark       16 byte  32 byte  64 byte  128 byte  256 byte  512 byte  1024 byte  2048 byte  4096 byte
201 compress     0.11     0.16     0.10    -0.41      0.25      0.33      0.40       0.46       0.62
202 jess        -4.25    -4.02    -6.64    -9.17     -7.24     -7.17     -6.96      -7.01      -6.65
228 jack        -7.43    -6.24    -7.01    -6.12     -6.79     -7.16     -6.78      -6.72      -6.50
Anagram         23.61    18.92    24.04    28.59     31.35     33.09     33.41      34.48      35.24
Figure 21: Percentage of improvement for SPECjvm benchmarks for the various card sizes
Object Mark with 4m young generation
Benchmark       16 byte  32 byte  64 byte  128 byte  256 byte  512 byte  1024 byte  2048 byte  4096 byte
202 jess        15.81    30.70    42.85    50.16     53.43     56.65     59.46      59.08      61.18
228 jack        17.66    28.71    32.51    34.47     35.19     38.41     40.01      40.53      44.11
Anagram          1.14     0.78     2.07     1.22      1.22      1.25      1.22       1.23       1.31
Figure 22: Tuning the parameters: card size - percentage of dirty cards from allocated cards
Looking at Tables 22 and 23, we see that there are almost no dirty cards scanned for Anagram, which is one of the properties of Anagram that make generational collection appropriate for it. Note that for Anagram, it is best to have a large card size. This is probably due to the smaller card table, since it does not influence the actual scanning, which is negligible. For 209 db the size of the card has practically no influence on the size of the area scanned for collection. This is probably due to the concentration of the dirty objects as discussed above.
Object Mark with 4m young generation
Benchmark   16 byte  32 byte  64 byte  128 byte  256 byte  512 byte  1024 byte  2048 byte  4096 byte
jess           1237     2421     3426      3888      4191      4387       4499       4626       4780
228 jack       1309     2059     2319      2450      2562      2717       2821       2983       3226
Anagram         107      175      170       168       167       170        165        167        178
Table 23: Tuning the parameters: card size - area scanned for dirty cards
9 Conclusion
We have presented a design for incorporating generations into an on-the-fly garbage collector for Java. To
the best of our knowledge such a combination has not been tried before. Our findings imply that generations
are beneficial in spite of the two "obstacles": the fact that the generations are not segregated in space since
objects are not moved by the collector, and the fact that obtaining shorter pauses for the collection is not
relevant for an on-the-fly collector.
It turns out that for most benchmarks the overall running time was reduced by up to 25%, but there was
one benchmark for which generational collection increased the overall running time on our multiprocessor
by 4%.
The best performing variant of generational collection out of the variants we checked was the one with
the simplest promotion policy (promoting an object to the old generation after surviving one collection), a
fairly large young generation (4 megabytes), and a small card size for the card marking algorithm (16 bytes
per card).
In most collections, fewer pages are touched by the generational collector. Thus, one should especially
consider using generations for an on-the-fly collector when the applications run in limited physical memory.
Acknowledgments
We thank Hans Bohm for helpful remarks. We thank Alain Azagury, Katherine Barabash, Bill Berg, John
Endicott, Michael Factor, Arv Fisher, Naama Kraus, Yossi Levanoni, Ethan Lewis, Eliot Salant, Dafna
Sheinwald, Ron Sivan, Sagi Snir, and Igor Yanover for helpful discussions.
| memory management;programming languages;garbage collection;generational garbage collection
350130 | Stochastic Grammatical Inference of Text Database Structure. | For a document collection in which structural elements are identified with markup, it is often necessary to construct a grammar retrospectively that constrains element nesting and ordering. This has been addressed by others as an application of grammatical inference. We describe an approach based on stochastic grammatical inference which scales more naturally to large data sets and produces models with richer semantics. We adopt an algorithm that produces stochastic finite automata and describe modifications that enable better interactive control of results. Our experimental evaluation uses four document collections with varying structure. | Introduction
1.1. Text Structure
For electronically stored text, there are well known advantages to identifying structural
elements (e.g., chapters, titles, paragraphs, footnotes) with descriptive markup
[4, 5, 12]. Most commonly, markup is in the form of labeled tags interleaved with
the text as in the following example:
<reference>The Art of War: Chapter 3 paragraph 18</reference>
<sentence>If you know the enemy and know yourself, you need not fear the
result of a hundred battles.</sentence>
<sentence>If you know yourself but not the enemy, for every victory gained
you will also suffer a defeat.</sentence>
<sentence>If you know neither the enemy nor yourself, you will succumb in
every battle.</sentence>
Documents marked up in this way can be updated and interpreted much more robustly
than if the structural elements are identified with codes specific to a particular
system or typesetting style. The markup can also be used to support operations
such as searches that require words or phrases to occur within particular elements.
Further advantages can be gained by using a formal grammar to specify how and
where elements can be used. The above example, for instance, conforms to the
following grammar represented as a regular expression:
quotation → reference sentence+
This specifies that a quotation must contain a reference followed by one or more
sentences. Any other use of the elements, e.g. nesting a sentence in a reference
or preceding a reference by a sentence, is disallowed. Thus the main benefit of
having a grammar is that new or modified documents can be automatically verified
for compliance with the specification. Other benefits include support for queries
with structural conditions, and optimization of physical database layout. Another
important purpose of a grammar is to provide users with a general understanding
of the text's organization. Overall, a grammar for a text database serves much the
same purpose as a schema for a traditional database: it gives an overall description
of how the data is organized [18].
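As an illustration of this kind of validation (a sketch of our own, not part of any cited system; the element names follow the quotation example above), the child sequence of an element can be checked against a regular expression over element names:

import re

# Content models as regular expressions over child element names; only the
# quotation rule above comes from the text, the dictionary itself is ours.
CONTENT_MODELS = {
    "quotation": re.compile(r"reference( sentence)+$"),
}

def validate(element_type, child_types):
    # True if the sequence of child element types is allowed for this element.
    model = CONTENT_MODELS.get(element_type)
    if model is None:
        return True                        # unconstrained element type
    return model.match(" ".join(child_types)) is not None

print(validate("quotation", ["reference", "sentence", "sentence"]))   # True
print(validate("quotation", ["sentence", "reference"]))               # False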
The most widely used standard for text element markup and grammar specification
is SGML (Standard Generalized Markup Language) [24], and more recently,
XML [7]. HTML represents a specific application of SGML, i.e., it uses a single
grammar and set of elements (although the grammar is not very restrictive).
1.2. Automated Document Recognition
Unfortunately, many electronic documents exist with elements and grammar only
implicitly defined, typically through layout or typesetting conventions. For exam-
ple, on the World Wide Web, data is laid out according to conventions that must be
inferred to use it as easily as if it were organized in a database [41]. There is therefore
a pressing need to convert the structural information in such documents to a
more explicit form in order to gain the full benefits of their online availability. Completely
manual conversion would be too time consuming for any collection larger
than a few kilobytes. Therefore, automated or interactive methods are needed for
two distinct sub-problems: element recognition and grammar generation.
If the document is represented by procedural or presentational markup, the first
sub-problem is to recognize and mark up individual structural elements based on
layout or typesetting clues. To do this, it is necessary to know or infer the original
conventions used to map element types to their layout. We do not address this
task here, but there are several existing approaches, based on interactive systems
[14, 26], on learning systems [32], and on manual trial and error construction of
finite state transducers [11, 25].
The second sub-problem is to extract implicit structural rules from a collection of
documents and model them with a grammar. This requires that a plausible model
of the original intentions of the authors be reconstructed by extrapolating from
available examples in some appropriate way. This can be considered an application
of grammatical inference - the general problem that deals with constructing
grammars consistent with training data [42].
Note that the two problems may depend on each other. Element recognition
often requires that ambiguities be resolved by considering how elements are used
in context. However, recognition usually considers these usage rules in isolation,
and identifies only the ones that are really needed to recognize an element. A
grammar can be considered a single representation of all usage rules, including
the ones that are not relevant to recognition (which may be the majority). Thus,
even if certain components of the grammar need to be determined manually in the
recognition phase, grammar inference is still useful for automatically combining
these components and filling in the ones that were not needed for recognition.
The grammar inference problem is especially applicable to XML, an increasingly
important variant of SGML in which rich tagging is allowed without requiring a
DTD (grammar). In this case, there is no recognition subproblem and grammar
generation comprises the entire problem.
The benefits of attaching a grammar to documents can be seen from the recent
experience with the database system Lore [31]. Lore manages semistructured
data, where relationships among elements are unconstrained. In place of schemas,
"DataGuides" are automatically generated to capture in a concise form the relationships
that occur [17]. These are then used for traditional schema roles such as
query optimization and aiding in query formulation. Interestingly, the DataGuide
for a tree-structured database is analogous to a context free grammar.
1.3. The Data
We describe our approach to this problem using the computerized version of
the Oxford English Dictionary (OED) [39]. This is a large document with complex
structure [34], containing over sixty types of elements that are heavily nested in over
two million element instances. Figure 1 lists some of the most common elements as
they appear in a typical entry. See the guide by Berg [6] for a complete explanation
of the structural elements used in the OED.
The OED has been converted from its original printed form, through an intermediate
keyed form with typesetting information, to a computerized form with
explicitly tagged structural elements [25]. The text is broken up into over two
hundred ninety thousand top level elements (dictionary entries). The grammar
inference problem can be considered as one of finding a complete grammar to describe
these top level entries, or it can be broken into many subproblems of finding
grammars for lower level elements such as quotation paragraphs or etymologies.
As a choice of data, the OED is a good representative of the general problem
of inferring grammars for text structure. It is at least as complex as any text in
two ways: the amount of structure information is extremely high, and the usage of
structural elements is quite variable.
The large amount of structure information is evidenced by the number of different
types of tags, and also by the high density of tagging as compared to most text
[35]. There are over sixty types of tags and the density of tagging is such that,
HEADWORD GROUP <HG>
  Headword Lemma <HL>
    Lookup Form <LF>.</LF>
    Stressed Form <SF>.</SF>
    Murray Form <MF>.</MF>
  End of Headword Lemma </HL>
  Murray Pronunciation <MPR>.</MPR>
  IPA Pronunciation <IPR>.</IPR>
  Part of Speech <PS>.</PS>
  Homonym Number <HO>.</HO>
END OF HEADWORD GROUP </HG>
VARIANT FORM LIST <VL>
  Variant Date <VD>.</VD>
  Variant Form <VF>.</VF>
END OF VARIANT FORM LIST </VL>
Sense Number <#>
Definition <DEF>.</DEF>
Quotation Paragraph <QP>
  Earliest Quote <EQ><Q>
    Date <D>.</D>
    Author <A>.</A>
    Work <W>.</W>
  End of Earliest Quote </Q></EQ>
  Latest Quote <LQ><Q>.</Q></LQ>
    (Obsolete Entries Only)
End of Quotation Paragraph </QP>
Sub-Entry (Preceded by "Hence") <SE>
  Bold Lemma (+ similar tags <BL>.</BL>
  to those following Headword)
End of Sub-Entry </SE>
END OF SENSE(S) </S0></S1>.</S8>
END OF ENTRY </E>
Figure 1. Common elements from the OED with their corresponding tags in the order they appear
in a typical dictionary entry. (This is reproduced from Berg [5] and represents a human-generated
high level description of the data.) Indentation denotes containment. Therefore, the headword
group, variant form list, etymology, and senses are top level elements of an entry; their children
are indented one step further; etc. Note that more than one sense element can occur at the top
level.
even with all element names abbreviated to three or fewer characters, the amount
of tagging is comparable to the amount of text.
In most text, restrictions on element usage tend to vary between one extreme
where any element can occur anywhere (e.g. italics in HTML), to the other where
elements always occur in exactly the same way (e.g. a newspaper article which
always contains a title, followed by a byline, followed by a body). Neither of
these extremes is interesting for grammar inference. Element usage in the OED,
however, is constrained, yet quite variable. This is mainly a consequence of the
circumstances of its original publication. The compilation spanned 44 years, and
filled over 15,000 pages. There were seven different chief editors over the lifetime of
the project, most of which was completed before 1933 - well before the possibility
of computer support. The structure of the OED is remarkably consistent, but
variable enough to be appropriate for the problem: it can test the method but is
regular enough to make description by a grammar appropriate.
1.4. Text Structure and Grammatical Inference
We now demonstrate the use of marked up text from the OED as training data
for a grammatical inference process. Consider the structure of the two short OED
entries shown in Figure 2. These can be represented by the derivation trees, shown
in
Figure
3, where the nodes are labeled with their corresponding tag names. A
corresponding grammar representation is shown in Figure 4. Each production has
a left hand side corresponding to a non-leaf node, and a right hand side formed
from its immediate children. The number of times a production occurs in the data
is indicated by a preceding frequency. A non-terminal and all of its strings (right
hand sides of its productions) can now be considered a training set for a single sub-
problem. Note that if we generalize each such production to a regular expression
then we have an overall grammar that is context free [28]. This is the standard
choice for modeling text structure.
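The de-facto productions and their frequencies can be collected from tagged text with a single stack-based pass. The following sketch is our own illustration (it assumes a simplified tag syntax and ignores character data and entities):

import re
from collections import Counter, defaultdict

TAG = re.compile(r"<(/?)([^<>]+)>")        # simplified start/end tag syntax

def count_productions(text):
    # Map each element type to a Counter over the child sequences it contains.
    productions = defaultdict(Counter)
    stack = [("ROOT", [])]                 # (element type, children seen so far)
    for closing, name in TAG.findall(text):
        if not closing:                    # start tag: note the child, then descend
            stack[-1][1].append(name)
            stack.append((name, []))
        else:                              # end tag: one production instance observed
            element, children = stack.pop()
            productions[element][tuple(children)] += 1
    return productions

sample = "<E><HG><HL><MF>.</MF></HL><PS>.</PS></HG><ET>.</ET></E>"
for lhs, rhs_counts in count_productions(sample).items():
    for rhs, freq in rhs_counts.items():
        print("%d: %s -> %s" % (freq, lhs, " ".join(rhs) or "(empty)"))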
Three existing grammatical inference approaches for text operate by specifying
rules that are used to rewrite and combine strings in the training set [10, 14, 38].
The following rule, for example, generalizes a grammar by expanding the language
that it accepts:
a^k → a+,   for k greater than or equal to a given value.
Applied to the first entry in Figure 2 with a suitable value of k, such a rule collapses runs of repeated
elements, for example. Other rules have no effect on the language, but simplify a grammar by combining
productions: applied to the two HG productions in Figure 2, for example, such a rule gives a single
production covering both.
<MF>salama&sd.ndrous </MF></HL><b>,</b> <PS>a.</PS></HG>
<LB>rare</LB><su>-1</su>. <ET>f. <L> L.</L> <CF>
<XL>-ous</XL></XR>.</ET> <S4><S6><DEF>Living as it were
in fire; fiery, hot, passionate. </DEF><QP><Q><D>1711</D>
<A>G. Cary</A> <W>Phys. Phyl.</W> 29 <T>My Salamandrous
Spirit.my &Ae.tnous burning Humours.</T></Q></QP></S6>
<W>Expos. Dom. Epist. &. Gosp.</W> Wks. (1629) 76
<T>If a Salamandry spirit should traduce that godly labour,
as the silenced Ministers haue wronged our Communion Booke.
<MF>u&sd.nderstrife </MF></HL><b>.</b> </HG><LB>poet.</LB>
</XR></ET> <S4><S6><DEF>Strife carried on upon the earth.
</DEF><QP><EQ><Q><D>C. 1611</D> <A>Chapman</A> <W>Iliad</W>
xx. 138 <T>We soon shall.send them to heaven, to settle
their abode With equals, flying under&dubh.strifes.</T>
Figure 2. Two marked up dictionary entries.
Work by Ahonen, Mannila and Nikunen [1, 2, 3] uses a more classical grammatical
inference approach of generating a language belonging to a characterizable subclass
of the regular languages. Specifically, they use (k,h)-contextual languages, an
extension that they propose to k-contextual languages.
In contrast to previous approaches to grammar construction for text structure, we
use the frequency information from the training set to generate a stochastic gram-
mar. The non-stochastic approaches mentioned above are inappropriate for larger
document collections with complex structure. This is because there is no way to
effectively deal with the inevitable errors and pathological exceptions in such cases
without considering frequency information. Stochastic grammars are also better
suited for understanding and exploration since they express additional semantics
and provide a natural way of distinguishing the more significant components of
the grammar. The thesis by Ahonen [1] does partially address such concerns. For
example, when applying her method to a Finnish dictionary of similar complexity
to the OED, she first removed all but the most frequent cases from the training set.
Figure 3. The two parse trees (drawn with daVinci).
She also proposes some ad-hoc methods for separating the final result into high and
low frequency components (after having generated the result with a method that
does not consider frequency information).
We assert that it is better to use an inference method that considers frequency
information from the beginning. The stochastic grammatical inference algorithm
that we have chosen to adopt for this application was proposed by Carrasco and
Oncina [9]. Our modifications are motivated by shortcomings in the ability to tune
the algorithm and by our wish to use it interactively for exploration rather than as a
black box to produce a single, final result. Note that understanding techniques are
Figure 4. The de-facto grammar with production frequencies.
needed to support effective interaction. The primary technique used for examples
in this paper is visualizing automata as bubble diagrams. All bubble diagrams
were generated by the graph visualization program daVinci which performs node
placement and edge crossover minimization [15].
The remainder of the paper is as follows: Section 2 describes the underlying
algorithm, Section 3 explains our modifications, Section 4 evaluates the method on
real and synthetic data, and Section 5 concludes.
2. Algorithm
Here we provide an overview of the algorithm by Carrasco and Oncina [9] in enough
detail to explain our modifications. A longer technical report includes more detailed
descriptions of both the modified and unmodified algorithms [43]. Of the many possible
stochastic grammatical inference algorithms, the particular one used here was
chosen for several reasons. First of all, it is similar to the method of Ahonen et al. in
that it uses a finite automaton state merging paradigm. Since that work represents
the most in-depth examination of grammar inference for text structure to date, it is
reasonable to use a similar approach. In fact, many of their results that go beyond
just the application of the algorithm (such as rewriting the automaton into a gram-
mar consisting of productions) can be adapted to the outputs of our algorithm.
The second reason for choosing this algorithm was that the basic generalization
operation of merging states is guided by a justifiable statistical test rather than an
arbitrary heuristic. The Bayesian model merging approach of Stolcke and Omohundro
[40] or a probability estimation approach based on the forward-backward
algorithm [23, 30] were other candidates satisfying this characteristic, but the final
choice was judged the simplest and most elegant.
2.1. ALERGIA
The algorithm ALERGIA by Carrasco and Oncina [9] is itself an adaptation of a
non-stochastic method by Oncina and García [33].
The algorithm produces stochastic finite automata (SFAs). These grammar constructs
can be informally explained as finite automata that have probabilities associated
with their transitions. The probability assigned to a string is, therefore,
the product of the probabilities of all transitions followed in accepting it. Note
that every state also has an associated termination probability, and that this is
included in the product. Any state with a non-zero termination probability can be
considered a final state. See the book by Fu [16] for a more formal and complete
description of SFAs and their properties.
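For concreteness, an SFA annotated with the frequencies used below can be held in plain dictionaries, and the probability of a string is then the product of the transition probabilities followed by the termination probability. The representation and the state values in this sketch are our own:

# Each state records its incoming frequency n, termination frequency t, and
# outgoing transitions as {symbol: (destination state, frequency)}.
sfa = {
    0: {"n": 100, "t": 20, "out": {"Q": (1, 80)}},
    1: {"n": 80,  "t": 40, "out": {"Q": (1, 30), "LQ": (2, 10)}},
    2: {"n": 10,  "t": 10, "out": {}},
}

def string_probability(sfa, symbols, start=0):
    # Product of the transition probabilities, times the final termination probability.
    prob, state = 1.0, start
    for symbol in symbols:
        if symbol not in sfa[state]["out"]:
            return 0.0                     # the string is not accepted
        dest, freq = sfa[state]["out"][symbol]
        prob *= freq / sfa[state]["n"]
        state = dest
    return prob * sfa[state]["t"] / sfa[state]["n"]

print(string_probability(sfa, ["Q", "Q", "LQ"]))   # 0.8 * 0.375 * 0.125 * 1.0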
The inference paradigm used by ALERGIA is a common one: first build a de-facto
model that accepts exactly the language of the training set; then generalize.
Generalization for finite automata is done by merging states. This is similar to the
state merging operation used in the algorithm for minimizing a non-stochastic finite
automaton [21]. The difference is that merges that change the accepted language
are allowed.
Consider, as an example, the productions for ET from the training data of Figure 4.
These can be represented by the prefix tree in Figure 5. The primitive operation
of merging two states replaces them with a single state, labeled by convention with
the smaller of the two state identifiers. All incoming and outgoing transitions are
combined and the frequencies associated with any transitions they have in common
are added, as are the incoming and termination frequencies. Figure 6 demonstrates
the effect of merging states 2 and 4, then 2 and 5.
Note that if two states have outgoing transitions with the same symbol but different
destinations, these two destinations are also merged to avoid indeterminism.
Thus, merging two non-leaf states can recursively require merging of a long string
of their descendants.
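A minimal sketch of the merge operation over that same dictionary representation (our own naming; as in ALERGIA, it assumes the absorbed node roots a subtree of the prefix tree, so the recursion terminates):

def fold(sfa, keep, absorb):
    # Fold the prefix-tree subtree rooted at `absorb` into `keep`.
    sfa[keep]["n"] += sfa[absorb]["n"]
    sfa[keep]["t"] += sfa[absorb]["t"]
    for symbol, (dest, freq) in sfa[absorb]["out"].items():
        if symbol in sfa[keep]["out"]:
            kdest, kfreq = sfa[keep]["out"][symbol]
            sfa[keep]["out"][symbol] = (kdest, kfreq + freq)
            fold(sfa, kdest, dest)         # recurse so the result stays deterministic
        else:
            sfa[keep]["out"][symbol] = (dest, freq)
    del sfa[absorb]

def merge(sfa, keep, absorb):
    # Merge state `absorb` into state `keep`: re-point incoming transitions,
    # then combine frequencies and descendants.
    for state in sfa.values():
        for symbol, (dest, freq) in state["out"].items():
            if dest == absorb:
                state["out"][symbol] = (keep, freq)
    fold(sfa, keep, absorb)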
An algorithm based on this paradigm must define two things: how to test whether
two given states should be merged, and the order in which pairs of states are tested.
In ALERGIA, two states are merged if they satisfy the following equivalence
criterion: for every symbol in the alphabet, the associated transition probabilities
from the states are equal; the termination probabilities for the states are equal;
and, the destination states of the two transitions for each symbol are equivalent
according to a recursive application of the same criterion.
Figure 5. A de-facto SFA (prefix tree) for the ET element. States are labeled ID[N,T] where ID is
the state identifier, N is the incoming frequency, and T is the termination frequency. Transitions
are labeled S[F], where S is the symbol, and F is the transition frequency. Final states with
non-zero termination frequencies are marked with double rings.
Figure 6. Figure 5 with states 2, 4, and 5 merged.
Whether two transitions' probabilities are equal is decided with a statistical test of the observed
frequencies. Let the null hypothesis $H_0$ be that they are equal and the alternative $H_a$ be that they
differ. Let $n_1, n_2$ be the number of strings that arrive at the two states and $f_1, f_2$ be the number of
strings that follow the transitions in question (or terminate). Then, using the Hoeffding bound [19] on
binomial distributions, the p-value is less than a chosen significance level α if the test statistic

$\left| \frac{f_1}{n_1} - \frac{f_2}{n_2} \right|$

is greater than the expression

$\sqrt{\frac{1}{2}\ln\frac{2}{\alpha}} \left( \frac{1}{\sqrt{n_1}} + \frac{1}{\sqrt{n_2}} \right)$.

In this case, reject the null hypothesis and assume the two probabilities are different, otherwise assume
they are the same. This test ensures that the chosen α represents a bound on the probability of incorrectly
rejecting the null hypothesis (i.e. incorrectly leaving two equivalent nodes separate). Thus, reducing α
makes merges more likely and results in smaller models.
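Written against the representation introduced earlier, the test and the recursive equivalence check might look as follows (a sketch; the bound is the Hoeffding expression just given):

import math

def different(f1, n1, f2, n2, alpha):
    # Hoeffding-style test: True if the two observed proportions plausibly differ.
    bound = math.sqrt(0.5 * math.log(2.0 / alpha)) * (1.0 / math.sqrt(n1) + 1.0 / math.sqrt(n2))
    return abs(f1 / n1 - f2 / n2) > bound

def compatible(sfa, q1, q2, alpha, alphabet):
    # Recursive equivalence check of two states; q2 is assumed to root a
    # prefix-tree subtree, so the recursion terminates.
    s1, s2 = sfa[q1], sfa[q2]
    if different(s1["t"], s1["n"], s2["t"], s2["n"], alpha):
        return False                       # termination probabilities differ
    for a in alphabet:
        d1, f1 = s1["out"].get(a, (None, 0))
        d2, f2 = s2["out"].get(a, (None, 0))
        if different(f1, s1["n"], f2, s2["n"], alpha):
            return False                   # transition probabilities differ
        if d1 is not None and d2 is not None and not compatible(sfa, d1, d2, alpha, alphabet):
            return False                   # destinations must also be equivalent
    return True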
The order in which pairs of nodes are compared is defined as follows: nodes are
numbered in a breadth-first order with all nodes at a given depth ordered lexically
based on the string prefixes used to reach the node. Figure 5 is an example. Pairs
of nodes (q_i, q_j) are tested by varying j from 1 to the number of nodes, and i from 0
to j − 1. For the non-stochastic version of the algorithm, this ordering is necessary
to prove identification in the limit [33]. Its significance in the stochastic version is
unclear.
Note that the worst case time complexity of the algorithm is O(n^3). This occurs
for an input where no merges take place, thus requiring that all n(n − 1)/2 pairs
of nodes be compared, and furthermore, where the average recursive comparison
continues to O(n) depth. In practice, the expected running time is likely to be much
less than this. Carrasco and Oncina report that, experimentally, the time increases
linearly with the number of strings in the training set, as artificially generated
by a fixed size model. We have observed a quadratic increase with the size of
model. However, since this size is normally chosen, through parameter adjustment,
to be small enough for understanding, running time has not been a problem in our
experience.
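Putting the pieces together, the overall control loop is simply the ordered scan of state pairs described above; this sketch builds on the helpers from the previous sketches and assumes the states of the initial prefix tree are numbered as described:

def alergia(sfa, alpha, alphabet):
    # Scan pairs (q_i, q_j) in the prescribed order and merge compatible ones.
    states = sorted(sfa)                   # numbering of the initial prefix tree
    for j in states:
        if j not in sfa:
            continue                       # q_j was already absorbed earlier
        for i in states:
            if i >= j:
                break
            if i in sfa and compatible(sfa, i, j, alpha, alphabet):
                merge(sfa, i, j)           # q_j disappears into q_i
                break
    return sfa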
3. Modifications
In this section we present our modifications to the algorithm, using the PQP
(pseudo-quotation paragraph) element from the OED as an example. Of the
145,289 instances of that element in the entire dictionary, there are 90 unique arrangements
for subelements; thus there are 90 unique strings that appear as right
sides of productions in the de-facto grammar. Those are shown in Figure 7 along
with their occurrence frequencies. The 4 elements that occur in a PQP are the EQ
(earliest quotation), Q (quotation), SQ (subsidiary quotation), and LQ (latest
quotation). The usage of the elements (which is known from the dictionary but
can also be deduced from the examples) is as follows: SQ can occur any number
of times and in any position; Q can occur any number of times; EQ can occur at
most once and must occur before the first Q; LQ can occur at most once and must
occur after the last Q. This data is very simple and intended only to illustrate the
modified learning algorithm. We give more complex examples in Section 4.2.
3.1. Separation of Low Frequency Components
The original algorithm assumes that every node has a frequency high enough to
be statistically compared. This is not typically valid. Nodes with too low a frequency
always default to the null hypothesis of equivalence, resulting in inappropriate
merges. The characteristic result is that many low frequency nodes merge with
either the root or another low index node (since the comparisons are made in order
of index). This gives a structure with many inappropriate transitions pointing back
to these low index nodes. Figure 8 shows an example result for the PQP data where
transitions occur from several parts of the model to nodes 0 and 1.
Note that this form of inappropriate merging is not a problem that can be remedied
just by tuning the single parameter α. As is usual in hypothesis tests, α is
a bound on the probability of a false reject of the null hypothesis (i.e. failing to
merge two nodes that are in fact equivalent). The complementary bound β on the
probability of a false accept is unconstrained by the test of ALERGIA, and can be
arbitrarily high for very low frequencies.
The problem can be seen as closely related to the small disjuncts problem discussed
by Holte et al. [20] for rule based classification algorithms: essentially, rules
covering only a few cases of the training data perform relatively badly since they
have inadequate statistical support. Holte et al. give three general approaches for
improving the situation: 1) use exact significance tests in the learning algorithm,
2) test both significance and error rate of every disjunct in evaluating a result, and
3) whenever possible, use errors of omission instead of default classifications. Note
that since our training set includes positive examples only, the second point does
not apply. The first point, however corresponds to one of our modifications, and
the third agrees with our own conclusions.
Experiments with several different treatments for low frequency nodes led us to
the conclusion that no single approach would always produce an appropriate result
(certainly not the original action of the algorithm - automatically merging on the
first comparison). This is understandable given that the frequencies in question are
statistically insignificant. Therefore, we chose to first incorporate a significance test
into the algorithm to separate out the low frequency nodes automatically, and then
later decide on alternative treatments for these nodes (discussed in Section 3.4).
The following test is the standard one for checking equivalence of two binomial
proportions while considering significance (see [13], for example). Assume as before
21: 3: Q,Q,SQ,Q
524: EQ 4: Q,Q,SQ,Q,Q
5: EQ,LQ 3: Q,Q,SQ,Q,Q,Q
294: EQ,Q 2: Q,Q,SQ,Q,Q,Q,Q
1: EQ,Q,LQ 58: Q,SQ
30: EQ,Q,Q,Q,Q
1: EQ,Q,Q,Q,Q,LQ 9: Q,SQ,Q,Q,Q
15: EQ,Q,Q,Q,Q,Q 2: Q,SQ,Q,Q,Q,Q
8: EQ,Q,Q,Q,Q,Q,Q 3: Q,SQ,Q,Q,Q,Q,Q
5: EQ,Q,Q,Q,Q,Q,Q,Q 2: Q,SQ,Q,Q,Q,Q,Q,Q
3: EQ,Q,Q,Q,Q,Q,Q,Q,Q 1: Q,SQ,Q,Q,Q,Q,Q,Q,Q
3: EQ,Q,Q,Q,Q,Q,Q,Q,Q,Q 1: Q,SQ,Q,Q,Q,Q,Q,Q,Q,Q,Q
1: EQ,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q 2: Q,SQ,SQ
2: EQ,SQ 22: SQ
28: LQ 2: SQ,EQ
20: Q,LQ 5: SQ,EQ,Q,Q
3: SQ,EQ,Q,Q,Q
5: Q,Q,Q,LQ 174: SQ,Q,Q
6335: Q,Q,Q,Q 177: SQ,Q,Q,Q
1: Q,Q,Q,Q,LQ 102: SQ,Q,Q,Q,Q
579: Q,Q,Q,Q,Q,Q,Q 9: SQ,Q,Q,Q,Q,Q,Q,Q
293: Q,Q,Q,Q,Q,Q,Q,Q 8: SQ,Q,Q,Q,Q,Q,Q,Q,Q
124: Q,Q,Q,Q,Q,Q,Q,Q,Q 2: SQ,Q,Q,Q,Q,Q,Q,Q,Q,Q
93: Q,Q,Q,Q,Q,Q,Q,Q,Q,Q 2: SQ,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q
2: SQ,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q
4: Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q 2: SQ,Q,Q,SQ
2: Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q
1: Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q,Q
2: Q,Q,Q,Q,Q,SQ
1: Q,Q,Q,Q,SQ,Q,Q,Q,Q,Q
4: Q,Q,Q,SQ
2: Q,Q,Q,SQ,Q,Q
1: Q,Q,Q,SQ,Q,Q,Q
1: Q,Q,Q,SQ,Q,Q,Q,Q,Q
1: Q,Q,Q,SQ,Q,Q,Q,Q,Q,Q
17: Q,Q,SQ
Figure 7. The PQP example strings.
Figure 8. A result from the unmodified algorithm with transitions characteristic of inappropriate
low frequency merges. Note that termination and transition frequencies f_i are shown converted
to percentages representing f_i/n_i in this and subsequent figures to simplify comparisons.
that true probabilities $p_1$ and $p_2$ exist and that $f_1/n_1$, $f_2/n_2$ serve as the estimates. Sample sizes
are required to satisfy the following relationship with α and β (which bound the probabilities of a false
reject or a false accept of the null hypothesis):

$(z_\alpha + z_\beta)\,\frac{1}{2}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}} \le \epsilon$

where $z_x$ denotes the t value at which the cumulative probability of a standard normal distribution is
equal to $1 - x$. The value ε is an additional parameter required to bound β (representing the minimum
true difference between $p_1$ and $p_2$). The null hypothesis is rejected iff

$\left| \frac{f_1}{n_1} - \frac{f_2}{n_2} \right| > z_\alpha \sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$

where $\hat{p}_i = f_i/n_i$.
We incorporate this test into the algorithm in the following way. Associate a
boolean flag with each node, initially false; and, set the flag to true the first time
a node is involved in a comparison with another node that satisfies the required
relationship between α, β, and sample sizes. Nodes that still have false flags when
the algorithm terminates are classified as low frequency components. An example
result with the PQP data is shown in Figure 9. Low frequency nodes in the graph
are depicted as rectangles.
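One way to implement the flagging is sketched below; the specific sample-size requirement (a worst-case variance form) is our reconstruction and may differ from the exact test used by the authors:

import math
from statistics import NormalDist

def sample_sizes_adequate(n1, n2, alpha, beta, eps):
    # True if n1, n2 are large enough to bound both error probabilities when
    # the true difference in proportions is at least eps (worst-case variance;
    # this particular form is an assumption, see the text).
    z = NormalDist().inv_cdf
    worst_sd = 0.5 * math.sqrt(1.0 / n1 + 1.0 / n2)
    return (z(1 - alpha) + z(1 - beta)) * worst_sd <= eps

def low_frequency_nodes(sfa, comparisons, alpha, beta, eps):
    # `comparisons` is the list of state pairs the algorithm actually compared;
    # a node's flag becomes True the first time it takes part in an adequate one.
    flagged = set()
    for q1, q2 in comparisons:
        if sample_sizes_adequate(sfa[q1]["n"], sfa[q2]["n"], alpha, beta, eps):
            flagged.update((q1, q2))
    return [q for q in sfa if q not in flagged]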
3.2. Control over the Level of Generalization
An important interactive operation is control over the level of generalization (how
much the finite language represented by the training set is expanded). One possible
approach is to vary α and β. Reducing α increases generalization: it restricts
the possibility of incorrectly leaving nodes separate, and therefore makes merges
more likely. Increasing β also increases generalization: it increases the allowable
possibility of incorrectly merging nodes. Note that these two are not completely
equivalent since α and β are bounds on the probabilities of their respective errors.
Unfortunately, it is not appropriate to control generalization in this way since
α and β directly determine which components of the data are treated as too low
frequency to be significant (i.e. the parts that will be merged by default using
the original algorithm, or classified as low frequency according to the test in Section
3.1). Therefore, unless we are in a position to arbitrarily vary the amount
of available data according to our choice of parameters, another modification is
needed.
The goal is to allow independent control over the division into low and high frequency
components, and over the level of generalization. This is done by changing
Figure 9. The PQP inference result; low frequency nodes are shown as rectangles.
the hypothesis for the statistical test. Rather than testing whether two observed proportions can
plausibly be equal, test whether they can plausibly differ by less than some parameter γ:

$H_0: |p_1 - p_2| \le \gamma$ versus $H_a: |p_1 - p_2| > \gamma$.

The modified test is as follows: let $\pi_1$ be $f_1/n_1$, and $\pi_2$ be $f_2/n_2$. Then, if

$|\pi_1 - \pi_2| - \gamma > z_\alpha \sqrt{\frac{\pi_1(1-\pi_1)}{n_1} + \frac{\pi_2(1-\pi_2)}{n_2}}$,

reject $H_0$. A larger value of γ results in a null hypothesis that is easier to satisfy, therefore
producing more merges and an increase in generalization. As an example, consider
the 3 results in Figures 10, 11, and 12 with constant α and β values, but varying γ
values (and low frequency components clipped out for the moment). Higher γ values
result in fewer nodes, larger languages, and less precise probability predictions.
3.3. Choosing Algorithm Parameters
The modified algorithm has the following parameters:
• γ is the maximum difference in true proportions for which the algorithm should
merge two states.
• α is the probability bound on the chance of making a type I error (incorrectly
concluding that the two proportions differ by more than γ).
• β is the probability bound on the chance of making a type II error (incorrectly
concluding that the two proportions differ by less than γ) when the true
difference in the proportions is at least γ + ε (ε being the fourth parameter).
We next describe the effects of changing these parameters and also explain that
useful interaction does not necessarily require separate control over all four.
Choosing fl controls the amount of generalization. Setting it to 0 results in very
few states being merged; setting it to 1 always results in an output with a single state
(effectively a 0-context average of the frequency of occurrence of every symbol).
Changes to ff and fi also affect the level of generalization. Their main effect of
interest, however, is that they define the cutoff between high and low frequency
components. Increasing either one decreases the number of nodes classified as low
frequency. For simplified interaction, it is possible to always have both equal and
frequency. In practice the two can be kept equal, so that we adjust them together as a single parameter.
This does not seriously reduce the useful level of control over the algorithm's behavior.

Figures 10, 11, and 12. The PQP inference results with constant α and β values but varying γ (low
frequency components are clipped out).
The ε parameter determines the difference to which the β probability applies.
This must be specified somewhere but is not an especially useful value over which
to have control. It should therefore be fixed, or tied in some way to the size of the
input and the value of γ (we choose to fix it).
It can be seen that control is only really needed over two major
aspects of the inference process. Choosing a combined value for α and β sets
the cutoff point between the significant data and the low frequency components.
Choosing γ controls the amount of generalization.
As an example of parameter adjustment, consider the inference result from Figure
9. Examination reveals two possible changes. The first is based on the observation
that nodes 1 and 3 are very similar: they both accept an LQ or SQ or any
number of Q's, and their transition and termination probabilities all differ by less
than ten percent. Unless these slight differences are deemed significant enough to
represent in the model, it is better to merge the two nodes. This can be done by
increasing γ to 0.1, thus allowing nodes to test equal if their true probabilities differ
by up to ten percent.
The second adjustment affects nodes 4, 12 and 21. These express the fact that
strings starting with an SQ are much more likely to end with more than two Q's.
Figure 13. The PQP inference result after the two parameter adjustments described in the text.
This rule only applies, however, to about five hundred of the over one hundred
forty-five thousand PQPs in the dictionary. If we choose to simplify the model
at the expense of a small amount of inaccuracy for these cases, we can reduce α
and β to reclassify these nodes as low frequency. A bisection search over values of α
and β between 0 and 0.025 reveals the values at which this is accomplished.
The result after application of the two adjustments described above is shown in
Figure
13.
3.4. What to do with the Low Frequency Components
There are three possible ways of treating low frequency components: assume the
most specific possible model by leaving the components separate (this is the same as
leaving β fixed and allowing α to grow arbitrarily high); merge all the low frequency
components into a single garbage state (an approach adopted in [36]); or, merge
low frequency nodes into other parts of the automaton. Many methods can be
invented for the last approach. We have observed that, in general, a single method
does not produce appropriate results for all components of a given model. We
therefore propose a tentative merging strategy. First an ordered list of heuristics is
defined. Then all low frequency components are merged into positions in the model
determined by the first heuristic in the list. If the user identifies a problem with
a particular resulting tentative transition then the subtree can be re-merged into a
position determined by the next heuristic in the list.
Heuristics can be designed based on various grammatical inference or learning
approaches. Note that the problem of choosing a place to merge a low frequency
component differs from the general problem of stochastic grammatical inference
in two ways: 1) the rest of the high frequency model is available as a source of
information, and 2) the frequency information has been classified as insignificant.
The second point implies that, if we choose to consider frequency information, we
may have to use special techniques to compensate. These could include a Laplace
approximation of the probability or a Bayesian approach using a prior probability.
Evidence measures developed for recent work in DFA (rather than SFA) learning
might also be applicable [29].
We mention two heuristics that do not use frequency information but that we
have found to work well. Both guarantee that the model is still able to parse all
strings in the training set. The first is to merge every low frequency node with its
immediate parent. The result is that any terminals occurring in a low frequency
subtree are allowed to occur any number of times and in any order starting at the
nearest high-frequency ancestor. The second heuristic is to locate a position in the
high frequency model from which an entire low-frequency subtree can be parsed.
This subtree can then be merged into the rest of the model by replacing it with a
single transition to the identified position. If more than one possible position exists,
these can be stepped through before proceeding to the next heuristic in the list.
As an example of the application of the second heuristic reconsider Figure 13.
Merging every low frequency tree in that graph into the first (lowest index) node
that can parse it gives the result in Figure 14. Tentative transitions in that diagram
Figure 14. Figure 13 with low frequency components merged into other parts of the graph.
are marked with dashed lines. The tentative transition from node 1 to 0 on input
of SQ creates a cycle that allows more than one EQ to occur. This violates proper
usage of that element as outlined in Section 3. Re-pointing the transition to node 1,
an alternate destination that parses the low frequency subtree, gives an acceptable
result for the PQP element.
4. Evaluation
In this section we compare the modified algorithm (henceforth referred to as mod-
ALERGIA) with ALERGIA. First, using data drawn from four different texts, we
compare performance with automatic searches. Then we use two specific examples
to illustrate some other points of comparison.
4.1. Batch Experiments
We use the following four texts for the automatic search experiments:
• OED - the Oxford English Dictionary [39]. This is over 500 Mb and exhibits
complex, sometimes irregular, structure.
• CPS - a pharmaceutical database which is an electronic version of a publication
that describes all drugs available in Canada [27]. It exhibits a
mix of simple and complex structure.
• OALD - the Oxford Advanced Learner's Dictionary [22]. This is 17 Mb and
exhibits complex structure that is more regular than the OED, having been
designed from the beginning for computerization.
• HOWTO - the SGML versions of HOWTO documents for the Linux operating
system. This is 10 Mb and exhibits relatively simple structure.
Terminal structural elements and non-terminals with very little sub-structure are
not worth performing inference on. As an arbitrary cutoff, we discard those that
give de-facto automatons with fewer than 10 states. This leaves 24 elements in
OED, 24 in CPS, 23 in OALD, and 14 in HOWTO for a total of 85 data sets. The
procedure for each data set is as follows:
1. Randomly split the strings into equal size training, validation, and test sets.
2. Let x be the size (number of states) of the de-facto automaton for the training
set. Generate a collection of models of various sizes using two methods:
(A) Test x parameter values using ALERGIA. (This is enough to find most of
the possible models.)
(B) Test x/2 parameter values using mod-ALERGIA with settings chosen so that it
behaves the same as ALERGIA. Test the remaining x/2 parameter values,
varying α, β, and γ. Merge low frequency components with their immediate
parents.
3. Assess goodness of a model using metric values calculated against the validation
set (see below). For each unique model size (number of states), keep the best
model generated by each method.
4. Recalculate the metrics for the selected models against the test set and compare
the results of the two methods.
Two different metrics are used. The first is cross entropy (also called Kullback-Leibler
divergence) which quantifies the probabilistic fit of one model to another.
This measure has previously been used to evaluate stochastic grammatical inference
methods [37]. Given two probabilistic language models, $M_1$ and $M_2$, the cross
entropy is defined as:

$\sum_{x \in L} P_{M_1}(x) \log \frac{P_{M_1}(x)}{P_{M_2}(x)}$

where $P_{M_1}(x)$ is the probability in the set and $P_{M_2}(x)$ is the probability assigned
by the model. We estimate this with L first equal to the validation set and later
the test set.
Some strings in the validation and test sets are not recognized by the model.
We find the total probability of these and call it the error. (It corresponds to the
probability of rejecting a valid string if the SFA is stripped of its probabilities and
Figure 15. Average percentage of each size interval covered (columns: model size, both, A, B). For
example, an entry of 54.3 percent for interval [2,16] means that, on average, 54.3 percent of the model
sizes in that interval were covered. The "both" column gives models generated by both methods. The
"A" and "B" columns give models generated just by the ALERGIA and mod-ALERGIA searches.
used as a DFA.) We use this as the second metric. Also, rather than using an
infinite log value in these cases to give an infinite cross entropy, we use the largest
log value observed in the data set. This gives a finite metric in which a missing
string adds a penalty equal to the worst present string.
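Both metrics are straightforward to compute once the model's string probabilities are available; the sketch below assumes the Kullback-Leibler form given above and the capping rule just described (model_prob is any function returning the probability the learned SFA assigns to a string, 0.0 if rejected; evaluation strings are symbol tuples):

import math
from collections import Counter

def metrics(eval_strings, model_prob):
    # Returns (capped cross entropy, error) for an evaluation set of strings.
    counts = Counter(eval_strings)
    total = len(eval_strings)
    logs, error = {}, 0.0
    for s, c in counts.items():
        p_set, p_model = c / total, model_prob(s)
        if p_model > 0.0:
            logs[s] = math.log(p_set / p_model)
        else:
            error += p_set                 # total probability of rejected strings
    worst = max(logs.values(), default=0.0)
    cross_entropy = sum((counts[s] / total) * logs.get(s, worst) for s in counts)
    return cross_entropy, error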
Values of α, β, and γ are chosen by repeatedly scanning the unit interval with
successively smaller increments until the chosen number of points have been tested.
So, for example, the first pass starts at 0.5 and increments by 1.0 thus testing only
one point; the second starts at 0.25 and increments by 0.5 testing two; and so
on. Values for γ are used directly, while α and β values are squared first to allow
probability differences that are more evenly spaced (recall the equivalence test used
for unmodified ALERGIA.)
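The scanning schedule is easy to reproduce as a generator (a sketch; the squaring of the α and β samples follows the description above):

def scan_unit_interval(num_points):
    # Yields 0.5; then 0.25, 0.75; then 0.125, 0.375, 0.625, 0.875; and so on.
    produced, step = 0, 1.0
    while produced < num_points:
        value = step / 2.0
        while value < 1.0 and produced < num_points:
            yield value
            produced += 1
            value += step
        step /= 2.0

gamma_values = list(scan_unit_interval(7))                  # used directly
alpha_values = [v * v for v in scan_unit_interval(7)]       # squared, as for alpha and beta
print(gamma_values)     # [0.5, 0.25, 0.75, 0.125, 0.375, 0.625, 0.875]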
Since different ranges of model sizes are useful for different purposes, we break
results up into the following size intervals: [2,16], [17,32], [33,64], and [65,128] (size
1 is excluded because all size 1 models for a given element are the same). We also try
to ensure that each search method tests an approximately equal number of points
in each interval. This is done by recording the parameter intervals corresponding to
the model size intervals and avoiding parameters in an interval once enough points
have been tested.
On a SUN supersparc, the runs averaged 11 minutes per element over the 85
elements. A total of 2872 models of different sizes were generated. Of these, 2263
were found by both searches, 27 were found only by the ALERGIA search, and 582
were found only by the mod-ALERGIA search. Figure 15 shows the average
percentage of each size interval covered. In general, the fact that mod-ALERGIA
finds a significant number of models that ALERGIA does not makes it more useful
when searching for a model of a particular size (we give an example in the next
section).
Figure 16 compares cross entropy and error for the 2263 model sizes found by
both methods across the 85 data sets. Improvements diminish for larger models.
This is because larger models keep more high frequency nodes from the de-facto
automaton. These high frequency nodes tend to be statistically significant and
are therefore treated essentially the same by both algorithms, leaving fewer low
frequency components to treat differently. Put another way, there are fewer different
ways for mod-ALERGIA to generate a model with many states, and thus less
differentiation from ALERGIA for such a model.

Figure 16. Average percent improvements of mod-ALERGIA over ALERGIA in cross entropy and
error for models in different size intervals (columns: model size, cross entropy, error). The bracketed
numbers are p-values for a one sided t test that the difference is greater than 0.

avg. node freq.   cross entropy   error
12-20             4.1 (5e-15)     4.8 (4e-17)

Figure 17. Average percent improvement of mod-ALERGIA over ALERGIA in cross entropy
and error for models generated from de-facto automata with various average node frequencies.
Intervals were chosen to break the data into four equal size sets. The bracketed numbers are
p-values for a one sided t test that the difference is greater than 0.
In addition to target model size, the size and variability of the training set affect
the relative performance of the two algorithms. We quantify this amount by calculating
the average node frequency in the de-facto automaton (which is the same as
the sum of all string lengths in the training set divided by the number of states in
the de-facto automaton). Sorting the 2263 data points according to this value and
breaking them into four equal size sets gives the results in Figure 17. As expected,
the modified algorithm does better when less frequency information is available.
These experiments demonstrate that mod-ALERGIA can be used to produce
more models in any given size range; and, even with a completely automatic
procedure and simple default treatment of low frequency components, it can be
used to find probabilistically better models. The advantage of mod-ALERGIA is
greatest for small models and low frequency training sets.
Figure 18. An inference result for the Entry element from the OED.
4.2. Particular Examples
This section gives two examples that demonstrate some other advantages of the
stochastic inference approach in general, and of the modified algorithm in particular.
For the first example we use the Entry element from the OED and create an overgeneralized
model to compare with the prototypical entry presented in Figure 1.
ALERGIA does not produce any models for this element in the smaller size range of interest, even
using a bisection search that narrows the search interval all the way down to adjacent numbers in a
double precision floating point representation.
In contrast, the mod-ALERGIA search described in the previous section generates
a model for every size in this interval. After inspection of a few of these smaller
models, we found the seven node graph shown in Figure 18 as most similar to the
prototypical entry. This model highlights several interesting characteristics. One
Figure 19. The first three nodes of the Monograph element from the CPS data. All clipped
components indicated with scissors are low frequency components.
of the paths (HG VL ET S4*) does correspond to the prototypical entry, but note
some of the semantics that are not present in the non-stochastic description:
• The variant form list (VL) is optional and is actually omitted more often than
not.
• The etymology (ET) can also be omitted, skipping directly to the senses. Most
often this does not happen.
• An element not mentioned in Figure 1, the status (ST), frequently precedes the
headword group (HG) and its presence significantly increases the chance that
the ET and VL will be bypassed. If they are not bypassed, however, a label
(LB) element is normally inserted between them and the HG.
• Any number of LBs can also occur in an entry without an ST. Usually, however,
not many occur (the loop probability is only 0.262).
Properties such as these can be extremely useful when it comes to exploring and
understanding the data, even if they are disregarded for more standard grammar
applications such as validating a document instance. Furthermore, the stochastic
properties of the grammar can be used to exercise editorial control when new entries
are introduced into the dictionary: patterns that rarely occur can first trigger a
message to the operator to double-check for correctness; if asserted to be what was
intended they can be entered, yet flagged for subsequent review and approval by
higher-level editorial authorities.
In our second example we use the Monograph element from the CPS data to
again comment on the advantage of separating low frequency components (we have
already done this for the PQP example). Figure 19 shows the first three high
frequency nodes of a model for this data. Outgoing low frequency components are
shown clipped. To get a final detailed model, we need to expand and examine the
subtrees of these low frequency components one at a time. For each subtree we
have the option of interactively stepping through positions where it can merge (for
instance the immediate parent, and all other nodes from which it can be parsed),
deciding to change the inference parameters to reclassify part of it as high frequency,
or deciding that it represents an error in the data. This type of interactive correction
is not possible with unmodified ALERGIA.
5. Conclusions and Future Work
This study was concerned with the application of grammatical inference to text
structure, a subject that has been addressed before [1, 2, 3, 10, 14, 38]. Grammar
generation can be an important tool for maintaining a document database system.
It is useful for creating grammars for standard text database purposes, but also
allows a more flexible view. Rather than having a fixed grammar that describes
all possible forms of the data, the grammar is fluid and evolves. Not only does the
grammar change as new data is added, but many different forms of the grammar
can be generated at any time, an over-generalized high-level view or a description
for a subset of the data, for example. Thus we can generate grammars as much to
summarize and understand the organization of the text as to serve in traditional
capacities like a schema.
Our approach adds two things to previous approaches: extension to stochastic
grammatical inference, and an algorithm with greater freedom for interactive tun-
ing. The advantages of changing to stochastic inference are as follows:
• Stochastic inference is more effective since it uses frequency information as part
of the inference process. This is true for any learning method.
• Stochastic models have richer semantics and are therefore easier to interpret
and interactively adjust. This was demonstrated with the Entry example in Section
4.2. Note that stochastic models can easily be converted to non-stochastic
ones by dropping the probability information. Therefore, we are free to use the
algorithm just as a more effective method for learning non-stochastic models.
• A stochastic inference framework allows parameterization that can be used to
produce different models for a single data set. This flexibility can be used
to search for a single best model, or to explore several models at different
generalization levels. Existing non-stochastic approaches to this problem all
work as black boxes producing a single, un-tunable result.
The additional tunability of the modified algorithm was shown to be useful in
two ways: through an experimental evaluation using four different texts, and through
two examples using specific elements from those texts.
Possibilities exist for further improvement of the algorithm. For example, the
state merging paradigm for learning finite automata has seen some development
since ALERGIA was first published. In particular, a control strategy that compares
and merges nodes in a non-fixed order has been developed [29]. This gives
more freedom to merge nodes in order of the evidence supporting the merges. Incorporating
it into our algorithm would be straightforward, especially since the result of a
statistical test is easily converted into an evidence measure. Another improvement
would be to develop evidence measures to assist
the user in choosing merge destinations for low frequency components. Possible
starting points were mentioned in Section 3.4.
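To illustrate why this conversion is trivial, the sketch below turns a Hoeffding-style compatibility test of the kind used by ALERGIA-like algorithms into a signed evidence score: the margin by which the observed frequency difference clears the bound. Candidate merges can then be processed in decreasing order of this score. The function names, counts, and alpha value are illustrative assumptions, not the control strategy of [29] or of our algorithm.

from math import log, sqrt

def hoeffding_bound(n1, n2, alpha=0.05):
    # Width of the Hoeffding confidence interval for comparing two frequencies.
    return sqrt(0.5 * log(2.0 / alpha)) * (1.0 / sqrt(n1) + 1.0 / sqrt(n2))

def merge_evidence(f1, n1, f2, n2, alpha=0.05):
    """Positive values support merging; larger values mean stronger evidence.
    f1/n1 and f2/n2 are the observed transition frequencies at the two states."""
    return hoeffding_bound(n1, n2, alpha) - abs(f1 / n1 - f2 / n2)

# Rank candidate state pairs by the evidence supporting their merge.
candidates = [((1, 7), merge_evidence(40, 100, 40, 90)),
              ((2, 9), merge_evidence(10, 100, 35, 90))]
candidates.sort(key=lambda c: c[1], reverse=True)
print([pair for pair, _ in candidates])  # -> [(1, 7), (2, 9)]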
Much future work remains in integrating the method into a system that supports traditional
applications. For example, the semi-structured database system Lore [31]
does generate schemas for use in query planning and optimization but performs no
generalization, effectively stopping at the prefix tree. The schemas are therefore
not necessarily compact or understandable.
In addition to traditional applications, the stochastic part of the grammar also
suggests many novel applications. For example, a system could be constructed to
assist authors in the creation of documents by flagging excessively rare structures in
the process of their creation, or listing possible next elements of partially complete
entries in order of their probability. Stochastic grammars could also be used as
structural classifiers by characterizing the authoring styles of two or more people
who use the same tag set.
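As a rough illustration of the authoring-assistant idea, the sketch below follows a partially complete entry through a stochastic model and lists the possible next elements in decreasing order of probability. The transition-table format (the same hypothetical {state: {tag: (next_state, probability)}} layout as in the earlier sketch) and the toy tag names are assumptions, not a real API, and stopping probabilities are ignored for brevity.

# Sketch: suggest the most probable next elements for a partially written entry.
def suggest_next(transitions, start_state, partial_tags):
    """Follow the partial tag sequence, then rank the possible continuations."""
    state = start_state
    for tag in partial_tags:
        state, _ = transitions[state][tag]
    options = transitions.get(state, {})
    return sorted(options, key=lambda t: options[t][1], reverse=True)

transitions = {0: {"HWGroup": (1, 1.0)},
               1: {"Sense": (1, 0.70), "Etymology": (2, 0.25), "Label": (1, 0.05)},
               2: {"Sense": (1, 1.0)}}
print(suggest_next(transitions, 0, ["HWGroup"]))  # -> ['Sense', 'Etymology', 'Label']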
Acknowledgments
Financial assistance from the Natural Sciences and Engineering Research Council
of Canada through a postgraduate scholarship, the Information Technology Research
Center (now Communications and Information Technology Ontario), and
the University of Waterloo is gratefully acknowledged.
References
Generating Grammars for Structured Documents Using Grammatical Inference Methods.
Forming grammars for structured documents: An application of grammatical inference.
Generating grammars for SGML tagged texts lacking DTD.
Structured Documents.
The research potential of the electronic OED2 database at the University of Waterloo: A guide for scholars.
A Guide to the Oxford English Dictionary: The essential companion and user's guide.
Extensible Markup Language (XML) 1.0.
Learning stochastic regular grammars by means of a state merging method.
Grammar generation and query processing for text databases.
Finite state transduction tools.
Markup systems and the future of scholarly text processing.
daVinci 1.4 User Manual.
Syntactic Pattern Recognition and Applications.
Enabling query formulation and optimization in semistructured databases.
Mind your grammar: A new approach to modelling text.
Probability inequalities for sums of bounded random variables.
Concept learning and the problem of small disjuncts.
Introduction to Automata Theory, Languages, and Computation.
Oxford Advanced Learner's Dictionary of Current English (Fourth Edition).
Hidden Markov Models for Speech Recognition.
Structuring the text of the Oxford English Dictionary through finite state transduction.
A structured document database system.
Krogh, editor. Compendium of Pharmaceuticals and Specialties.
Regular right part grammars and their parsers.
Results of the Abbadingo One DFA learning competition and a new evidence-driven state merging algorithm.
The estimation of stochastic context-free grammars using the Inside-Outside algorithm.
A database management system for semistructured data.
Wrapper induction for semistructured
Inferring regular languages in polynomial updated time.
Oxford University Press.
Visualizing text.
On the learnability and usage of acyclic probabilistic finite automata.
Statistical inductive learning of regular formal languages.
Creating DTDs via the GB-engine and Fred.
The Oxford English Dictionary.
Inducing probabilistic grammars by Bayesian model merging.
Grammatical inference: An introductory survey.
Application of a stochastic grammatical inference method to text structure.